Category Archives: CKS

Cluster Hardening – RBAC


title: “Cluster Hardening – RBAC”
date: 2020-12-09T22:19:34
slug: cluster-hardening-rbac


Create roles/clusterroles and rolebinding/clusterrolebinding
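For example, a role that allows user jane to read secrets in namespace red, plus the matching binding, could be created imperatively (names are illustrative):

k -n red create role secret-reader --verb=get --resource=secrets
k -n red create rolebinding secret-reader --role=secret-reader --user=jane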

Check it with (in a specified namespace)

k -n red auth can-i get secrets --as jane

in all namespaces

k auth can-i get secrets -A --as jane

Create a User Certificate
Create CSR

openssl req -new -newkey rsa:4096 -keyout xforze.key -out xforze.csr -nodes

Put the base64-encoded CSR (base64 -w 0 xforze.csr) into the request field and set the name:

apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: xforze
spec:
  groups:
  - system:authenticated
  request: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTdmJXVXRVM1JoZEdVeApJVEFmQmdOVkJBb01HRWx1ZEdWeQ......
  signerName: kubernetes.io/kube-apiserver-client
  usages:
  - client auth
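Then create the CSR object in the cluster (filename illustrative):

k apply -f csr.yaml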

List CertificateSigningRequests

k get certificatesigningrequests

Approve certificate with:

k certificate approve xforze

Get the Certificate:

k get certificatesigningrequests xforze -o yaml

Decode it with base64 -d, or in one step:

k get certificatesigningrequests xforze -o jsonpath='{.status.certificate}' | base64 -d
k get certificatesigningrequests xforze -o jsonpath='{.status.certificate}' | base64 -d > xforze.crt

Set the user in kubeconfig

k config set-credentials xforze --client-key=xforze.key --client-certificate=xforze.crt --embed-certs

Add a new Context

k config set-context xforze --user=xforze --cluster=kubernetes

Use the new Context

k config use-context xforze
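Requests are now executed as xforze; without a matching rolebinding they should be denied, e.g.:

k get pods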

Cluster Setup – Verify Platform Binaries


title: “Cluster Setup – Verify Platform Binaries”
date: 2020-12-09T21:52:34
slug: cluster-setup-verify-platform-binaries


Get the binary for your K8s version:
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.19.md#downloads-for-v1193

Compare the checksum from the download page with the output of:

# sha512sum kubernetes-server-linux-amd64.tar.gz
ebe86d27275a3ed1208b6db99a65cc9cf24b60fd3184b9f0fb769bc4b1b162dfd8330333fbe4a18df765a39211595101d1bb3f8671b411cb7a58a6cb8ced58b2 kubernetes-server-linux-amd64.tar.gz

Extract the archive and compare the checksums of the Kubernetes binaries with the binaries inside the container.
The whole container filesystem can be copied to local disk with:

docker cp 6757b308917d:/ container-fs
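A sketch of the comparison, assuming the archive was unpacked to ./kubernetes (the binary path inside the container filesystem is illustrative):

sha512sum kubernetes/server/bin/kube-apiserver
sha512sum container-fs/usr/local/bin/kube-apiserver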

Cluster Setup – CIS Benchmarks


title: “Cluster Setup – CIS Benchmarks”
date: 2020-12-09T21:44:09
slug: cluster-setup-cis-benchmarks


Get the PDF Document
https://github.com/cismirror/old-benchmarks-archive/blob/master/CIS_Kubernetes_Benchmark_v1.6.0.pdf

Run kube-bench for master and node (adapt the version):
https://github.com/aquasecurity/kube-bench

docker run --pid=host -v /etc:/etc:ro -v /var:/var:ro -t aquasec/kube-bench:latest master --version 1.19
docker run --pid=host -v /etc:/etc:ro -v /var:/var:ro -t aquasec/kube-bench:latest node --version 1.19

Check Docker Bench as well:
https://github.com/docker/docker-bench-security

Cluster Setup – Node Metadata Protection


title: “Cluster Setup – Node Metadata Protection”
date: 2020-12-09T21:07:02
slug: cluster-setup-node-metadata-protection


Prevent pods from querying the metadata server of the cloud provider
(curl "http://metadata.google.internal/computeMetadata/v1/instance/disks/0" -H "Metadata-Flavor: Google")

Get the IP address of the metadata server to use it in the deny network policy:

~# ping metadata.google.internal
PING metadata.google.internal (169.254.169.254) 56(84) bytes of data.

Create a network policy to deny traffic from all pods to the metadata server

# all pods in namespace cannot access metadata endpoint
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: cloud-metadata-deny
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 169.254.169.254/32

Create an allow rule which applies to Pods with the label "role: metadata-accessor"

# only pods with label are allowed to access metadata endpoint
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: cloud-metadata-allow
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: metadata-accessor
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 169.254.169.254/32

Add the Label “role=metadata-accessor” to a Pod

k label pod nginx role=metadata-accessor
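To verify, the endpoint should now only be reachable from the labeled pod (pod name illustrative; the image must contain curl):

k exec nginx -- curl -s -H "Metadata-Flavor: Google" http://169.254.169.254/computeMetadata/v1/instance/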

Cluster Setup – Secure Ingress


title: “Cluster Setup – Secure Ingress”
date: 2020-12-08T21:56:20
slug: cluster-setup-secure-ingress


Install nginx Ingress:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.41.2/deploy/static/provider/cloud/deploy.yaml

Create an Ingress:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: secure-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /service1
        pathType: Prefix
        backend:
          service:
            name: service1
            port:
              number: 80
      - path: /service2
        pathType: Prefix
        backend:
          service:
            name: service2
            port:
              number: 80

Expose two pods:

k expose pod pod1 --port 80 --name service1
k expose pod pod2 --port 80 --name service2
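If pod1/pod2 do not exist yet, they can be created beforehand, e.g.:

k run pod1 --image=nginx
k run pod2 --image=nginx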

Create Certificate:

openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -nodes

Create Secret from Certificate:

k create secret tls secure-ingress --cert=cert.pem --key=key.pem

Add TLS to ingress

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: secure-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
  - hosts:
    - https-example.foo.com
    secretName: secure-ingress
  rules:
  - http:
      paths:
      - path: /service1
        pathType: Prefix
        backend:
          service:
            name: service1
            port:
              number: 80
      - path: /service2
        pathType: Prefix
        backend:
          service:
            name: service2
            port:
              number: 80
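The TLS endpoint can then be tested by resolving the host from the tls section to the ingress controller (IP and port are illustrative):

curl -kv https://https-example.foo.com:31443/service1 --resolve https-example.foo.com:31443:192.168.100.10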

Dashboard Config


title: “Dashboard Config”
date: 2020-12-08T21:54:10
slug: dashboard-config


Arguments: https://github.com/kubernetes/dashboard/blob/master/docs/common/dashboard-arguments.md
Access Control: https://github.com/kubernetes/dashboard/tree/master/docs/user/access-control

k edit deployments -n kubernetes-dashboard kubernetes-dashboard
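Hardening is mostly done via the container args of that deployment; a sketch with flags from the arguments page linked above (values depend on the setup):

      containers:
      - name: kubernetes-dashboard
        args:
        - --auto-generate-certificates
        - --authentication-mode=token
        - --namespace=kubernetes-dashboard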

Network Policies


title: “Network Policies”
date: 2020-12-08T20:57:13
slug: create-a-default-deny-policy


podSelector: This selects particular Pods in the same namespace as the NetworkPolicy which should be allowed as ingress sources or egress destinations.
namespaceSelector: This selects particular namespaces for which all Pods should be allowed as ingress sources or egress destinations.
namespaceSelector and podSelector: A single to/from entry that specifies both namespaceSelector and podSelector selects particular Pods within particular namespaces. Be careful to use correct YAML syntax: whether the second selector starts with its own dash decides between AND and OR, as the two examples below show.

Namespace AND Pod Selector

- from:
  - namespaceSelector:
      matchLabels:
        user: alice
    podSelector:
      matchLabels:
        role: client

Namespace OR Pod Selector

- from:
  - namespaceSelector:
      matchLabels:
        user: alice
  - podSelector:
      matchLabels:
        role: client

Create a default deny policy

cat default-deny.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
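After applying it, connections between pods in the namespace should fail (pod/service names illustrative):

k apply -f default-deny.yaml
k exec frontend -- curl -m 2 backend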

Allow Traffic from Pod1 (label: “run: frontend”) to Pod2 (label: “run: backend”)
This Policy is needed to allow outgoing Traffic from Pod1 (only to Pods with label “run: backend”)

cat frontend.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: frontend
  namespace: default
spec:
  podSelector:
    matchLabels:
      run: frontend
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          run: backend
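Note: with egress restricted like this, the frontend pod can no longer resolve DNS names. A minimal sketch of an extra egress rule for DNS (standard port 53) that could be added to the policy above:

  egress:
  - ports:
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP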

This Policy is needed to allow Incoming Traffic on Pod2 (only from Pods with label “run: frontend”)

cat backend.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend
  namespace: default
spec:
  podSelector:
    matchLabels:
      run: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          run: frontend

Allow connections from Pod2 (label "run: backend") to the namespace cassandra (namespace with label "ns: cassandra").
(This works as long as no default-deny policy is applied to the namespace cassandra.)

cat backend.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend
  namespace: default
spec:
  podSelector:
    matchLabels:
      run: backend
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          run: frontend
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          ns: cassandra
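The namespaceSelector only matches if the namespace actually carries the label:

k label ns cassandra ns=cassandra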

Allow incoming traffic from a namespace with the label "ns: default" to the Pod with label "run: cassandra" in namespace cassandra

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: cassandra
  namespace: cassandra
spec:
  podSelector:
    matchLabels:
      run: cassandra
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          ns: default
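Here as well, the default namespace must carry the label referenced by the namespaceSelector:

k label ns default ns=default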