Manage Kubernetes Secrets


title: “Manage Kubernetes Secrets”
date: 2020-12-11T22:36:47
slug: manage-kubernetes-secrets


Encrypting your data in etcd

Create a new encryption config file:

head -c 32 /dev/urandom | base64
vi /etc/kubernetes/etcd/ec.yaml

apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded key>   # must decode to 16, 24 or 32 bytes (AES-128/192/256)
      - identity: {}
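A quick way to generate a valid key for the secret field and confirm that it decodes to the right length (a sketch; 32 bytes gives AES-256):

```shell
# Generate a 32-byte AES key and base64-encode it for the "secret:" field
KEY=$(head -c 32 /dev/urandom | base64 -w 0)
echo "$KEY"

# Sanity check: aescbc keys must decode to 16, 24 or 32 bytes
echo -n "$KEY" | base64 -d | wc -c
```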

Add a kube-api Parameter:

vi /etc/kubernetes/manifests/kube-apiserver.yaml
--encryption-provider-config=/etc/kubernetes/etcd/ec.yaml

Add under volumeMounts:

  - mountPath: /etc/kubernetes/etcd
    name: etcd
    readOnly: true

Add under volumes:

  - hostPath:
      path: /etc/kubernetes/etcd
      type: DirectoryOrCreate
    name: etcd

Read a secret from etcd

ETCDCTL_API=3 etcdctl --cert /etc/kubernetes/pki/apiserver-etcd-client.crt --key /etc/kubernetes/pki/apiserver-etcd-client.key --cacert /etc/kubernetes/pki/etcd/ca.crt get /registry/secrets/default/secure-ingress

Rewrite a secret (to re-encrypt it):

k get secret secure-ingress -o yaml | k replace -f -

Rewrite all Secrets; after this, remove the “- identity: {}” provider:

kubectl get secrets --all-namespaces -o json | kubectl replace -f -
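To confirm the re-encryption worked, read the value back from etcd and check for the provider prefix; encrypted entries start with k8s:enc:aescbc:v1:<keyname>:, plaintext ones with JSON (a sketch, reusing the etcdctl flags from above):

```shell
# Check whether the stored secret carries the aescbc encryption prefix
ETCDCTL_API=3 etcdctl \
  --cert /etc/kubernetes/pki/apiserver-etcd-client.crt \
  --key /etc/kubernetes/pki/apiserver-etcd-client.key \
  --cacert /etc/kubernetes/pki/etcd/ca.crt \
  get /registry/secrets/default/secure-ingress --print-value-only \
  | head -c 17 | grep -q '^k8s:enc:aescbc:v1' && echo encrypted || echo PLAINTEXT
```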

Create Buildconfig and trigger from github


title: “Create Buildconfig and trigger from github”
date: 2020-12-11T10:37:15
slug: create-buildconfig-and-trigger-from-github


kind: BuildConfig
apiVersion: build.openshift.io/v1
metadata:
  name: sec-git
  namespace: sec
  labels:
    app: sec-git
    app.kubernetes.io/component: sec-git
    app.kubernetes.io/instance: sec-git
    app.kubernetes.io/name: php
    app.kubernetes.io/part-of: sec
    app.openshift.io/runtime: php
    app.openshift.io/runtime-version: 7.3-ubi7
  annotations:
    app.openshift.io/vcs-ref: master
    app.openshift.io/vcs-uri: 'https://github.com/xforze/sec.git'
    openshift.io/generated-by: OpenShiftWebConsole
spec:
  triggers:
    - type: GitHub
      github:
        secretReference:
          name: github
    - type: GitHub
      github:
        secretReference:
          name: sec-git-github-webhook-secret
    - type: ImageChange
      imageChange:
        lastTriggeredImageID: >-
          image-registry.openshift-image-registry.svc:5000/openshift/php@sha256:4173a6d7361c1d5d1154b0d24580c4abd2954c9116b9f91e79f297689e9fd9f9
    - type: ConfigChange
  runPolicy: Serial
  source:
    type: Git
    git:
      uri: 'https://github.com/xforze/sec.git'
    contextDir: /
  strategy:
    type: Source
    sourceStrategy:
      from:
        kind: ImageStreamTag
        namespace: openshift
        name: 'php:7.3-ubi7'
  output:
    to:
      kind: ImageStreamTag
      name: 'sec-git:latest'
  resources: {}
  postCommit: {}
  nodeSelector: null
  successfulBuildsHistoryLimit: 5
  failedBuildsHistoryLimit: 5

Create the github Secret

oc create secret generic github --from-literal=WebHookSecretKey=adfawertwetwer

Get the Webhook Url:

oc describe bc sec-git

Replace <secret> with the secret value:
https://api.ocp4-thasanger.paas.pop.noris.de:6443/apis/build.openshift.io/v1/namespaces/sec/buildconfigs/sec-git/webhooks/<secret>/github

Activate the image trigger in the DeploymentConfig:

oc set triggers dc/sec --auto

WARNING: No container image registry has been configured with the server. Automatic builds and deployments may not function.


title: “WARNING: No container image registry has been configured with the server. Automatic builds and deployments may not function.”
date: 2020-12-11T09:04:19
slug: warning-no-container-image-registry-has-been-configured-with-the-server-automatic-builds-and-deployments-may-not-function


If this error message appears, check if the image registry is in managed state:

oc edit configs.imageregistry.operator.openshift.io
apiVersion: imageregistry.operator.openshift.io/v1
kind: Config
metadata:
  finalizers:
    - imageregistry.operator.openshift.io/finalizer
  generation: 3
  name: cluster
spec:
  readOnly: false
  disableRedirect: false
  requests:
    read:
      maxInQueue: 0
      maxRunning: 0
      maxWaitInQueue: 0s
    write:
      maxInQueue: 0
      maxRunning: 0
      maxWaitInQueue: 0s
  defaultRoute: true
  managementState: Managed

To let the registry use local emptyDir storage (note: images are lost whenever the registry pod restarts):

oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}'

Cluster Hardening – Upgrade Kubernetes


title: “Cluster Hardening – Upgrade Kubernetes”
date: 2020-12-10T21:54:21
slug: cluster-hardening-upgrade-kubernetes


kubectl drain
Do the upgrade
kubectl uncordon

Upgrade the cluster by one minor version

Master node:

k drain xxxxx --ignore-daemonsets
apt-cache show kubeadm | grep 1.19 (find the next version)
apt install kubeadm=xxxxx kubelet=xxxx kubectl=xxxxx
kubeadm upgrade plan
kubeadm upgrade apply v1.19.3 (use the command suggested by "upgrade plan")
systemctl restart kubelet
k uncordon xxxx

Worker node:

k drain xxxxx --ignore-daemonsets
apt-cache show kubeadm | grep 1.19
apt install kubeadm=xxxxx kubelet=xxxx kubectl=xxxxx
kubeadm upgrade node
systemctl restart kubelet
k uncordon xxxx

Cluster Hardening – Restrict API Access


title: “Cluster Hardening – Restrict API Access”
date: 2020-12-10T21:11:57
slug: cluster-hardening-restrict-api-access


Disable anonymous access (--anonymous-auth=false):
vi /etc/kubernetes/manifests/kube-apiserver.yaml

apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 10.156.0.2:6443
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
    - command:
        - kube-apiserver
        - --anonymous-auth=false
        - --advertise-address=10.156.0.2
        - --allow-privileged=true
        - --authorization-mode=Node,RBAC

Disable anonymous auth: --anonymous-auth=false
Disable the insecure port in /etc/kubernetes/manifests/kube-apiserver.yaml by setting --insecure-port=0
Disable the NodePort by removing --kubernetes-service-node-port=31000 (delete the flag or set it to 0)

Cluster Hardening – RBAC


title: “Cluster Hardening – RBAC”
date: 2020-12-09T22:19:34
slug: cluster-hardening-rbac


Create roles/clusterroles and rolebinding/clusterrolebinding
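The first step can be sketched as a Role plus RoleBinding; the names (secret-reader, user jane, namespace red) follow the example check below and are otherwise arbitrary:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: secret-reader
  namespace: red
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jane-secret-reader
  namespace: red
subjects:
  - kind: User
    name: jane
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: secret-reader
  apiGroup: rbac.authorization.k8s.io
```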

Check it (in a specific namespace) with:

k -n red auth can-i get secrets --as jane

In all namespaces:

k auth can-i get secrets -A --as jane

Create a User Certificate
Create CSR

openssl req -new -newkey rsa:4096 -keyout xforze.key -out xforze.csr -nodes
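Kubernetes takes the username from the certificate's CN, so it is worth setting and checking the subject explicitly (a sketch; CN=xforze is the assumed username here):

```shell
# Optionally skip the interactive prompts by passing the subject directly
openssl req -new -newkey rsa:4096 -nodes \
  -keyout xforze.key -out xforze.csr -subj "/CN=xforze"

# Verify the CN before submitting the CSR; Kubernetes uses it as the username
openssl req -in xforze.csr -noout -subject
```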

Put xforze.csr, encoded with base64 -w 0, into the request field and set the name:

apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: xforze
spec:
  groups:
    - system:authenticated
  request: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTdmJXVXRVM1JoZEdVeApJVEFmQmdOVkJBb01HRWx1ZEdWeW......
  signerName: kubernetes.io/kube-apiserver-client
  usages:
    - client auth

List CertificateSigningRequests

k get certificatesigningrequests

Approve certificate with:

k certificate approve xforze

Get the Certificate:

k get certificatesigningrequests xforze -o yaml

Decode it with base64 -d, or extract it directly:

k get certificatesigningrequests xforze -o jsonpath='{.status.certificate}' | base64 -d
k get certificatesigningrequests xforze -o jsonpath='{.status.certificate}' | base64 -d > xforze.crt

Set the user in kubeconfig

k config set-credentials xforze --client-key=xforze.key --client-certificate=xforze.crt --embed-certs

Add a new Context

k config set-context xforze --user=xforze --cluster=kubernetes

Use the new Context

k config use-context xforze
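After these commands, ~/.kube/config contains entries roughly like the following (a schematic excerpt; the embedded base64 blobs are elided):

```yaml
users:
  - name: xforze
    user:
      client-certificate-data: <base64 of xforze.crt>
      client-key-data: <base64 of xforze.key>
contexts:
  - name: xforze
    context:
      cluster: kubernetes
      user: xforze
current-context: xforze
```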

Cluster Setup – Verify Platform Binaries


title: “Cluster Setup – Verify Platform Binaries”
date: 2020-12-09T21:52:34
slug: cluster-setup-verify-platform-binaries


Get the binary for your K8s version
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.19.md#downloads-for-v1193

Compare the checksum from the download page with the output of:

# sha512sum kubernetes-server-linux-amd64.tar.gz
ebe86d27275a3ed1208b6db99a65cc9cf24b60fd3184b9f0fb769bc4b1b162dfd8330333fbe4a18df765a39211595101d1bb3f8671b411cb7a58a6cb8ced58b2 kubernetes-server-linux-amd64.tar.gz

Extract the file and compare the checksums of the Kubernetes binaries with the binaries inside the container.
The whole container filesystem can be extracted to the local disk with:

docker cp 6757b308917d:/ container-fs
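The comparison itself can be scripted; the two paths in the example are illustrative and depend on where the tarball was extracted and which binary is being checked:

```shell
# Print MATCH if two files have identical sha512 digests, MISMATCH otherwise
same_hash() {
  a=$(sha512sum "$1" | cut -d' ' -f1)
  b=$(sha512sum "$2" | cut -d' ' -f1)
  [ "$a" = "$b" ] && echo MATCH || echo MISMATCH
}

# e.g.:
# same_hash kubernetes/server/bin/kube-apiserver container-fs/usr/local/bin/kube-apiserver
```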

Cluster Setup – CIS Benchmarks


title: “Cluster Setup – CIS Benchmarks”
date: 2020-12-09T21:44:09
slug: cluster-setup-cis-benchmarks


Get the PDF Document
https://github.com/cismirror/old-benchmarks-archive/blob/master/CIS_Kubernetes_Benchmark_v1.6.0.pdf

Run kube-bench for master and node (adapt the version):
https://github.com/aquasecurity/kube-bench

docker run --pid=host -v /etc:/etc:ro -v /var:/var:ro -t aquasec/kube-bench:latest master --version 1.19
docker run --pid=host -v /etc:/etc:ro -v /var:/var:ro -t aquasec/kube-bench:latest node --version 1.19

Check Docker Bench as well:
https://github.com/docker/docker-bench-security

Cluster Setup – Node Metadata Protection


title: “Cluster Setup – Node Metadata Protection”
date: 2020-12-09T21:07:02
slug: cluster-setup-node-metadata-protection


Prevent pods from querying the metadata server of the cloud provider:
(curl "http://metadata.google.internal/computeMetadata/v1/instance/disks/0" -H "Metadata-Flavor: Google")

Get the IP address of the metadata server to use in the deny network policy:

~# ping metadata.google.internal
PING metadata.google.internal (169.254.169.254) 56(84) bytes of data.

Create a network policy to deny traffic from all pods to the metadata server

# all pods in namespace cannot access metadata endpoint
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: cloud-metadata-deny
  namespace: default
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 169.254.169.254/32

Create an allow rule which applies to Pods with Label: “role: metadata-accessor”

# only pods with label are allowed to access metadata endpoint
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: cloud-metadata-allow
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: metadata-accessor
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 169.254.169.254/32

Add the Label “role=metadata-accessor” to a Pod

k label pod nginx role=metadata-accessor
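To verify the policies, a throwaway pod that already carries the label can be used (a hypothetical pod spec; any image that ships curl works):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: metadata-test
  labels:
    role: metadata-accessor
spec:
  containers:
    - name: curl
      image: curlimages/curl
      command: ["sleep", "3600"]
```

A metadata request from this pod (k exec metadata-test -- curl -s -H "Metadata-Flavor: Google" http://169.254.169.254/computeMetadata/v1/) should succeed, while the same request from an unlabeled pod is blocked by cloud-metadata-deny.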