Category Archives: CKS

Create a Readonly RootFS Pod with writeable /tmp


title: “Create a Readonly RootFS Pod with writeable /tmp”
date: 2020-12-30T15:54:33
slug: create-a-readonly-rootfs-pod-with-writeable-tmp


apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: team-purple
  name: immutable-deployment
  labels:
    app: immutable-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: immutable-deployment
  template:
    metadata:
      labels:
        app: immutable-deployment
    spec:
      containers:
      - image: busybox:1.32.0
        command: ['sh', '-c', 'tail -f /dev/null']
        imagePullPolicy: IfNotPresent
        name: busybox
        securityContext:
          readOnlyRootFilesystem: true
        volumeMounts:
        - mountPath: /tmp
          name: tmp
      volumes:
      - name: tmp
        emptyDir: {}
      restartPolicy: Always

Find User Action in AuditLog (get secret)


title: “Find User Action in AuditLog (get secret)”
date: 2020-12-30T15:43:21
slug: find-user-action-in-auditlog-get-secret


cat audit.log | grep "p.auster" | grep Secret | grep list | vim -

AND

cat audit.log | grep "p.auster" | grep Secret | grep get | vim -

Under “objectRef” → “name” you find the names of the Secrets that were accessed
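If the audit log is written as JSON lines, the secret names can be pulled out directly with grep; a minimal sketch on a made-up sample event (the patterns assume the compact one-object-per-line JSON layout):

```shell
# Hypothetical audit event; real entries are one JSON object per line
cat > /tmp/audit-sample.log <<'EOF'
{"kind":"Event","verb":"get","user":{"username":"p.auster"},"objectRef":{"resource":"secrets","namespace":"restricted","name":"db-password"}}
EOF

# Extract the names of the Secrets the user accessed via "get"
grep '"username":"p.auster"' /tmp/audit-sample.log \
  | grep '"verb":"get"' \
  | grep -o '"name":"[^"]*"'
```

For the sample line this prints `"name":"db-password"`.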

List Syscalls generated by Pods


title: “List Syscalls generated by Pods”
date: 2020-12-30T14:39:56
slug: list-syscalls-gerneated-by-pods


List Pods and their Nodes:

k get pod -owide
collector1-59ddbd6c7f-ffjjv ... cluster1-worker1

SSH to the Node and check which process is running inside the container:

docker ps | grep collector1
3e07aee08a48 registry.killer.sh:5000/collector1 "./collector1-process" .........

The process is “collector1-process”; find its PID (there can be more PIDs if more containers are running):

ps aux | grep collector1-process
root 10991 0.0 0.0 2412 760 ? Ssl 22:41 0:00 ./collector1-process
root 11150 0.0 0.0 2412 756 ? Ssl 22:41 0:00 ./collector1-process

Strace the PID:

strace -p 10991

List secrets with api curl from inside a Pod


title: “List secrets with api curl from inside a Pod”
date: 2020-12-30T14:04:51
slug: list-secrets-with-api-curl-from-inside-a-pod


curl -vvvk --header "Authorization: Bearer $TOKEN" https://$APISERVER:443/api/v1/namespaces/restricted/secrets
curl -vvvk --header "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" https://$KUBERNETES_PORT_443_TCP_ADDR:443/api/v1/namespaces/squad-rtlplus-music/pods
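Every Pod mounts the service-account credentials at a fixed path, which also lets you verify the server certificate instead of skipping checks with -k; a sketch (the 10.96.0.1 fallback address is only an illustration — inside a Pod the env var is injected automatically):

```shell
# Default service-account mount path (present in every Pod)
SA=/var/run/secrets/kubernetes.io/serviceaccount
# Inside a Pod, $KUBERNETES_PORT_443_TCP_ADDR is injected; 10.96.0.1 is a placeholder
APISERVER="https://${KUBERNETES_PORT_443_TCP_ADDR:-10.96.0.1}:443"
URL="$APISERVER/api/v1/namespaces/restricted/secrets"
echo "$URL"
# From inside the Pod, verify the cert instead of using -k:
# curl --cacert $SA/ca.crt -H "Authorization: Bearer $(cat $SA/token)" "$URL"
```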

System Hardening – Kernel Hardening Tools


title: “System Hardening – Kernel Hardening Tools”
date: 2020-12-16T13:56:46
slug: system-hardening-kernel-hardening-tools


Apparmor for Containers

apt-get install apparmor
apt-get install apparmor-utils

aa-status
aa-genprof curl

curl https://google.de
aa-logprof

Install a Profile with:

apparmor_parser -q profile
apparmor_status (check if it's loaded)
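A profile is just a text file passed to apparmor_parser; a minimal sketch of a deny-write profile (the profile name and paths are made up for illustration):

```
# /etc/apparmor.d/k8s-deny-etc-writes -- hypothetical profile
#include <tunables/global>

profile k8s-deny-etc-writes flags=(attach_disconnected) {
  #include <abstractions/base>
  file,           # allow all file access ...
  deny /etc/** w, # ... except writes under /etc
}
```

A container opts in via the (beta) annotation container.apparmor.security.beta.kubernetes.io/&lt;container-name&gt;: localhost/k8s-deny-etc-writes on the Pod.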

Seccomp
Put the seccomp json file into “/var/lib/kubelet/seccomp/default.json”
(Can be downloaded from here: https://kubernetes.io/docs/tutorials/clusters/seccomp/)
Apply it with:

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: secure
  name: secure
spec:
  securityContext:
    seccompProfile:
      type: Localhost
      localhostProfile: default.json

Runtime Security – Auditing


title: “Runtime Security – Auditing”
date: 2020-12-16T08:15:32
slug: runtime-security-auditing


Enable / Configure Auditing

This audit rule logs every request at the Metadata level:

vi /etc/kubernetes/audit/policy.yaml

apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata

Enable and configure auditing in the kube-apiserver manifest:

vi /etc/kubernetes/manifests/kube-apiserver.yaml

  - --audit-policy-file=/etc/kubernetes/audit/policy.yaml # add
  - --audit-log-path=/var/log/kubernetes/audit.log        # add
  - --audit-log-maxsize=500                               # add
  - --audit-log-maxbackup=5                               # add

    volumeMounts:
    - mountPath: /etc/kubernetes/audit # add (policy)
      name: audit                      # add
      readOnly: true                   # add
    - mountPath: /var/log/kubernetes   # add (logs)
      name: audit-log                  # add

  volumes:
  - hostPath:                       # add
      path: /etc/kubernetes/audit   # add
      type: DirectoryOrCreate       # add
    name: audit                     # add
  - hostPath:                       # add
      path: /var/log/kubernetes     # add
      type: DirectoryOrCreate       # add
    name: audit-log                 # add

Some Policy Examples:
Don't log anything from stage RequestReceived (omitStages)

apiVersion: audit.k8s.io/v1 # This is required.
kind: Policy
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
 - "RequestReceived"
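Rules are evaluated top-down and the first match wins; a sketch that records Secrets only at Metadata level (so no secret data lands in the log) and everything else with full request bodies:

```
apiVersion: audit.k8s.io/v1
kind: Policy
omitStages:
  - "RequestReceived"
rules:
# Secrets only at Metadata level, so secret data never ends up in the log
- level: Metadata
  resources:
  - group: ""
    resources: ["secrets"]
# Everything else including request bodies
- level: Request
```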

Behavioral Analytics at host and container level


title: “Behavioral Analytics at host and container level”
date: 2020-12-15T20:51:52
slug: behavioral-analytics-at-host-and-container-level


Strace:

Summary of calls:

strace -cw ls /

Tracing a running process:

strace -p 2659

Follow forks/subprocesses

strace -p 2659 -f

Counting syscalls from running process (quit with ctrl+c)

strace -p 2659 -f -cw

List open Files from process 2659

ls /proc/2659/fd

Read a binary file as printable text

tail -f /var/lib/etcd/member/snap/db | strings

Show Env Vars from container (processes):

cat /proc/10287/environ

Audit with Falco:

Install Falco:

curl -s https://falco.org/repo/falcosecurity-3672BA8F.asc | apt-key add -
echo "deb https://dl.bintray.com/falcosecurity/deb stable main" | tee -a /etc/apt/sources.list.d/falcosecurity.list
apt-get update -y
apt-get -y install linux-headers-$(uname -r)
apt-get install -y falco

docs about falco

https://v1-16.docs.kubernetes.io/docs/tasks/debug-application-cluster/falco

Falco is now auditing; check its output with:

tail -f /var/log/syslog | grep falco
Example task:

1. Find a Pod running image nginx which creates unwanted package management processes inside its container.

2. Find a Pod running image httpd which modifies /etc/passwd.

Save the Falco logs for case 1 under /opt/course/2/falco.log in format [time][container-id][container-name][user-name]. No other information should be in any line. Collect the logs for at least 30 seconds.

Afterwards remove the threats (both 1 and 2) by scaling the replicas of the Deployments that control the offending Pods down to 0:
docker ps | grep 6cb6a5ae8c21
kubectl scale --replicas=0 -n team-purple deployment/rating-service

systemctl stop falco
falco | grep "Package management"
cat out.log | cut -d" " -f 9 > /opt/course/2/falco.log
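Reformatting into [time][container-id][container-name][user-name] can be done with awk once the fields are cut out; a sketch on a made-up sample line (field layout assumed — adjust -F and the field numbers to the real rule's output string):

```shell
# Made-up sample in the assumed order "time,container-id,container-name,user-name"
echo '01:02:03.123,6cb6a5ae8c21,nginx,root' \
  | awk -F, '{ printf "[%s][%s][%s][%s]\n", $1, $2, $3, $4 }'
# → [01:02:03.123][6cb6a5ae8c21][nginx][root]
```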

Supply Chain Security – Secure Supply Chain


title: “Supply Chain Security – Secure Supply Chain”
date: 2020-12-13T21:57:57
slug: supply-chain-security-secure-supply-chain


Pin Image Version to Digest Hash

k get pod -n kube-system kube-controller-manager-cks-master -oyaml | grep imageID
 imageID: k8s.gcr.io/kube-controller-manager@sha256:00ccc3a5735e82d53bc26054d594a942fae64620a6f84018c057a519ba7ed1dc

Use the imageID digest as the image location in the Pod manifest.
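For example, a Pod pinned to the digest above instead of a tag (the Pod and container names are illustrative); the image then stays immutable even if the tag is re-pushed:

```
apiVersion: v1
kind: Pod
metadata:
  name: digest-pinned # illustrative name
spec:
  containers:
  - name: kcm
    image: k8s.gcr.io/kube-controller-manager@sha256:00ccc3a5735e82d53bc26054d594a942fae64620a6f84018c057a519ba7ed1dc
```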

Whitelist Image Registries
Create ConstraintTemplate

apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8strustedimages
spec:
  crd:
    spec:
      names:
        kind: K8sTrustedImages
  targets:
  - target: admission.k8s.gatekeeper.sh
    rego: |
      package k8strustedimages

      violation[{"msg": msg}] {
        image := input.review.object.spec.containers[_].image
        not startswith(image, "docker.io/")
        not startswith(image, "k8s.gcr.io/")
        msg := "not trusted image!"
      }

Apply it to all Pods:

apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sTrustedImages
metadata:
  name: pod-trusted-images
spec:
  match:
    kinds:
    - apiGroups: [""]
      kinds: ["Pod"]

Create ImagePolicyWebhook

In /etc/kubernetes/manifests/kube-apiserver.yaml add "--admission-control-config-file=/etc/kubernetes/admission/admission_config.yaml" and enable the ImagePolicyWebhook admission plugin:

spec:
  containers:
  - command:
    - kube-apiserver
    - --admission-control-config-file=/etc/kubernetes/admission/admission_config.yaml
    - --advertise-address=10.156.0.6
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction,ImagePolicyWebhook

Now there is an error in the apiserver log:

2020-12-14T08:16:28.11843166Z stderr F Error: failed to initialize admission: couldn't init admission plugin "ImagePolicyWebhook": no config specified

Specify a Configuration:

mkdir /etc/kubernetes/admission
vi /etc/kubernetes/admission/admission_config.yaml

apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: ImagePolicyWebhook
  configuration:
    imagePolicy:
      kubeConfigFile: /etc/kubernetes/admission/kubeconf
      allowTTL: 50
      denyTTL: 50
      retryBackoff: 500
      defaultAllow: false

vi /etc/kubernetes/admission/kubeconf

apiVersion: v1
kind: Config

# clusters refers to the remote service.
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/admission/external-cert.pem # CA for verifying the remote service.
    server: https://external-service:1234/check-image # URL of remote service to query. Must use 'https'.
  name: image-checker

contexts:
- context:
    cluster: image-checker
    user: api-server
  name: image-checker
current-context: image-checker
preferences: {}

# users refers to the API server's webhook configuration.
users:
- name: api-server
  user:
    client-certificate: /etc/kubernetes/admission/apiserver-client-cert.pem # cert for the webhook admission controller to use
    client-key: /etc/kubernetes/admission/apiserver-client-key.pem # key matching the cert

Create the required certificates for the external image checker.

Mount the admission Directory into the Pod:

vi kube-apiserver.yaml

    volumeMounts:
    - mountPath: /etc/kubernetes/admission
      name: k8s-admission
      readOnly: true

  volumes:
  - hostPath:
      path: /etc/kubernetes/admission
      type: DirectoryOrCreate
    name: k8s-admission

Image Vulnerability Scanning


title: “Image Vulnerability Scanning”
date: 2020-12-13T21:14:59
slug: image-vulnerability-scanning


Scan Images with trivy

docker run ghcr.io/aquasecurity/trivy:latest image nginx:latest

(Better to pin a specific version than latest.)