
Behavioral Analytics at host and container level


title: “Behavioral Analytics at host and container level”
date: 2020-12-15T20:51:52
slug: behavioral-analytics-at-host-and-container-level


Strace:

Summary of calls:

strace -cw ls /

Tracing a running process:

strace -p 2659

Follow forks/subprocesses:

strace -p 2659 -f

Count syscalls of a running process (quit with Ctrl+C):

strace -p 2659 -f -cw

List open files of process 2659:

ls /proc/2659/fd

Read a binary file as text (e.g. the etcd database):

tail -f /var/lib/etcd/member/snap/db | strings

Show environment variables of a container process:

cat /proc/10287/environ
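
The entries in environ are NUL-separated; to print one variable per line:

tr '\0' '\n' < /proc/10287/environ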

Audit with Falco:

Install Falco:

curl -s https://falco.org/repo/falcosecurity-3672BA8F.asc | apt-key add -
echo "deb https://dl.bintray.com/falcosecurity/deb stable main" | tee -a /etc/apt/sources.list.d/falcosecurity.list
apt-get update -y
apt-get -y install linux-headers-$(uname -r)
apt-get install -y falco

Falco docs:

https://v1-16.docs.kubernetes.io/docs/tasks/debug-application-cluster/falco

Falco is now auditing syscalls; check its output with:

tail -f /var/log/syslog | grep falco
Tasks:

1. Find a Pod running image nginx which creates unwanted package management processes inside its container.
2. Find a Pod running image httpd which modifies /etc/passwd.

Save the Falco logs for case 1 under /opt/course/2/falco.log in the format [time][container-id][container-name][user-name] (see the rule-output sketch below). No other information should be in any line. Collect the logs for at least 30 seconds.

Afterwards remove the threats (both 1 and 2) by scaling the replicas of the Deployments that control the offending Pods down to 0:
docker ps | grep 6cb6a5ae8c21
kubectl scale --replicas=0 -n team-purple deployment/rating-service

systemctl stop falco
falco | grep "Package management" > out.log
cat out.log | cut -d" " -f 9 > /opt/course/2/falco.log
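
Alternatively, the [time][container-id][container-name][user-name] format can be produced by adjusting the output line of the rule that fires (a sketch; the rule name and the %-fields are assumed from Falco's default ruleset and field reference):

# /etc/falco/falco_rules.yaml - rule "Launch Package Management Process in Container"
# change only the output line, leave the condition untouched:
output: "%evt.time %container.id %container.name %user.name"

# then run Falco in the foreground for at least 30 seconds and keep the matching lines
falco | grep "Package management" > /opt/course/2/falco.log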

Supply Chain Security – Secure Supply Chain


title: “Supply Chain Security – Secure Supply Chain”
date: 2020-12-13T21:57:57
slug: supply-chain-security-secure-supply-chain


Pin Image Version to Digest Hash

k get pod -n kube-system kube-controller-manager-cks-master -oyaml | grep imageID
 imageID: k8s.gcr.io/kube-controller-manager@sha256:00ccc3a5735e82d53bc26054d594a942fae64620a6f84018c057a519ba7ed1dc

Use the imageID digest as the image reference in the Pod manifest.
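
For example (a sketch reusing the digest from above; Pod and container names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: digest-pinned
spec:
  containers:
  - name: kube-controller-manager
    image: k8s.gcr.io/kube-controller-manager@sha256:00ccc3a5735e82d53bc26054d594a942fae64620a6f84018c057a519ba7ed1dc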

Whitelist Image Registries
Create ConstraintTemplate

apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8strustedimages
spec:
  crd:
    spec:
      names:
        kind: K8sTrustedImages
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8strustedimages

        violation[{"msg": msg}] {
          image := input.review.object.spec.containers[_].image
          not startswith(image, "docker.io/")
          not startswith(image, "k8s.gcr.io/")
          msg := "not trusted image!"
        }

Apply it to all Pods:

apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sTrustedImages
metadata:
  name: pod-trusted-images
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
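
A quick way to see the effect once both objects are applied (note the Rego checks the literal image string):

kubectl run nginx --image=nginx              # denied - "nginx" does not start with docker.io/ or k8s.gcr.io/
kubectl run nginx --image=docker.io/nginx    # allowed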

Create ImagePolicyWebhook

In /etc/kubernetes/manifests/kube-apiserver.yaml add --admission-control-config-file=/etc/kubernetes/admission/admission_config.yaml and enable the ImagePolicyWebhook admission plugin:

spec:
  containers:
  - command:
    - kube-apiserver
    - --admission-control-config-file=/etc/kubernetes/admission/admission_config.yaml
    - --advertise-address=10.156.0.6
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction,ImagePolicyWebhook

Now there is an error in the apiserver log:

2020-12-14T08:16:28.11843166Z stderr F Error: failed to initialize admission: couldn't init admission plugin "ImagePolicyWebhook": no config specified

Specify a Configuration:

mkdir /etc/kubernetes/admission
vi /etc/kubernetes/admission/admission_config.yaml

apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
  - name: ImagePolicyWebhook
    configuration:
      imagePolicy:
        kubeConfigFile: /etc/kubernetes/admission/kubeconf
        allowTTL: 50
        denyTTL: 50
        retryBackoff: 500
        defaultAllow: false

vi /etc/kubernetes/admission/kubeconf

apiVersion: v1
kind: Config

# clusters refers to the remote service.
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/admission/external-cert.pem  # CA for verifying the remote service.
    server: https://external-service:1234/check-image                   # URL of remote service to query. Must use 'https'.
  name: image-checker

contexts:
- context:
    cluster: image-checker
    user: api-server
  name: image-checker
current-context: image-checker
preferences: {}

# users refers to the API server's webhook configuration.
users:
- name: api-server
  user:
    client-certificate: /etc/kubernetes/admission/apiserver-client-cert.pem  # cert for the webhook admission controller to use
    client-key: /etc/kubernetes/admission/apiserver-client-key.pem           # key matching the cert

Create the required certificates for the external image checker.
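
A rough openssl sketch (assumptions: a self-signed CA and file names matching the kubeconf above; the real CSR/signing flow depends on the external service):

openssl genrsa -out external-ca.key 2048
openssl req -x509 -new -key external-ca.key -subj "/CN=external-service" -days 365 -out external-cert.pem
openssl genrsa -out apiserver-client-key.pem 2048
openssl req -new -key apiserver-client-key.pem -subj "/CN=api-server" -out apiserver-client.csr
openssl x509 -req -in apiserver-client.csr -CA external-cert.pem -CAkey external-ca.key -CAcreateserial -days 365 -out apiserver-client-cert.pem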

Mount the admission directory into the apiserver Pod:

vi kube-apiserver.yaml
  volumeMounts:
  - mountPath: /etc/kubernetes/admission
    name: k8s-admission
    readOnly: true

  volumes:
  - hostPath:
      path: /etc/kubernetes/admission
      type: DirectoryOrCreate
    name: k8s-admission
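
After the apiserver restarts, the webhook can be sanity-checked: with defaultAllow: false and the external service unreachable, every new Pod should be rejected (pod name is illustrative):

kubectl run webhook-test --image=nginx
# denied by the ImagePolicyWebhook because the backend at https://external-service:1234 cannot be reached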

Image Vulnerability Scanning


title: “Image Vulnerability Scanning”
date: 2020-12-13T21:14:59
slug: image-vulnerability-scanning


Scan Images with trivy

docker run ghcr.io/aquasecurity/trivy:latest image nginx:latest

(Better to pin specific versions than latest.)
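
For example (the tags here are only illustrative):

docker run ghcr.io/aquasecurity/trivy:0.17.2 image nginx:1.19.6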

Supply Chain Security – Static Analysis


title: “Supply Chain Security – Static Analysis”
date: 2020-12-13T20:37:33
slug: supply-chain-security-static-analysis


Check your yaml file with Kubesec:

docker run -i kubesec/kubesec:512c5e0 scan /dev/stdin < pod.yaml

OPA Conftest: containers are not allowed to run as root.
Create a policy file:

$ cat policy/deployment.rego
# from https://www.conftest.dev
package main

deny[msg] {
  input.kind = "Deployment"
  not input.spec.template.spec.securityContext.runAsNonRoot = true
  msg = "Containers must not run as root"
}

deny[msg] {
  input.kind = "Deployment"
  not input.spec.selector.matchLabels.app
  msg = "Containers must provide app label for pod selectors"
}
docker run --rm -v $(pwd):/project instrumenta/conftest test deploy.yaml
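
A minimal deploy.yaml that satisfies both rules might look like this (names are only illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      securityContext:
        runAsNonRoot: true
      containers:
      - name: web
        image: nginx:1.19.6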

Reduce Image Footprint with Multi-Stage


title: “Reduce Image Footprint with Multi-Stage”
date: 2020-12-13T20:14:19
slug: reduce-image-footprint-with-mulit-stage


Use multi-stage builds

With multi-stage builds, you use multiple FROM statements in your Dockerfile. Each FROM instruction can use a different base, and each of them begins a new stage of the build. You can selectively copy artifacts from one stage to another, leaving behind everything you don’t want in the final image. The Dockerfile below shows how this works.

Dockerfile:

# build container stage 1
FROM ubuntu
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y golang-go
COPY app.go .
RUN CGO_ENABLED=0 go build app.go

# app container stage 2
FROM alpine
COPY --from=0 /app .
CMD ["./app"]

Use specific Versions:

# build container stage 1
FROM ubuntu
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y golang-go
COPY app.go .
RUN CGO_ENABLED=0 go build app.go

# app container stage 2
FROM alpine:3.11.6
COPY --from=0 /app .
CMD ["./app"]

Don't run as root:

# build container stage 1
FROM ubuntu:20.04
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y golang-go=2:1.13~1ubuntu2
COPY app.go .
RUN pwd
RUN CGO_ENABLED=0 go build app.go

# app container stage 2
FROM alpine:3.12.0
RUN addgroup -S appgroup && adduser -S appuser -G appgroup -h /home/appuser
COPY --from=0 /app /home/appuser/
USER appuser
CMD ["/home/appuser/app"]

Make File Systems Read only (chmod a-w /etc)

# build container stage 1
FROM ubuntu:20.04
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y golang-go=2:1.13~1ubuntu2
COPY app.go .
RUN pwd
RUN CGO_ENABLED=0 go build app.go

# app container stage 2
FROM alpine:3.12.0
RUN chmod a-w /etc
RUN addgroup -S appgroup && adduser -S appuser -G appgroup -h /home/appuser
COPY --from=0 /app /home/appuser/
USER appuser
CMD ["/home/appuser/app"]

Remove Shell Access

# build container stage 1
FROM ubuntu:20.04
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y golang-go=2:1.13~1ubuntu2
COPY app.go .
RUN pwd
RUN CGO_ENABLED=0 go build app.go

# app container stage 2
FROM alpine:3.12.0
RUN addgroup -S appgroup && adduser -S appuser -G appgroup -h /home/appuser
RUN rm -rf /bin/*
COPY --from=0 /app /home/appuser/
USER appuser
CMD ["/home/appuser/app"]

Open Policy Agent (OPA)


title: “Open Policy Agent (OPA)”
date: 2020-12-13T18:08:39
slug: open-policy-agent-opa


Install OPA: kubectl create -f https://raw.githubusercontent.com/killer-sh/cks-course-environment/master/course-content/opa/gatekeeper.yaml
Create DenyAll Policy for Pods: https://github.com/killer-sh/cks-course-environment/tree/master/course-content/opa/deny-all
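
The linked deny-all example boils down to a violation rule that always fires; a minimal sketch (resource names here are illustrative, not verified against the repo):

apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8sdenyall
spec:
  crd:
    spec:
      names:
        kind: K8sDenyAll
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8sdenyall
        violation[{"msg": msg}] {
          msg := "ACCESS DENIED!"
        }
---
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sDenyAll
metadata:
  name: pod-deny-all
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]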

https://play.openpolicyagent.org

https://github.com/BouweCeunen/gatekeeper-policies

https://github.com/open-policy-agent/gatekeeper-library/tree/master/library/general
https://github.com/open-policy-agent/gatekeeper/tree/master/demo/basic

Example: require memory limits:

cat requiredresources-template.yaml
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8srequiredresources
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredResources
        listKind: K8sRequiredResourcesList
        plural: k8srequiredresources
        singular: k8srequiredresources
      validation:
        # Schema for the `parameters` field
        openAPIV3Schema:
          properties:
            requests_cpu:
              type: string
            requests_memory:
              type: string
            limits_cpu:
              type: string
            limits_memory:
              type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |

        package k8srequiredresources

        violation[{"msg": msg}] {
          container := input.review.object.spec.containers[_]
          #not container.resources.limits
          not container.resources.limits.memory
          msg := sprintf("container <%v> has no memory limits", [container.name])
        }
cat resources-policy.yml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredResources
metadata:
  name: resources-policy
spec:
  match:
    kinds:
      - apiGroups: ["batch", "extensions", "apps", ""]
        kinds: ["Deployment", "Pod", "CronJob", "Job", "StatefulSet", "DaemonSet"]
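
Once both objects are applied, a Pod without memory limits should be rejected with the message from the Rego above (pod name is illustrative):

kubectl run test --image=nginx
# rejected by Gatekeeper: container <test> has no memory limits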

OS Level Security Domains


title: “OS Level Security Domains”
date: 2020-12-13T16:05:21
slug: os-level-security-domains


Enable the PodSecurityPolicy admission plugin in /etc/kubernetes/manifests/kube-apiserver.yaml (add PodSecurityPolicy to --enable-admission-plugins):

spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=10.156.0.6
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction,PodSecurityPolicy

Create a PodSecurityPolicy:

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: example
spec:
  allowPrivilegeEscalation: false
  privileged: false
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'

Create a Role and assign it to the default SA:

kubectl create role psp-access --verb=use --resource=podsecuritypolicies
kubectl create rolebinding psp-access --role=psp-access --serviceaccount=default:default
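
Whether the default ServiceAccount is now allowed to use the PSP can be checked with (PSP name taken from the example above):

kubectl auth can-i use podsecuritypolicies/example --as=system:serviceaccount:default:default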

Assign it to all ServiceAccounts in Namespace team-red:
kubectl create rolebinding psp-mount --clusterrole=psp-mount --group=system:serviceaccounts -n team-red

Install containerd for Sandbox Container


title: “Install containerd for Sandbox Container”
date: 2020-12-13T15:19:14
slug: install-containerd-for-sandbox-container


Install gVisor

curl -fsSL https://gvisor.dev/archive.key | sudo apt-key add -
sudo add-apt-repository "deb https://storage.googleapis.com/gvisor/releases release main"
sudo apt-get update && sudo apt-get install -y runsc

Install containerd

wget https://github.com/containerd/containerd/releases/download/v1.4.3/containerd-1.4.3-linux-amd64.tar.gz
tar -xzvf containerd-1.4.3-linux-amd64.tar.gz
cp bin/* /usr/local/bin
cd /
wget https://github.com/containerd/containerd/releases/download/v1.4.3/cri-containerd-cni-1.4.3-linux-amd64.tar.gz
tar -xzvf cri-containerd-cni-1.4.3-linux-amd64.tar.gz   # extract into /
cp /etc/systemd/system/containerd.service /lib/systemd/system
systemctl enable containerd
mkdir /etc/containerd/
cat <<EOF | sudo tee /etc/containerd/config.toml
disabled_plugins = ["restart"]
[plugins.linux]
shim_debug = true
[plugins.cri.containerd.runtimes.runsc]
runtime_type = "io.containerd.runsc.v1"
EOF
systemctl restart containerd

Install crictl

wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.13.0/crictl-v1.13.0-linux-amd64.tar.gz
tar xf crictl-v1.13.0-linux-amd64.tar.gz
sudo mv crictl /usr/local/bin
cat <<EOF | sudo tee /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
EOF
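
A quick sanity check that crictl is talking to containerd:

crictl ps
crictl pull nginx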

Install Kubernetes

apt-get update && apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
kubeadm init --pod-network-cidr=172.16.0.0/16 --service-cidr=172.17.0.0/18

Configure kubelet for containerd

cat <<EOF | sudo tee /etc/systemd/system/kubelet.service.d/0-containerd.conf
[Service]
Environment="KUBELET\_EXTRA\_ARGS=--container-runtime=remote --runtime-request-timeout=15m --container-runtime-endpoint=unix:///run/containerd/containerd.sock"
EOF
systemctl daemon-reload
systemctl restart kubelet
kubectl taint nodes --all node-role.kubernetes.io/master-

Install the RuntimeClass for gVisor:

cat <<EOF | kubectl apply -f -
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc
EOF

Create a Pod with the gVisor RuntimeClass:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx-gvisor
spec:
  runtimeClassName: gvisor
  containers:
  - name: nginx
    image: nginx
EOF
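
To confirm the Pod is really sandboxed, the kernel it sees should be gVisor's, not the host's (a quick check, assuming dmesg is available in the image):

kubectl exec nginx-gvisor -- dmesg
# gVisor's sentry prints its own boot messages here instead of the host kernel log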

Container Runtime Sandboxes


title: “Container Runtime Sandboxes”
date: 2020-12-12T13:16:53
slug: container-runtime-sandboxes


Run a container with its own kernel/runtime.
Check which container runtime the nodes are running:

# k get node -o wide
NAME         STATUS   ROLES    AGE    VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION   CONTAINER-RUNTIME
cks-master   Ready    master   4d1h   v1.19.3   10.156.0.2    <none>        Ubuntu 18.04.5 LTS   5.4.0-1030-gcp   docker://19.3.6
cks-worker   Ready    <none>   4d1h   v1.19.3   10.156.0.3    <none>        Ubuntu 18.04.5 LTS   5.4.0-1030-gcp   docker://19.3.6

Install gVisor/runsc

curl -fsSL https://gvisor.dev/archive.key | sudo apt-key add -
sudo add-apt-repository "deb https://storage.googleapis.com/gvisor/releases release main"
sudo apt-get update && sudo apt-get install -y runsc

cat <<EOF > /etc/default/kubelet
KUBELET_EXTRA_ARGS="--container-runtime remote --container-runtime-endpoint unix:///run/containerd/containerd.sock"
EOF
systemctl daemon-reload
systemctl restart kubelet

cat <<EOF > /etc/containerd/config.toml
disabled_plugins = ["restart"]
[plugins.linux]
  shim_debug = true
[plugins.cri.containerd.runtimes.runsc]
  runtime_type = "io.containerd.runsc.v1"
EOF

# crictl should use containerd as default
cat <<EOF > /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
EOF
systemctl restart containerd

Check again which container runtime the nodes are running:

# k get node -o wide
NAME         STATUS   ROLES    AGE    VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION   CONTAINER-RUNTIME
cks-master   Ready    master   4d1h   v1.19.3   10.156.0.2    <none>        Ubuntu 18.04.5 LTS   5.4.0-1030-gcp   docker://19.3.6
cks-worker   Ready    <none>   4d1h   v1.19.3   10.156.0.3    <none>        Ubuntu 18.04.5 LTS   5.4.0-1030-gcp   containerd://1.3.3

Create a RuntimeClass:

cat <<EOF | kubectl apply -f -
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc
EOF