Category Archives: Kubernetes

Automatic Storage Provision


title: “Automatic Storage Provision”
date: 2018-08-24T12:54:32
slug: automatic-storage-provision


Create a Storage Class (it will be the default one):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  name: fast-disks
provisioner: kubernetes.io/no-provisioner
reclaimPolicy: Delete
volumeBindingMode: Immediate
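
Applying the class and checking that it is marked as the default (a sketch, assuming the manifest is saved as storageclass.yaml):

kubectl apply -f storageclass.yaml
kubectl get storageclass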

Create the Storage Directories on the Host

mkdir /mnt/disks
for vol in vol1 vol2 vol3; do
  mkdir /mnt/disks/$vol
  mount -t tmpfs $vol /mnt/disks/$vol
done
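
The tmpfs mounts above do not survive a reboot; one way to make them permanent is an /etc/fstab entry per volume (a sketch with an assumed size of 10G each):

vol1 /mnt/disks/vol1 tmpfs size=10G 0 0
vol2 /mnt/disks/vol2 tmpfs size=10G 0 0
vol3 /mnt/disks/vol3 tmpfs size=10G 0 0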

Create the local persistent volumes (the provisioner Pod presents each storage directory as a PV).
The provisioner is from https://github.com/kubernetes-incubator/external-storage/tree/master/
The values can be adapted in values.yaml.

helm template ./helm/provisioner > ./provisioner/deployment/kubernetes/provisioner_generated.yaml
kubectl create -f ./provisioner/deployment/kubernetes/provisioner_generated.yaml
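
To verify that the provisioner is running and has exposed the directories as PVs (a sketch; the exact pod name depends on the chart values):

kubectl get pods | grep provisioner
kubectl get pv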

The PVs can be claimed through the storage class fast-disks.
The size of a PV must be equal to or greater than the claimed size.
The PV size can be set via the mounted tmpfs size (e.g. mount -t tmpfs -o size=10G vol3 /DATA/vol3).
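
A claim against the class then looks like this minimal sketch (the claim name is hypothetical):

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: fast-disk-claim
spec:
  storageClassName: fast-disks
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi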

Local Storage example


title: “Local Storage example”
date: 2018-08-23T13:19:11
slug: local-storage-example


kind: PersistentVolume
apiVersion: v1
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: fast-disks
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/disks"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: task-pv-claim
spec:
  storageClassName: fast-disks
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
---
kind: Pod
apiVersion: v1
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage
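
To try it out, apply the manifests and serve a test page through the mounted volume (a sketch, assuming the manifests are saved as local-storage.yaml):

kubectl apply -f local-storage.yaml
kubectl get pvc task-pv-claim
kubectl exec task-pv-pod -- sh -c 'echo hello > /usr/share/nginx/html/index.html'
kubectl port-forward pod/task-pv-pod 8080:80 &
curl http://localhost:8080/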

Install Gitlab as Chart


title: “Install Gitlab as Chart”
date: 2018-08-23T07:37:55
slug: install-gitlab-as-chart


Install Helm & Tiller with a tiller SA

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system

kubectl create -f rbac-config.yaml
helm init --service-account tiller
helm dependency update
helm init
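
To confirm that Tiller runs with the service account (a sketch; tiller-deploy is the deployment name helm init normally creates in kube-system):

kubectl -n kube-system get deployment tiller-deploy -o jsonpath='{.spec.template.spec.serviceAccountName}'
helm version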

Install Gitlab

helm install --name gitlab --set externalUrl=http://192.168.56.5/ stable/gitlab-ce
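
Checking the release afterwards (a sketch; the label selector assumes the chart's usual release label):

helm status gitlab
kubectl get pods -l release=gitlab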

Install gitlab runner

helm repo add gitlab https://charts.gitlab.io
git clone https://gitlab.com/charts/gitlab-runner.git
vi gitlab-runner/values.yaml
helm install --namespace gitlab --name gitlab-runner -f values.yaml gitlab/gitlab-runner
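
The values usually edited in values.yaml are the GitLab URL and the runner registration token (a sketch with hypothetical values; the token is taken from the GitLab admin runners page):

gitlabUrl: http://192.168.56.5/
runnerRegistrationToken: "<registration-token>"
rbac:
  create: true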

With a local values file

helm repo add stable https://kubernetes-charts.storage.googleapis.com/
helm install --name gitlab -f values.yaml stable/gitlab-ce
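
The same externalUrl that was passed with --set above can live in the local values.yaml instead (a minimal sketch):

externalUrl: http://192.168.56.5/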

Create SA and Token for Dashboard


title: “Create SA and Token for Dashboard”
date: 2018-08-22T14:41:07
slug: create-sa-and-token-for-dashboard


Create Service Account

First we create a ServiceAccount named admin-user in the namespace kube-system.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system

Create ClusterRoleBinding

In most cases, after provisioning the cluster with kops, kubeadm, or another popular tool, the admin role already exists in the cluster. We can use it and only create a ClusterRoleBinding for our ServiceAccount.

NOTE: The apiVersion of the ClusterRoleBinding resource may differ between Kubernetes versions. Starting with v1.8 it was promoted to rbac.authorization.k8s.io/v1.

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

Bearer Token

Now we need to find the token we can use to log in. Execute the following command:

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
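
The token is printed at the end of the secret description; alternatively it can be extracted and decoded directly (a sketch):

kubectl -n kube-system get secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') -o jsonpath='{.data.token}' | base64 -d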

Persistent Storage (NFS)


title: “Persistent Storage (NFS)”
date: 2018-05-23T09:31:15
slug: persistent-storage-nfs


apiVersion: v1
kind: PersistentVolume
metadata:
  name: froxlor-home
  labels:
    key: froxlor-home
spec:
  capacity:
    storage: 500Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow
  mountOptions:
    - hard
    - nfsvers=4.1
    - noexec
  nfs:
    path: /voln160751a1
    server: 46.38.248.210
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim-froxlor-home
spec:
  accessModes:
    - ReadWriteMany
  volumeMode: Filesystem
  resources:
    requests:
      storage: 500Gi
  storageClassName: slow
  selector:
    matchExpressions:
      - {key: key, operator: In, values: [froxlor-home]}

The PVC is matched via the selector (froxlor-home).
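
A Pod consuming the claim could look like this sketch (pod name, image, and mount path are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: froxlor
spec:
  containers:
    - name: froxlor
      image: nginx
      volumeMounts:
        - mountPath: /var/customers
          name: froxlor-home
  volumes:
    - name: froxlor-home
      persistentVolumeClaim:
        claimName: claim-froxlor-home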

create a proxy URL


title: “create a proxy URL”
date: 2018-03-23T15:30:50
slug: create-a-proxy-url


https://173.212.228.153:6443/api/v1/namespaces/gitlab-managed-apps/services/prometheus-prometheus-kube-state-metrics:80/proxy
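
The URL follows the API server's service proxy scheme; a local kubectl proxy can be used instead of talking to port 6443 directly (a sketch):

https://<apiserver>:6443/api/v1/namespaces/<namespace>/services/<service-name>:<port>/proxy/

kubectl proxy
curl http://localhost:8001/api/v1/namespaces/gitlab-managed-apps/services/prometheus-prometheus-kube-state-metrics:80/proxy/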

OpenVPN for Kubernetes


title: “OpenVPN for Kubernetes”
date: 2018-02-27T13:40:03
slug: openvpn-fur-kubernetes


apiVersion: v1
kind: Service
metadata:
  labels:
    chart: openvpn-2.0.2
    type: openvpn
  name: openvpn
  namespace: default
spec:
  ports:
    - name: openvpn
      nodePort: 30203
      port: 443
      protocol: TCP
      targetPort: 443
  selector:
    app: openvpn
  sessionAffinity: None
  type: LoadBalancer

apiVersion: v1
data:
  configure.sh: |-
    #!/bin/sh
    /etc/openvpn/setup/setup-certs.sh
    iptables -t nat -A POSTROUTING -s 10.240.0.0/255.255.0.0 -o eth0 -j MASQUERADE
    mkdir -p /dev/net
    if [ ! -c /dev/net/tun ]; then
      mknod /dev/net/tun c 10 200
    fi

    if [ "$DEBUG" == "1" ]; then
      echo ========== ${OVPN_CONFIG} ==========
      cat "${OVPN_CONFIG}"
      echo ====================================
    fi
    IP=$(ip route get 8.8.8.8 | awk '/8.8.8.8/ {print $NF}')
    BASEIP=$(echo $IP | cut -d"." -f1-3)
    NETWORK=$(echo $BASEIP".0")
    DNS=$(cat /etc/resolv.conf | grep -v '^#' | grep nameserver | awk '{print $2}')
    SEARCH=$(cat /etc/resolv.conf | grep -v '^#' | grep search | awk '{$1=""; print $0}')
    cp -f /etc/openvpn/setup/openvpn.conf /etc/openvpn/
    sed 's|OVPN_K8S_SEARCH|'"${SEARCH}"'|' -i /etc/openvpn/openvpn.conf
    sed 's|OVPN_K8S_DNS|'"${DNS}"'|' -i /etc/openvpn/openvpn.conf
    sed 's|NETWORK|'"${NETWORK}"'|' -i /etc/openvpn/openvpn.conf

    openvpn --config /etc/openvpn/openvpn.conf
  newClientCert.sh: |-
    #!/bin/bash
    EASY_RSA_LOC="/etc/openvpn/certs"
    cd $EASY_RSA_LOC
    MY_IP_ADDR="$2"
    ./easyrsa build-client-full $1 nopass
    cat >${EASY_RSA_LOC}/pki/$1.ovpn <<EOF
    <key>
    $(cat ${EASY_RSA_LOC}/pki/private/$1.key)
    </key>
    <cert>
    $(cat ${EASY_RSA_LOC}/pki/issued/$1.crt)
    </cert>
    <ca>
    $(cat ${EASY_RSA_LOC}/pki/ca.crt)
    </ca>
    <dh>
    $(cat ${EASY_RSA_LOC}/pki/dh.pem)
    </dh>
    remote ${MY_IP_ADDR} 443 tcp
    EOF
    cat pki/$1.ovpn
  openvpn.conf: |-
    server 10.240.0.0 255.255.0.0
    verb 3
    key /etc/openvpn/certs/pki/private/server.key
    ca /etc/openvpn/certs/pki/ca.crt
    cert /etc/openvpn/certs/pki/issued/server.crt
    dh /etc/openvpn/certs/pki/dh.pem

    key-direction 0
    keepalive 10 60
    persist-key
    persist-tun

    proto tcp
    port 443
    dev tun0
    status /tmp/openvpn-status.log

    user nobody
    group nogroup

    push "route NETWORK 255.255.240.0"

    push "route 10.0.0.0 255.0.0.0"

    push "dhcp-option DOMAIN OVPN_K8S_SEARCH"
    push "dhcp-option DNS OVPN_K8S_DNS"
  setup-certs.sh: |-
    #!/bin/bash
    EASY_RSA_LOC="/etc/openvpn/certs"
    SERVER_CERT="${EASY_RSA_LOC}/pki/issued/server.crt"
    if [ -e "$SERVER_CERT" ]
    then
      echo "found existing certs - reusing"
    else
      cp -R /usr/share/easy-rsa/* $EASY_RSA_LOC
      cd $EASY_RSA_LOC
      ./easyrsa init-pki
      echo "ca
    " | ./easyrsa build-ca nopass
      ./easyrsa build-server-full server nopass
      ./easyrsa gen-dh
    fi
kind: ConfigMap
metadata:
  name: openvpn
  namespace: default

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  generation: 1
  labels:
    chart: openvpn-2.0.2
    heritage: Tiller
    release: messy-coral
  name: openvpn
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: openvpn
      chart: openvpn-2.0.2
      heritage: Tiller
      release: messy-coral
      type: openvpn
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: openvpn
        chart: openvpn-2.0.2
        heritage: Tiller
        release: messy-coral
        type: openvpn
    spec:
      containers:
        - command:
            - /etc/openvpn/setup/configure.sh
          image: jfelten/openvpn-docker:1.1.0
          imagePullPolicy: IfNotPresent
          name: openvpn
          ports:
            - containerPort: 443
              name: openvpn
              protocol: TCP
          resources:
            limits:
              cpu: 300m
              memory: 128Mi
            requests:
              cpu: 300m
              memory: 128Mi
          securityContext:
            capabilities:
              add:
                - NET_ADMIN
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /etc/openvpn/setup
              name: openvpn
            - mountPath: /etc/openvpn/certs
              name: certs
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
        - configMap:
            defaultMode: 509
            name: openvpn
          name: openvpn
        - hostPath:
            path: /etc/openvpn/certs
          name: certs
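
A client configuration can then be generated with the newClientCert.sh from the ConfigMap, roughly as follows (a sketch; the client name and external IP are placeholders):

POD=$(kubectl -n default get pods -l app=openvpn -o jsonpath='{.items[0].metadata.name}')
kubectl -n default exec $POD -- /etc/openvpn/setup/newClientCert.sh client1 <external-ip>
kubectl -n default exec $POD -- cat /etc/openvpn/certs/pki/client1.ovpn > client1.ovpn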

Install helm


title: “Install helm”
date: 2018-02-27T11:46:54
slug: install-helm


tiller_role.yaml

Repeat for each namespace:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: tiller
  namespace: tiller
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - '*'

kubectl create ns tiller
kubectl create serviceaccount --namespace tiller tiller
helm init --service-account tiller --tiller-namespace=default
kubectl create rolebinding tiller --role=tiller --serviceaccount=tiller:tiller --namespace=tiller
kubectl create rolebinding tiller --role=tiller --serviceaccount=tiller:tiller --namespace=default
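
The Role above still has to be created before the rolebindings can reference it; a sketch of applying it and checking that Tiller answers (adjust metadata.namespace per target namespace before applying; flags as in Helm v2):

kubectl create -f tiller_role.yaml
helm version --tiller-namespace default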

K8S Master Isolation


title: “K8S Master Isolation”
date: 2018-02-14T14:40:53
slug: k8s-master-isolation


Master Isolation

By default, your cluster will not schedule pods on the master for security reasons. If you want to be able to schedule pods on the master, e.g. for a single-machine Kubernetes cluster for development, run:

kubectl taint nodes --all node-role.kubernetes.io/master-
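
To restore the default behaviour on a node later, the taint can be re-added (a sketch; the node name is a placeholder):

kubectl taint nodes <node-name> node-role.kubernetes.io/master=:NoSchedule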

K8S Debian Node – Flannel


title: “K8S Debian Node – Flannel”
date: 2018-02-08T15:45:49
slug: k8s-debian-node-flannel


curl --silent --location 'https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz' | tar -zvxf-
cp flanneld /usr/bin
mkdir -p /var/lib/k8s/flannel/networks

cat << EOF > /lib/systemd/system/flanneld.service
[Unit]
Description=Network fabric for containers
Documentation=https://github.com/coreos/flannel
After=etcd.service

[Service]
Type=notify
Restart=always
RestartSec=5
ExecStart=/usr/bin/flanneld \
-etcd-endpoints=http://10.0.1.80:4001 \
-logtostderr=true \
-subnet-dir=/var/lib/k8s/flannel/networks \
-subnet-file=/var/lib/k8s/flannel/subnet.env
[Install]
WantedBy=multi-user.target
EOF
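
Afterwards the unit needs to be reloaded, enabled, and started (a sketch):

systemctl daemon-reload
systemctl enable flanneld
systemctl start flanneld
systemctl status flanneld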