
Automatic Storage Provision


title: “Automatic Storage Provision”
date: 2018-08-24T12:54:32
slug: automatic-storage-provision


Create a Storage Class (it will be the default one):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  name: fast-disks
provisioner: kubernetes.io/no-provisioner
reclaimPolicy: Delete
volumeBindingMode: Immediate
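
A quick way to confirm that the class exists and is marked as default (assuming kubectl already points at the cluster):

kubectl get storageclass                    # the default class is flagged with "(default)"
kubectl describe storageclass fast-disks    # shows the is-default-class annotation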

Create the Storage Directories on the Host

mkdir /mnt/disks
for vol in vol1 vol2 vol3; do
  mkdir /mnt/disks/$vol
  mount -t tmpfs $vol /mnt/disks/$vol
done

Creating local persistent volumes (the created Pod will present each storage directory as a PV).
This script is from https://github.com/kubernetes-incubator/external-storage/tree/master/
The values can be adapted in values.yaml

helm template ./helm/provisioner > ./provisioner/deployment/kubernetes/provisioner_generated.yaml
kubectl create -f ./provisioner/deployment/kubernetes/provisioner_generated.yaml

The PVs can be claimed via the storage class fast-disks.
The size of a PV must be equal to or greater than the claimed size.
The PV size can be set via the size of the mounted tmpfs (e.g. mount -t tmpfs -o size=10G vol3 /DATA/vol3).
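
As a rough sketch (sizes and paths are only illustrative), the tmpfs volumes can be remounted with an explicit size and the resulting PVs checked afterwards:

for vol in vol1 vol2 vol3; do
  umount /mnt/disks/$vol
  mount -t tmpfs -o size=10G $vol /mnt/disks/$vol
done
kubectl get pv   # one PV per mount should show up with STORAGECLASS fast-disks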

Local Storage example


title: “Local Storage example”
date: 2018-08-23T13:19:11
slug: local-storage-example


kind: PersistentVolume
apiVersion: v1
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: fast-disks
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/disks"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: task-pv-claim
spec:
  storageClassName: fast-disks
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
---
kind: Pod
apiVersion: v1
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage
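
Assuming the three manifests above are saved together in a file (local-storage-example.yaml is just a placeholder name), they can be created and checked like this:

kubectl create -f local-storage-example.yaml
kubectl get pvc task-pv-claim                           # STATUS should become Bound
kubectl exec task-pv-pod -- ls /usr/share/nginx/html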

Install Gitlab as Chart


title: “Install Gitlab as Chart”
date: 2018-08-23T07:37:55
slug: install-gitlab-as-chart


Install Helm & Tiller with a tiller ServiceAccount

Save the following as rbac-config.yaml:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system

kubectl create -f rbac-config.yaml
helm init --service-account tiller
helm dependency update
helm init
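
To check that Tiller came up with the new service account (a quick sketch, assuming the default tiller-deploy labels):

kubectl -n kube-system get pods -l name=tiller
helm version   # should report both the client and the server (Tiller) version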

Install Gitlab

helm install --name gitlab --set externalUrl=http://192.168.56.5/ stable/gitlab-ce
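
The release can then be inspected; as a rough check (the exact resource names depend on the chart version):

helm status gitlab
kubectl get pods,svc | grep gitlab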

Install GitLab Runner

helm repo add gitlab https://charts.gitlab.io
git clone https://gitlab.com/charts/gitlab-runner.git
vi gitlab-runner/values.yaml
helm install --namespace gitlab --name gitlab-runner -f values.yaml gitlab/gitlab-runner

With a local values file

helm repo add stable https://kubernetes-charts.storage.googleapis.com/
helm install --name gitlab -f values.yaml stable/gitlab-ce
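
To get a starting point for the local values file, the chart defaults can be dumped first (Helm 2 syntax, matching the commands above):

helm inspect values stable/gitlab-ce > values.yaml
vi values.yaml   # adjust e.g. externalUrl and persistence before installing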

Create SA and Token for Dashboard


title: “Create SA and Token for Dashboard”
date: 2018-08-22T14:41:07
slug: create-sa-and-token-for-dashboard


Create Service Account

We first create a Service Account named admin-user in the kube-system namespace.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system

Create ClusterRoleBinding

In most cases, after provisioning the cluster with kops, kubeadm, or another popular tool, the cluster-admin ClusterRole already exists in the cluster. We can use it and only create a ClusterRoleBinding for our ServiceAccount.

NOTE: apiVersion of ClusterRoleBinding resource may differ between Kubernetes versions. Starting from v1.8 it was promoted to rbac.authorization.k8s.io/v1.

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: admin-user
    namespace: kube-system
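
Both manifests can be applied and verified like this (the file names are just examples):

kubectl create -f admin-user-sa.yaml
kubectl create -f admin-user-binding.yaml
kubectl -n kube-system get sa admin-user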

Bearer Token

Now we need to find a token we can use to log in. Execute the following command:

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
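
With the token from that output, the dashboard can be reached via kubectl proxy (assuming the dashboard runs in kube-system under its default service name):

kubectl proxy
# then open http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
# and paste the bearer token at the login screen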

nfs services


title: “nfs services”
date: 2018-08-10T12:11:23
slug: nfs-services


root@local:/etc# rpcinfo -p
program vers proto port service
100000 4 tcp 111 portmapper
100000 3 tcp 111 portmapper
100000 2 tcp 111 portmapper
100000 4 udp 111 portmapper
100000 3 udp 111 portmapper
100000 2 udp 111 portmapper
100005 1 udp 41088 mountd
100005 1 tcp 38889 mountd
100005 2 udp 52945 mountd
100005 2 tcp 35603 mountd
100005 3 udp 42191 mountd
100005 3 tcp 43749 mountd
100003 3 tcp 2049 nfs
100003 4 tcp 2049 nfs
100227 3 tcp 2049
100003 3 udp 2049 nfs
100003 4 udp 2049 nfs
100227 3 udp 2049
100021 1 udp 45040 nlockmgr
100021 3 udp 45040 nlockmgr
100021 4 udp 45040 nlockmgr
100021 1 tcp 39425 nlockmgr
100021 3 tcp 39425 nlockmgr
100021 4 tcp 39425 nlockmgr
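
Besides the registered RPC services, the directories actually exported by the server can be listed with showmount (run against the same host):

showmount -e localhost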

Persistent Storage (NFS)


title: “Persistent Storage (NFS)”
date: 2018-05-23T09:31:15
slug: persistent-storage-nfs


apiVersion: v1
kind: PersistentVolume
metadata:
  name: froxlor-home
  labels:
    key: froxlor-home
spec:
  capacity:
    storage: 500Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow
  mountOptions:
    - hard
    - nfsvers=4.1
    - noexec
  nfs:
    path: /voln160751a1
    server: 46.38.248.210
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim-froxlor-home
spec:
  accessModes:
    - ReadWriteMany
  volumeMode: Filesystem
  resources:
    requests:
      storage: 500Gi
  storageClassName: slow
  selector:
    matchExpressions:
      - {key: key, operator: In, values: [froxlor-home]}

The PVC is matched via the selector (froxlor-home).
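
Whether the claim actually matched can be checked on both objects (using the names from the manifests above):

kubectl get pv froxlor-home          # STATUS should be Bound
kubectl get pvc claim-froxlor-home   # shows the bound volume and its capacity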

User Authentication


title: “User Authentication”
date: 2018-05-15T07:58:26
slug: user-authentification


use mqtt
db.createUser( { user: "emqtt", pwd: "emqtt", roles: [ "readWrite" ] } )

db.mqtt_msg.find();

use admin
db.createUser( { user: "xforze",
  pwd: "Pass",
  roles: [ "userAdminAnyDatabase" ] } )

use domains
db.createUser( { user: "webscrape",
  pwd: "Pass",
  roles: [ "readWrite", "dbAdmin" ]
} )
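
The credentials can also be verified from the shell; a minimal check using the host and database from the mgo example below:

mongo --host 173.212.228.153 --authenticationDatabase domains -u webscrape -p Pass domains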

In mgo (the Go MongoDB driver):

package main

import (
	"time"

	mgo "gopkg.in/mgo.v2"
)

const (
	MongoDBHosts = "173.212.228.153"
	AuthDatabase = "domains"
	AuthUserName = "webscrape"
	AuthPassword = "Pass"
)

func main() {
	// Dial with authentication against the domains database.
	mongoDBDialInfo := &mgo.DialInfo{
		Addrs:    []string{MongoDBHosts},
		Timeout:  60 * time.Second,
		Database: AuthDatabase,
		Username: AuthUserName,
		Password: AuthPassword,
	}

	session, err := mgo.DialWithInfo(mongoDBDialInfo)
	if err != nil {
		panic(err)
	}
	defer session.Close()
}

tcpdump


title: “tcpdump”
date: 2018-05-11T13:42:39
slug: tcpdump


sudo tcpdump -i any -A -s 0 host 10.76.1.65 -n

Use the option -U in combination with -w so that tcpdump writes packets immediately.
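
A minimal sketch combining the two (interface, host, and filter as in the example above; capture.pcap is just a placeholder name):

sudo tcpdump -i any -s 0 -U -w capture.pcap host 10.76.1.65 -n
tcpdump -r capture.pcap -A -n   # read the capture back later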