
Dashboard Config


title: “Dashboard Config”
date: 2020-12-08T21:54:10
slug: dashboard-config


Arguments: https://github.com/kubernetes/dashboard/blob/master/docs/common/dashboard-arguments.md
Access Control: https://github.com/kubernetes/dashboard/tree/master/docs/user/access-control

kubectl edit deployment -n kubernetes-dashboard kubernetes-dashboard
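Arguments go under the container spec of that deployment. A minimal sketch, assuming the documented --token-ttl argument (session timeout in seconds) should be set; the concrete values are examples:

```yaml
# Excerpt of the kubernetes-dashboard Deployment; the args shown are
# documented dashboard arguments, the values here are only examples.
spec:
  template:
    spec:
      containers:
        - name: kubernetes-dashboard
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            - --token-ttl=3600   # expire the login token after one hour
```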

Network Policies


title: “Network Policies”
date: 2020-12-08T20:57:13
slug: create-a-default-deny-policy


podSelector: This selects particular Pods in the same namespace as the NetworkPolicy which should be allowed as ingress sources or egress destinations.
namespaceSelector: This selects particular namespaces for which all Pods should be allowed as ingress sources or egress destinations.
namespaceSelector and podSelector: A single to/from entry that specifies both namespaceSelector and podSelector selects particular Pods within particular namespaces. Be careful to use correct YAML syntax; the placement of the dash in front of podSelector decides whether the two selectors are combined into one rule (AND) or act as two separate rules (OR):

Namespace AND Pod Selector

  - from:
    - namespaceSelector:
        matchLabels:
          user: alice
      podSelector:
        matchLabels:
          role: client

Namespace OR Pod Selector

  - from:
    - namespaceSelector:
        matchLabels:
          user: alice
    - podSelector:
        matchLabels:
          role: client

Create a default deny policy

cat default-deny.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: default
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
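Keep in mind that this default deny also blocks DNS lookups for all Pods in the namespace. A common companion policy, sketched here under the assumption that cluster DNS listens on port 53, re-allows DNS egress:

```yaml
# Sketch: allow DNS egress again after a default deny (port 53 assumed).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: default
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
```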

Allow traffic from Pod1 (label “run: frontend”) to Pod2 (label “run: backend”)
This policy allows outgoing traffic from Pod1, but only to Pods with the label “run: backend”:

cat frontend.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: frontend
  namespace: default
spec:
  podSelector:
    matchLabels:
      run: frontend
  policyTypes:
    - Egress
  egress:
    - to:
        - podSelector:
            matchLabels:
              run: backend

This policy allows incoming traffic on Pod2, but only from Pods with the label “run: frontend”:

cat backend.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend
  namespace: default
spec:
  podSelector:
    matchLabels:
      run: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              run: frontend

Allow connections from Pod2 (label “run: backend”) to the namespace cassandra (namespace with the label “ns: cassandra”).
This works as long as no default deny policy is applied to the cassandra namespace.

cat backend.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend
  namespace: default
spec:
  podSelector:
    matchLabels:
      run: backend
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              run: frontend
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              ns: cassandra

Allow incoming traffic from a namespace with the label “ns: default” to Pods with the label “run: cassandra” in the namespace cassandra

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: cassandra
  namespace: cassandra
spec:
  podSelector:
    matchLabels:
      run: cassandra
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              ns: default
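These namespaceSelectors only match if the namespaces actually carry the labels used above (ns: default / ns: cassandra), which plain namespaces do not have by default. A sketch of how to set and check them:

```shell
# Attach the labels assumed by the policies above to the namespaces.
kubectl label namespace default ns=default
kubectl label namespace cassandra ns=cassandra

# Verify that the labels are in place.
kubectl get namespaces --show-labels
```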

Execute Shell Command with STDIN Pipe


title: “Execute Shell Command with STDIN Pipe”
date: 2020-11-27T08:47:38
slug: execute-shell-command-with-stdin-pipe


tasks:
  - name: Add ImageContentSourcePolicy for internal image proxy
    when: enable_openshift_registry_mirror | default(true)
    shell:
      cmd: "oc apply -f -"
      stdin: |
        apiVersion: operator.openshift.io/v1alpha1
        kind: ImageContentSourcePolicy
        metadata:
          name: internal-image-mirror
        spec:
          repositoryDigestMirrors:
            - mirrors:
                - harbor.qsu.paas.pop.noris.de/quay.io
              source: quay.io
            - mirrors:
                - harbor.qsu.paas.pop.noris.de/registry.redhat.io
              source: registry.redhat.io

Fluentd YAML Files


title: “Fluentd YAML Files”
date: 2020-11-27T08:38:15
slug: fluentd-yaml-files


apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  namespace: elasticsearch-azure
---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd-role
rules:
  - apiGroups: [""]
    resources:
      - namespaces
      - pods
      - pods/log
    verbs: ["get", "list", "watch"]
---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: fluentd-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: fluentd-role
subjects:
  - kind: ServiceAccount
    name: fluentd
    namespace: elasticsearch-azure
---

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: elasticsearch-azure
  labels:
    k8s-app: fluentd-logging
    version: v1
spec:
  selector:
    matchLabels:
      k8s-app: fluentd-logging
  template:
    metadata:
      labels:
        k8s-app: fluentd-logging
        version: v1
    spec:
      serviceAccountName: fluentd
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      containers:
        - name: fluentd
          image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch
          envFrom:
            - secretRef:
                name: fluent-tls
          env:
            - name: FLUENT_ELASTICSEARCH_HOST
              value: "{{ server_name }}"
            - name: FLUENT_ELASTICSEARCH_PORT
              value: "{port}"
            - name: FLUENT_ELASTICSEARCH_SCHEME
              value: "https"
            # Option to configure elasticsearch plugin with self signed certs
            - name: FLUENT_ELASTICSEARCH_SSL_VERIFY
              value: "true"
            # Option to configure elasticsearch plugin with tls
            - name: FLUENT_ELASTICSEARCH_SSL_VERSION
              value: "TLSv1_2"
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 100m
              memory: 200Mi
          volumeMounts:
            - name: varlog
              mountPath: /var/log
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
            - name: ssl
              mountPath: /fluent-tls/ssl
              readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
        # certificates folder for filebeat
        - name: ssl
          secret:
            secretName: fluent-tls

kubectl create secret generic fluent-tls \
  --from-file=ca_file=./chain.pem \
  --from-file=cert_pem=./cert.pem \
  --from-file=cert_key=./cert.key

Read Kubernetes or Openshift Audit Logs, flag them and send them to ELS


title: “Read Kubernetes or Openshift Audit Logs, flag them and send them to ELS”
date: 2020-11-27T08:35:23
slug: read-kubernetes-or-openshift-audir-logs-flag-it-and-send-it-to-els


# Openshift audit logs

<source>
  @type tail
  @id openshift-audit-input
  path /var/log/oauth-apiserver/audit.log,/var/log/openshift-apiserver/audit.log
  pos_file /tmp/audit.log.pos
  tag openshift-audit.log
  <parse>
    @type json
    time_key requestReceivedTimestamp
    keep_time_key true
    time_format %Y-%m-%dT%H:%M:%S.%N%z
  </parse>
</source>

<match openshift-audit.log>
  @type copy
  <store>
    @type elasticsearch
    # @id default
    @log_level "info"
    include_tag_key true
    host "opendistro"
    port 9200
    scheme https
    ssl_verify false
    ssl_version TLSv1_2
    client_cert /etc/fluent/cert/cert_pem
    client_key /etc/fluent/cert/cert_key
    client_cert_auth true
    reload_connections false
    reconnect_on_error true
    reload_on_failure true
    log_es_400_reason false
    logstash_prefix "audit-openshift"
    logstash_format true
    index_name "audit-openshift"
    type_name "fluentd"
    <buffer>
      flush_thread_count 1
      flush_interval 5s
      chunk_limit_size 2M
      queue_limit_length 4
      retry_max_interval 30
      retry_forever true
    </buffer>
  </store>
</match>

Read container logs, flag and format them, and send them to ELS


title: “Read container logs, flag and format them, and send them to ELS”
date: 2020-11-27T08:34:26
slug: read-containerlogs-flag-it-and-fromat-it-and-send-it-to-els


# Container Logs

<source>
  @type tail
  path /var/log/containers/*.log
  exclude_path ["/var/log/containers/fluentd-*_openshift-logging_*.log"]
  pos_file /tmp/containers.log.pos
  refresh_interval 5
  rotate_wait 5
  tag kubernetes.*
  read_from_head true
  <parse>
    @type multi_format
    <pattern>
      format json
      time_format '%Y-%m-%dT%H:%M:%S.%N%Z'
      keep_time_key true
    </pattern>
    <pattern>
      format regexp
      expression /^(?<time>.+) (?<stream>stdout|stderr)( (?<logtag>.))? (?<log>.*)$/
      time_format '%Y-%m-%dT%H:%M:%S.%N%:z'
      keep_time_key true
    </pattern>
  </parse>
</source>

<filter kubernetes.**>
  @type kubernetes_metadata
</filter>

<match kubernetes.**>
  @type copy
  <store>
    # @type elasticsearch
    @type elasticsearch_dynamic
    # @id default
    @log_level "info"
    include_tag_key true
    host "opendistro"
    port 9200
    scheme https
    ssl_verify false
    ssl_version TLSv1_2
    client_cert /etc/fluent/cert/cert_pem
    client_key /etc/fluent/cert/cert_key
    client_cert_auth true
    reload_connections false
    reconnect_on_error true
    reload_on_failure true
    log_es_400_reason false
    # logstash_prefix "containers"
    logstash_prefix logstash-${record['kubernetes']['namespace_name']}
    logstash_format true
    index_name "containers"
    type_name "fluentd"
    <buffer>
      flush_thread_count 1
      flush_interval 5s
      chunk_limit_size 2M
      queue_limit_length 4
      retry_max_interval 30
      retry_forever true
    </buffer>
  </store>
</match>

Index Name from Namespace Name


title: “Index Name from Namespace Name”
date: 2020-11-27T08:33:22
slug: inexname-from-namespace-name


<match **>
  @type copy
  <store>
    # @type elasticsearch
    @type elasticsearch_dynamic
    # @id default
    @log_level "info"
    include_tag_key true
    host "opendistro"
    port 9200
    scheme https
    ssl_verify false
    ssl_version TLSv1_2
    client_cert /etc/fluent/cert/cert_pem
    client_key /etc/fluent/cert/cert_key
    client_cert_auth true
    reload_connections false
    reconnect_on_error true
    reload_on_failure true
    log_es_400_reason false
    # logstash_prefix "containers"
    logstash_prefix logstash-${record['kubernetes']['namespace_name']}
    logstash_format true
    index_name "containers"
    type_name "fluentd"
    <buffer>
      flush_thread_count 1
      flush_interval 5s
      chunk_limit_size 2M
      queue_limit_length 4
      retry_max_interval 30
      retry_forever true
    </buffer>
  </store>
</match>

Params from ENV


title: “Params from ENV”
date: 2020-11-27T08:32:24
slug: params-from-env


<match **>
  @type elasticsearch_dynamic
  # user "#{ENV['FLUENT_ELASTICSEARCH_USER']}"
  # password "#{ENV['FLUENT_ELASTICSEARCH_PASSWORD']}"
  @log_level info
  include_tag_key true
  host "#{ENV['OUTPUT_HOST']}"
  port "#{ENV['OUTPUT_PORT']}"
  logstash_format true
  logstash_prefix logstash-${record['kubernetes']['namespace_name']}
  <buffer>
    @type file
    path /var/log/fluentd-buffers/kubernetes.system.buffer
    flush_mode interval
    retry_type exponential_backoff
    flush_thread_count 2
    flush_interval 5s
    retry_forever
    retry_max_interval 30
    chunk_limit_size "#{ENV['OUTPUT_BUFFER_CHUNK_LIMIT']}"
    queue_limit_length "#{ENV['OUTPUT_BUFFER_QUEUE_LIMIT']}"
    overflow_action block
  </buffer>
</match>
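The #{ENV[...]} lookups are resolved when fluentd starts, so the DaemonSet has to provide matching environment variables. A sketch of the corresponding container env section (variable names as referenced above; the values shown are examples, not defaults):

```yaml
# Env vars consumed by the "#{ENV[...]}" lookups in the fluentd config.
env:
  - name: OUTPUT_HOST
    value: "opendistro"
  - name: OUTPUT_PORT
    value: "9200"
  - name: OUTPUT_BUFFER_CHUNK_LIMIT
    value: "2M"
  - name: OUTPUT_BUFFER_QUEUE_LIMIT
    value: "8"
```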

Generate Cert with Alternative Names


title: “Generate Cert with Alternative Names”
date: 2020-11-25T16:08:29
slug: 1340-2


server_rootCA.csr.cnf

[req]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
[dn]
C=DE
ST=Bayern
L=Muenchen
O=Strasse
OU=RootCA
emailAddress=thomas.asanger@noris.de
CN = elasticsearch-master-headless

v3.ext

authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = elasticsearch
DNS.2 = elasticsearch.openshift-logging.svc
DNS.3 = elasticsearch.openshift-logging.svc.cluster.local
DNS.4 = elasticsearch-master.openshift-logging.svc
DNS.5 = elasticsearch-master.openshift-logging.svc.cluster.local
IP.1 = 127.0.0.1

openssl req -new -sha256 -nodes -out elastic.csr -newkey rsa:2048 -keyout elastic.key -config <( cat server_rootCA.csr.cnf )
openssl x509 -req -in elastic.csr -CA tls.crt -CAkey tls.key -CAcreateserial -out elastic.crt -days 3650 -sha256 -extfile v3.ext
openssl x509 -in elastic.crt -text -noout