Tag Archives: exported

30-elasticsearch-output.conf


title: “30-elasticsearch-output.conf”
date: 2018-10-09T09:28:27
slug: 30-elasticsearch-output-conf


output {
  elasticsearch {
    hosts => ["https://eb843037.qb0x.com:32563/"]
    user => "ec18487808b6908009d3"
    password => "efcec6a1e0"
    index => "apache-%{+YYYY.MM.dd}"
    document_type => "apache_logs"
  }
  stdout { codec => rubydebug }
}
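
To sanity-check this output file together with the input and filter files from the companion posts, Logstash can parse the whole configuration directory without starting the pipeline. A minimal sketch, assuming the usual package install paths:

/usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/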

02-apache-input.conf


title: “02-apache-input.conf”
date: 2018-10-09T09:27:32
slug: 02-apache-input-conf


input {
  file {
    path => ["/var/log/apache2/access.log"]
    type => "apache_access"
  }
  file {
    path => ["/var/log/apache2/error.log"]
    type => "apache_error"
  }
}
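
Before starting the pipeline it is worth making sure the file input can actually read the Apache logs. A minimal sketch, assuming a Debian/Ubuntu layout where /var/log/apache2 is owned by root:adm and Logstash runs as the logstash user:

# Add the logstash user to the adm group so it can read the Apache logs
usermod -a -G adm logstash
systemctl restart logstash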

10-apache-filter.conf


title: “10-apache-filter.conf”
date: 2018-10-09T09:25:49
slug: 10-apache-filter-conf


filter {
  if [type] in ["apache", "apache_access", "apache-access"] {
    grok {
      match => [
        "message", "%{COMBINEDAPACHELOG}+%{GREEDYDATA:extra_fields}",
        "message", "%{COMMONAPACHELOG}+%{GREEDYDATA:extra_fields}"
      ]
      overwrite => [ "message" ]
    }
    mutate {
      convert => ["response", "integer"]
      convert => ["bytes", "integer"]
      convert => ["responsetime", "float"]
    }
    geoip {
      source => "clientip"
      target => "geoip"
      add_tag => [ "apache-geoip" ]
    }
    date {
      match => [ "timestamp", "dd/MMM/YYYY:HH:mm:ss Z" ]
      remove_field => [ "timestamp" ]
    }
    useragent {
      source => "agent"
    }
  }
  if [type] in ["apache_error", "apache-error"] {
    grok {
      match => ["message", "\[%{WORD:dayname} %{WORD:month} %{DATA:day} %{DATA:hour}:%{DATA:minute}:%{DATA:second} %{YEAR:year}\] \[%{NOTSPACE:loglevel}\] (?:\[client %{IPORHOST:clientip}\] ){0,1}%{GREEDYDATA:message}"]
      overwrite => [ "message" ]
    }
    mutate {
      add_field => {
        "time_stamp" => "%{day}/%{month}/%{year}:%{hour}:%{minute}:%{second}"
      }
    }
    date {
      match => ["time_stamp", "dd/MMM/YYYY:HH:mm:ss"]
      remove_field => [ "time_stamp", "day", "dayname", "month", "hour", "minute", "second", "year" ]
    }
  }
}
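
For reference, here is how the error-log grok pattern above decomposes an Apache 2.2-style line; the sample line and values are illustrative, not taken from a real log:

[Tue Oct 09 09:25:49 2018] [error] [client 10.1.2.1] File does not exist: /var/www/favicon.ico

dayname => "Tue", month => "Oct", day => "09", year => "2018"
hour => "09", minute => "25", second => "49"
loglevel => "error", clientip => "10.1.2.1"
message => "File does not exist: /var/www/favicon.ico"

The mutate then assembles time_stamp as "09/Oct/2018:09:25:49", which is exactly what the final date filter's "dd/MMM/YYYY:HH:mm:ss" pattern expects.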

NFS uid/gid mapping


title: “NFS uid/gid mapping”
date: 2018-10-02T09:27:17
slug: nfs-uidgid-mapping


UPDATE: The following post refers to the user-mode NFS server that some Linux distributions shipped when I wrote it back in 2007. Now (2013), most distros just use the kernel-based NFS server, which as far as I am aware does not include the uid/gid remapping.

My Debian etch box is a file server amongst other things, and I generally use NFS to mount its directories on other Linux boxes; as per an earlier post, I also mount these directories on the MacMini.

Access is generally read-only, but I noticed my write access didn’t work at all: I kept getting permission denied errors. Of course, it was because my uids and gids did not match up between client and server. The Linux user-mode NFS server (which is what I run) has a uid/gid remapping facility, so I first tried something like this in /etc/exports:

/somedir 10.1.2.0/255.255.255.0(rw,insecure,map_static=/etc/nfs.map)

And set up my /etc/nfs.map file as:

# remote   local
gid 500 1000
uid 500 2003

That means that if a client process runs as uid 500, it gets remapped to uid 2003 on the server, and gid 500 on the client gets mapped to gid 1000 on the server.
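
A quick way to find the numbers that belong in the map is to compare ids on both machines; the user name here is hypothetical:

# On the client:
id alice    # e.g. uid=500(alice) gid=500(alice)
# On the server:
id alice    # e.g. uid=2003(alice) gid=1000(users)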

I tried it and it didn’t work.

Then I read that some things don’t work when you use subnet matching, so attempt two used the explicit IP of one of my clients:

/somedir 10.1.2.1(rw,insecure,map_static=/etc/nfs.map)

I stopped and started the NFS server, mounted the export on the client (Linux at this stage), and it all worked.

Then I added some entries to the map for the MacMini and changed my /etc/exports to:

/somedir 10.1.2.1(rw,insecure,map_static=/etc/nfs.map) 10.1.2.2(rw,insecure,map_static=/etc/nfs.map)

and my new /etc/nfs.map looked like:

# remote   local
gid 500 1000 # linux client
uid 500 2003 # linux client
gid 501 1000 # Mac client
uid 501 2003 # Mac client

That didn’t work. Well, it worked on one of the clients, but not the other; I think the mappings clashed. So I ended up having separate maps for each client:

/somedir 10.1.2.1(rw,insecure,map_static=/etc/nfs.map.linux) 10.1.2.2(rw,insecure,map_static=/etc/nfs.map.mac)

And split that nfs.map file appropriately.

Now it all worked.

Netplan


title: “Netplan”
date: 2018-09-05T15:09:03
slug: netplan


/etc/netplan/01-netcfg.yaml:

network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      addresses:
        - 192.168.178.110/24
      nameservers:
        addresses: [192.168.178.1]
      routes:
        - to: default
          via: 192.168.178.1
    eth1:
      dhcp4: no
      dhcp6: no
      addresses: [192.168.56.30/24]
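
To activate the configuration, netplan try applies it and rolls back automatically (after 120 seconds by default) unless you confirm, which is handy when reconfiguring the interface you are connected through:

netplan try
netplan apply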

eth0


title: “eth0”
date: 2018-09-05T14:55:51
slug: eth0


sed -i -e 's/GRUB_CMDLINE_LINUX=.*/GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0"/' /etc/default/grub
grub-mkconfig -o /boot/grub/grub.cfg
sed -i -e 's/enp0s3/eth0/' /etc/network/interfaces
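
After a reboot the predictable names (enp0s3 etc.) should be gone; a quick check that the rename took effect:

ip link show eth0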

Ingress with TLS


title: “Ingress with TLS”
date: 2018-08-29T15:43:12
slug: ingress-with-tls


helm install stable/kube-lego --namespace kube-system --set config.LEGO_EMAIL=YOUR_EMAIL,config.LEGO_URL=https://acme-v01.api.letsencrypt.org/directory
helm install stable/kube-lego --name kube-lego --namespace kube-system --set config.LEGO_EMAIL=ta@ta.vg,config.LEGO_URL=https://acme-v01.api.letsencrypt.org/directory,rbac.create=true
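
Once the chart is installed, a pod similar to the following should appear (the app label is an assumption based on the chart’s defaults); the Ingress manifest below then triggers certificate issuance via the tls-acme annotation:

kubectl get pods -n kube-system -l app=kube-lego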
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: joomla-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: 'true'
spec:
  rules:
  - host: YOUR_DOMAIN
    http:
      paths:
      - path: /
        backend:
          serviceName: ingress-example-joomla
          servicePort: 80
  tls:
  - secretName: joomla-tls-cert
    hosts:
    - YOUR_DOMAIN
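
A sketch of applying the manifest and watching the certificate arrive; the file name is hypothetical, and the secret stays empty until the ACME challenge succeeds:

kubectl apply -f joomla-ingress.yaml
kubectl describe ingress joomla-ingress
kubectl get secret joomla-tls-cert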

Elasticsearch & Kibana Helm Chart


title: “Elasticsearch & Kibana Helm Chart”
date: 2018-08-27T13:25:41
slug: elasticsearch-helm-chart


Prerequisite: Local Storage Provisioner

helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
helm install --name elastic-search incubator/elasticsearch
or with custom values:
git clone https://github.com/helm/charts.git
vi charts/incubator/elasticsearch/values.yaml
helm install --name elastic-search -f charts/incubator/elasticsearch/values.yaml incubator/elasticsearch
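
To check that the cluster came up, something like the following works; the service name is an assumption derived from the release name and the chart’s usual naming convention:

kubectl get pods -l release=elastic-search
kubectl port-forward svc/elastic-search-elasticsearch-client 9200:9200 &
curl http://localhost:9200/_cluster/health?pretty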
vi charts/stable/kibana/values.yaml
helm install stable/kibana --name kibana -f charts/stable/kibana/values.yaml
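
And to reach Kibana without an Ingress, a port-forward sketch (the resource name is assumed from the release name; Kibana listens on 5601):

kubectl port-forward deployment/kibana 5601:5601
# then browse to http://localhost:5601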