title: "Set Root User Password MySQL 5.7"
date: 2018-11-20T09:01:35
slug: set-root-user-password-mysql-5-7
ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'test';
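Afterwards, logging in with the new password confirms the change (assuming a local server):

mysql -u root -p
# enter 'test' at the password prompt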
title: "Copy an encrypted LVM partition"
date: 2018-10-15T09:07:14
slug: lvm-verschlusselte-partition-kopieren
Open the encrypted partition and create a new partition:
sudo cryptsetup luksOpen /dev/sdc5 crypt
sudo vgcreate crypt-lvm /dev/mapper/crypt
sudo lvcreate -l100%FREE -nroot crypt-lvm
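The snippet above only prepares the target; the copy itself is not shown. A minimal sketch, assuming the source data sits on an already opened LV with the hypothetical name /dev/mapper/old-root:

# old-root is a placeholder for the opened source LV
sudo dd if=/dev/mapper/old-root of=/dev/crypt-lvm/root bs=4M status=progress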
9 December, 2016
So I'm planning on making a series, "mount: unknown file-system type $TYPE". I already covered how to mount an NTFS partition and how to mount an NFS share on Proxmox; now to be continued by another fun filesystem. Going through old disks, I came across one that had LVM2_member.
root@svennd:~# mount /dev/sdd2 /mnt/disk
mount: unknown filesystem type 'LVM2_member'
fdisk -l had already told me it's LVM:
root@svennd:~# fdisk -l /dev/sdd
Disk /dev/sdd: 233.8 GiB, 251000193024 bytes, 490234752 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x0009345d
Device Boot Start End Sectors Size Id Type
/dev/sdd1 * 63 208844 208782 102M 83 Linux
/dev/sdd2 208845 488247479 488038635 232.7G 8e Linux LVM
(/dev/sdd1 is the /boot partition, /dev/sdd2 is where the /home data resides)
The LVM2 tools also provide a way to check whether a partition is LVM, using lvmdiskscan (/dev/sdd2 here):
root@svennd:~# lvmdiskscan
/dev/sdb1 [ 1.82 TiB]
/dev/sdc2 [ 149.04 GiB]
/dev/sdd1 [ 101.94 MiB]
/dev/sdd2 [ 232.71 GiB] LVM physical volume
0 disks
4 partitions
0 LVM physical volume whole disks
1 LVM physical volume
Fine, now let's scan which LVs are to be found, using lvscan:
root@svennd:~# lvscan
inactive '/dev/VolGroup00/LogVol00' [230.75 GiB] inherit
inactive '/dev/VolGroup00/LogVol01' [1.94 GiB] inherit
Since this is an old disk in an enclosure, it isn't activated at system boot, so we need to activate this LVM volume group:
root@svennd:~# vgchange -ay
2 logical volume(s) in volume group "VolGroup00" now active
and bam, ready to mount:
root@svennd:~# lvscan
ACTIVE '/dev/VolGroup00/LogVol00' [230.75 GiB] inherit
ACTIVE '/dev/VolGroup00/LogVol01' [1.94 GiB] inherit
Now to mount:
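Presumably LogVol00, the large LV holding the /home data, gets mounted at the mountpoint used earlier:

root@svennd:~# mount /dev/VolGroup00/LogVol00 /mnt/disk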
Success!
title: "ssh gateway"
date: 2018-04-17T09:19:35
slug: ssh-gateway
cat .ssh/config
Host muc-deploy
    ForwardAgent yes

Host *.ampua.server.lan
    ControlMaster auto
    ControlPath ~/.ssh/connections/%r_%h_%p
    ControlPersist yes
    ProxyCommand none
    PasswordAuthentication yes

Host github0*.server.lan github.com git.mamdev.server.lan puppet-repo*
    User git
    ProxyCommand none

Host ndcli.server.lan 46.165.253.99 35.156.56.25 46.165.253.133 dev.asanger.biz hsp-gmx-pre01.server.lan *cinetic.de logservice* unitix-repo01* reposrv-deb* lxjumper* osum-home-master *.united.domain
    User tasanger
    ProxyCommand none

Host *
    User juxxpd
    ProxyCommand /usr/bin/ssh ampua-bs-sshgw.ampua.server.lan %h
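The ControlPath directory has to exist before ssh can place its multiplexing sockets there; ssh will not create it:

mkdir -p ~/.ssh/connections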
title: "GlusterFS single server"
date: 2018-03-23T13:54:23
slug: glusterfs-single-server
mkdir /data
apt-get install glusterfs-server
gluster volume create k8s_prometheus vmd25840.contaboserver.net:/data/k8s_prometheus force
gluster volume start k8s_prometheus
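The volume can then be mounted with the GlusterFS FUSE client (the mountpoint /mnt is an arbitrary choice):

# requires the glusterfs-client package
mount -t glusterfs vmd25840.contaboserver.net:/k8s_prometheus /mnt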
title: "Disable mouse in vi"
date: 2018-02-09T10:23:08
slug: maus-im-vi-abschalten
echo "set mouse-=a" > ~/.vimrc
title: "Squid Proxy caching for gif|png|jpeg|jpg|bmp|tif|tiff|ico"
date: 2017-08-09T19:20:51
slug: squid-proxy-chaching-fur-gifpngjpegjpgbmptiftiffico
http_port 127.0.0.1:8080 accel defaultsite=127.0.0.1
cache_peer 127.0.0.1 parent 8081 0 no-query originserver
acl our_sites dstdomain 127.0.0.1
http_access allow our_sites
cache_effective_user proxy
cache_effective_group proxy
cache_dir ufs /var/spool/squid3 100 16 256
cache_mem 60 GB
maximum_object_size_in_memory 512 KB
refresh_pattern -i \.(gif|png|jpeg|jpg|bmp|tif|tiff|ico)$ 10080 50% 43200 override-expire override-lastmod reload-into-ims ignore-reload ignore-no-cache ignore-auth ignore-private
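After editing, the configuration can be syntax-checked and reloaded (the binary may be called squid3 on older Debian/Ubuntu):

squid -k parse
squid -k reconfigure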
title: "UTF-8 Terminal charset"
date: 2017-01-25T19:47:41
slug: utf-8-terminal-charset
dpkg-reconfigure locales
vi ~/.bashrc
export LC_ALL=en_US.UTF-8
export LANG=en_US.UTF-8
export LANGUAGE=en_US.UTF-8
Apply immediately:
source ~/.bashrc
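Verify the active settings:

locale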
title: "2. Install Alert Manager"
date: 2016-10-07T13:30:07
slug: 2-install-alert-manager
wget https://github.com/prometheus/alertmanager/releases/download/v0.4.2/alertmanager-0.4.2.linux-amd64.tar.gz
tar -xzf alertmanager-0.4.2.linux-amd64.tar.gz
cd alertmanager-0.4.2.linux-amd64
vi simple.yml
global:
  # The smarthost and SMTP sender used for mail notifications.
  smtp_smarthost: 'www.hostingvalley.de:25'
  smtp_from: 'asanger@it-asanger.de'
  smtp_auth_username: 'asanger@it-asanger.de'
  smtp_auth_password: 'xxxxx'
  # The auth token for Hipchat.
  hipchat_auth_token: '1234556789'
  # Alternative host for Hipchat.
  hipchat_url: 'https://hipchat.foobar.org/'
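A global section alone is not enough, Alertmanager also needs at least one route and one receiver; a minimal sketch with a placeholder receiver name:

route:
  receiver: 'email-me'
receivers:
- name: 'email-me'
  email_configs:
  - to: 'asanger@it-asanger.de'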
Start AlertManager:
./alertmanager -config.file=simple.yml
title: "2. Install Prometheus"
date: 2016-10-07T13:00:36
slug: 2-prometheus-installieren
wget https://github.com/prometheus/prometheus/releases/download/v1.1.3/prometheus-1.1.3.linux-amd64.tar.gz
tar -xzf prometheus-1.1.3.linux-amd64.tar.gz
cd prometheus-1.1.3.linux-amd64
vi prometheus.yml
global:
  scrape_interval: 15s # By default, scrape targets every 15 seconds.
  evaluation_interval: 15s # Evaluate rules every 15 seconds.
  # Attach these extra labels to all timeseries collected by this Prometheus instance.
  external_labels:
    monitor: 'codelab-monitor'

rule_files:
- 'prometheus.rules'

scrape_configs:
- job_name: 'prometheus'
  # Override the global default and scrape targets from this job every 5 seconds.
  scrape_interval: 5s
  static_configs:
  - targets: ['localhost:9090']

- job_name: 'node'
  # Override the global default and scrape targets from this job every second.
  scrape_interval: 1s
  static_configs:
  - targets: ['127.0.0.1:9100']
    labels:
      group: 'localhost'
This configuration polls the metrics from localhost port 9100 every second.
Create a recording rule (rules file):
vi prometheus.rules
job_service:bandwidth1 = rate(node_network_receive_bytes{device="eth0"}[2s])
Create an alert (rules file):
# Bandwidth alert when the receive rate exceeds 1 MB/s.
ALERT BandwidthHighAlert
  IF rate(node_network_receive_bytes{device="eth0"}[2s])/1024/1024 > 1
  FOR 10s
  ANNOTATIONS {
    summary = "High network traffic on {{ $labels.instance }}",
    description = "{{ $labels.instance }} has high network traffic (current value: {{ $value }} MB/s)",
  }
Start Prometheus with the Alertmanager at localhost port 9093:
./prometheus -config.file=prometheus.yml -alertmanager.url=http://localhost:9093
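Once both are running, the recording rule can be queried via the HTTP API:

curl 'http://localhost:9090/api/v1/query?query=job_service:bandwidth1'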
title: "1. Install Node Exporter for Metrics"
date: 2016-10-07T12:51:51
slug: node-exporter-fur-metrics-installieren
wget https://github.com/prometheus/node_exporter/releases/download/0.12.0/node_exporter-0.12.0.linux-amd64.tar.gz
tar -xzf node_exporter-0.12.0.linux-amd64.tar.gz
cd node_exporter-0.12.0.linux-amd64
./node\_exporter
The last line of the output should be: "INFO[0000] Listening on :9100 source=node_exporter.go:176".
The metrics are then served at:
http://127.0.0.1:9100/metrics
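A quick check from the shell:

curl -s http://127.0.0.1:9100/metrics | head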