
aws cloudwatch get-metric-statistics


title: “aws cloudwatch get-metric-statistics”
date: 2023-05-09T11:31:10
slug: aws-cloudwatch-get-metric-statistics


aws cloudwatch get-metric-statistics --namespace AWS/RDS --metric-name ACUUtilization --start-time 2023-05-09T00:00:00Z --end-time 2023-05-09T12:00:00Z --period 3600  --statistics Maximum

With dimensions:

aws cloudwatch get-metric-statistics --namespace AWS/RDS --metric-name ACUUtilization --start-time 2023-05-09T00:00:00Z --end-time 2023-05-09T12:00:00Z --period 3600 --statistics Average --dimensions Name=DBClusterIdentifier,Value=podcast-serverless-db-dev

Generate a CLI skeleton:

aws cloudwatch get-metric-statistics --generate-cli-skeleton
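
The raw JSON response can be trimmed with a JMESPath --query; a minimal sketch (the cluster identifier is carried over from the example above) that prints one timestamped maximum per hour:

# Print "timestamp maximum" pairs sorted by time
aws cloudwatch get-metric-statistics --namespace AWS/RDS --metric-name ACUUtilization \
  --dimensions Name=DBClusterIdentifier,Value=podcast-serverless-db-dev \
  --start-time 2023-05-09T00:00:00Z --end-time 2023-05-09T12:00:00Z \
  --period 3600 --statistics Maximum \
  --query 'sort_by(Datapoints,&Timestamp)[].[Timestamp,Maximum]' --output text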

additional scrape config prometheus operator


title: “additional scrape config prometheus operator”
date: 2023-05-04T12:20:22
slug: additional-scrape-config-prometheus-operator


Additional Scrape Configuration

AdditionalScrapeConfigs allows specifying a key of a Secret containing additional Prometheus scrape configurations. Scrape configurations specified are appended to the configurations generated by the Prometheus Operator.

Job configurations specified must have the form described in the official Prometheus documentation. Since scrape configs are appended verbatim, the user is responsible for making sure they are valid. Note that using this feature may break upgrades of Prometheus.

It is advised to review Prometheus release notes to ensure that no incompatible scrape configs are going to break Prometheus after the upgrade.

Creating an additional configuration

First, you will need to create the additional configuration. Below we are making a simple “prometheus” config. Name this prometheus-additional.yaml or something similar.

- job_name: "prometheus"
  static_configs:
    - targets: ["localhost:9090"]

Then you will need to make a secret out of this configuration.

kubectl create secret generic additional-scrape-configs --from-file=prometheus-additional.yaml --dry-run=client -oyaml > additional-scrape-configs.yaml

Next, apply the generated Kubernetes manifest:

kubectl apply -f additional-scrape-configs.yaml -n monitoring
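
To verify that the secret actually carries the scrape config, decode it back out (same namespace as above):

kubectl get secret additional-scrape-configs -n monitoring -o jsonpath='{.data.prometheus-additional\.yaml}' | base64 -d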

Finally, reference this additional configuration in your Prometheus custom resource.

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
  labels:
    prometheus: prometheus
spec:
  replicas: 2
  serviceAccountName: prometheus
  serviceMonitorSelector:
    matchLabels:
      team: frontend
  additionalScrapeConfigs:
    name: additional-scrape-configs
    key: prometheus-additional.yaml

Encrypt / decrypt a file


title: “Encrypt / decrypt a file”
date: 2023-03-14T14:18:46
slug: datei-verschlusseln


openssl enc -aes-128-cbc -e -in ${TARGET}.zip -out ${TARGET}.zip.enc -K ${DEK_PLAIN:0:32} -iv 0

DEK=$(aws kms decrypt --ciphertext-blob fileb://key.enc --output text --query Plaintext | base64 -d | xxd -p)
OR
DEK="03f4632345ff274b21c97879aadc5a3c"

openssl enc -aes-128-cbc -d -K ${DEK:0:32} -iv 0 -in ${TARGET}.zip.enc -out ${TARGET}.zip
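
For completeness, the key pair (key.enc plus the plaintext DEK) could be generated up front with KMS; a minimal sketch, the key alias being hypothetical:

# Generate a 128-bit data key once; persist only the encrypted copy
OUT=$(aws kms generate-data-key --key-id alias/my-key --key-spec AES_128 --output json)
echo "$OUT" | jq -r .CiphertextBlob | base64 -d > key.enc
DEK_PLAIN=$(echo "$OUT" | jq -r .Plaintext | base64 -d | xxd -p)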

Squash Commits


title: “Squash Commits”
date: 2023-02-21T15:18:44
slug: squash-commits


Squash the last 3 commits:

git rebase -i HEAD~3

Keep pick for the first commit; change pick to squash for the following ones, as in the todo list below.
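
The interactive editor then shows something like this (commit hashes and messages are hypothetical); saving it melds all three commits into the first:

pick a1b2c3d add feature
squash e4f5g6h fix typo
squash b7c8d9e address review feedback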

git push -f
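
On a shared branch, the lease-checking variant is safer: it refuses to overwrite commits someone else pushed in the meantime.

git push --force-with-lease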

Create subjects / content with for_each


title: “Create subjects / content with for_each”
date: 2023-02-15T09:55:28
slug: create-subjects-content-with-for_each


dynamic "subject" {
for\_each = ["r5s:test1:admin", "r5s:test2:admin"]
content{
kind = "Group"
name = subject.value
api\_group = "rbac.authorization.k8s.io"
}
}
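
For context, such a dynamic block usually sits inside a binding resource; a minimal sketch around a kubernetes_cluster_role_binding (resource and role names are assumptions):

resource "kubernetes_cluster_role_binding" "admins" {
  metadata {
    name = "admins" # hypothetical binding name
  }
  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = "admin"
  }
  # One subject block is generated per group in the list
  dynamic "subject" {
    for_each = ["r5s:test1:admin", "r5s:test2:admin"]
    content {
      kind      = "Group"
      name      = subject.value
      api_group = "rbac.authorization.k8s.io"
    }
  }
}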

PZEM017 Modbus


title: “PZEM017 Modbus”
date: 2022-12-16T22:50:19
slug: pzem017-modbus


Read High Voltage Alarm threshold:

sudo mbpoll -a 5 -t 4:hex -r 1 -b 9600 -P none -s 2 -o 2 /dev/ttyUSB2 -1 -v

Set High Voltage Alarm threshold (20000 = 200.00 V):

sudo mbpoll -a 5 -t 4:hex -r 1 -b 9600 -P none -s 2 -o 2 /dev/ttyUSB2 -1 -v 0x4e20

Read Low Voltage Alarm threshold:

sudo mbpoll -a 5 -t 4:hex -r 2 -b 9600 -P none -s 2 -o 2 /dev/ttyUSB2 -1 -v

Set Low Voltage Alarm threshold (700 = 7.00 V):

sudo mbpoll -a 5 -t 4:hex -r 2 -b 9600 -P none -s 2 -o 2 /dev/ttyUSB2 -1 -v 0x02BC

Read Modbus Address:

sudo mbpoll -a 5 -t 4:hex -r 3 -b 9600 -P none -s 2 -o 2 /dev/ttyUSB2 -1 -v

Set Modbus Address to 5:

sudo mbpoll -a 5 -t 4:hex -r 3 -b 9600 -P none -s 2 -o 2 /dev/ttyUSB2 -1 -v 0x0005

Read Shunt:

sudo mbpoll -a 5 -t 4:hex -r 4 -b 9600 -P none -s 2 -o 2 /dev/ttyUSB2 -1 -v

Set correct Shunt (300A):

sudo mbpoll -a 5 -t 4:hex -r 4 -b 9600 -P none -s 2 -o 2 /dev/ttyUSB2 -1 -v 0x0003
100A = 0x0000
50A = 0x0001
200A = 0x0002
300A = 0x0003
Example (read):
$ sudo mbpoll -a 5 -t 4:hex -r 4 -b 9600 -P none -s 2 -o 2 /dev/ttyUSB2 -1 -v
debug enabled
Set mode to RTU for serial port
Set device=/dev/ttyUSB2
mbpoll 1.4-12 - FieldTalk(tm) Modbus(R) Master Simulator
Copyright © 2015-2019 Pascal JEAN, https://github.com/epsilonrt/mbpoll
This program comes with ABSOLUTELY NO WARRANTY.
This is free software, and you are welcome to redistribute it
under certain conditions; type 'mbpoll -w' for details.
Opening /dev/ttyUSB2 at 9600 bauds (N, 8, 2)
Set response timeout to 2 sec, 0 us
Protocol configuration: Modbus RTU
Slave configuration...: address = [5]
start reference = 4, count = 1
Communication.........: /dev/ttyUSB2, 9600-8N2
t/o 2.00 s, poll rate 1000 ms
Data type.............: 16-bit register, output (holding) register table
-- Polling slave 5...
[05][03][00][03][00][01][75][8E]
Waiting for a confirmation...
<05><03><02><00><03><09><85>
[4]: 0x0003
Example (write):
$ sudo mbpoll -a 5 -t 4:hex -r 4 -b 9600 -P none -s 2 -o 2 /dev/ttyUSB2 -1 -v 0x0003
debug enabled
Set mode to RTU for serial port
Set device=/dev/ttyUSB2
1 write data have been found
Set data=3
Word[0]=0x3
mbpoll 1.4-12 - FieldTalk(tm) Modbus(R) Master Simulator
Copyright © 2015-2019 Pascal JEAN, https://github.com/epsilonrt/mbpoll
This program comes with ABSOLUTELY NO WARRANTY.
This is free software, and you are welcome to redistribute it
under certain conditions; type 'mbpoll -w' for details.
Opening /dev/ttyUSB2 at 9600 bauds (N, 8, 2)
Set response timeout to 2 sec, 0 us
Protocol configuration: Modbus RTU
Slave configuration...: address = [5]
start reference = 4, count = 1
Communication.........: /dev/ttyUSB2, 9600-8N2
t/o 2.00 s, poll rate 1000 ms
Data type.............: 16-bit register, output (holding) register table
[05][06][00][03][00][03][38][4F]
Waiting for a confirmation...
<05><06><00><03><00><03><38><4F>
Written 1 references.
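
The actual measurements live in the input register table (function 0x04) rather than in the holding registers configured above; a sketch for reading all eight of them, with the register layout and scaling taken from the PZEM-017 datasheet:

# Read voltage, current, power, energy and alarm status in one poll
sudo mbpoll -a 5 -t 3:hex -r 1 -c 8 -b 9600 -P none -s 2 -o 2 /dev/ttyUSB2 -1 -v
# [1] voltage (0.01 V), [2] current (0.01 A), [3]+[4] power (0.1 W, low/high word),
# [5]+[6] energy (1 Wh, low/high word), [7] high voltage alarm, [8] low voltage alarm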

ProxySQL


title: “ProxySQL”
date: 2022-12-08T09:15:17
slug: proxysql


apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: proxysqlcluster
  namespace: squad-rtlplus-podcast
  labels:
    prometheus: r5s-shared
spec:
  endpoints:
    - path: /metrics
      port: proxysql-exp
  jobLabel: app
  namespaceSelector:
    matchNames:
      - squad-rtlplus-podcast
  selector:
    matchLabels:
      app: proxysql-exporter

ConfigMap:

apiVersion: v1
data:
  proxysql.cnf: |
    datadir="/var/lib/proxysql"
    errorlog="/var/lib/proxysql/proxysql.log"
    admin_variables=
    {
      admin_credentials="admin:admin;cluster:secret"
      mysql_ifaces="0.0.0.0:6032"
      refresh_interval=2000
      cluster_username="cluster"
      cluster_password="secret"
    }
    mysql_variables=
    {
      threads=4
      max_connections=2048
      default_query_delay=0
      default_query_timeout=36000000
      have_compress=true
      poll_timeout=2000
      interfaces="0.0.0.0:6033;/tmp/proxysql.sock"
      default_schema="information_schema"
      stacksize=1048576
      server_version="8.0.23"
      connect_timeout_server=3000
      monitor_username="monitor"
      monitor_password="monitor"
      monitor_history=600000
      monitor_connect_interval=60000
      monitor_ping_interval=10000
      monitor_read_only_interval=1500
      monitor_read_only_timeout=500
      ping_interval_server_msec=120000
      ping_timeout_server=500
      commands_stats=true
      sessions_sort=true
      connect_retries_on_failure=10
    }
    proxysql_servers =
    (
      { hostname = "proxysql-0.proxysqlcluster", port = 6032, weight = 1 },
      { hostname = "proxysql-1.proxysqlcluster", port = 6032, weight = 1 },
      { hostname = "proxysql-2.proxysqlcluster", port = 6032, weight = 1 }
    )
  fluent-bit.conf: |
    [SERVICE]
        Flush        1
        Parsers_File /etc/parsers.conf
        Log_Level    info
        Daemon       Off
    [INPUT]
        Name              tail
        Buffer_Max_Size   5MB
        Buffer_Chunk_Size 256k
        Path              /var/lib/proxysql/queries.log.*
        DB                /var/lib/proxysql/fluentd.db
        Parser            JSON
    [OUTPUT]
        Name             stdout
        Format           json_lines
        json_date_key    timestamp
        json_date_format iso8601
        Match            *
    [FILTER]
        Name   modify
        Match  *
        Rename client source
        Rename duration_us duration_query
  parsers.conf: |
    [PARSER]
        Name        JSON
        Format      json
        Time_Key    starttime
        Time_Format %Y-%m-%d %H:%M:%S
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: proxysql-configmap

Terraform:

Create the ProxySQL user and password, allowing connections from two subnets:

resource "random_password" "password_mysql_proxysql" {
  length  = 16
  special = false
}

resource "aws_secretsmanager_secret" "mysql-podcast_proxysql" {
  name        = "rtl-plus-podcast/mysql-proxysql-{{environment}}"
  description = "ProxySQL Credentials {{environment}} Stage"
}

resource "aws_secretsmanager_secret_version" "mysql-podcast_proxysql" {
  secret_id     = aws_secretsmanager_secret.mysql-podcast_proxysql.id
  secret_string = jsonencode({ "user" = "mysql_proxysql_{{environment}}", "password" = random_password.password_mysql_proxysql.result })
}

resource "mysql_user" "podcast-proxysql" {
  provider           = mysql.aurora-sl
  user               = "mysql_proxysql_{{ environment }}"
  host               = "{{ inventory.parameters.mysql_user_proxysql_host }}"
  plaintext_password = random_password.password_mysql_proxysql.result
}

resource "mysql_grant" "podcast-proxysql_user" {
  provider   = mysql.aurora-sl
  user       = mysql_user.podcast-proxysql.user
  host       = mysql_user.podcast-proxysql.host
  database   = "*"
  table      = "*"
  privileges = ["replication client"]
}

resource "kubernetes_config_map" "proxysql-config-sql" {
  metadata {
    name      = "proxysql-config-sql"
    namespace = "squad-rtlplus-podcast"
  }
  data = {
    sql = templatefile("./proxysql.sql", {
      aurora_domain          = "{{ inventory.parameters.aurora_domain }}"
      podcast_user_password  = random_password.password_mysql_aurora.result
      podcast_user_name      = mysql_user.podcast-serverless-db.user
      proxysql_user_password = random_password.password_mysql_proxysql.result
      proxysql_user_name     = mysql_user.podcast-proxysql.user
      instances              = aws_rds_cluster_instance.podcast-serverless-db
    })
  }
}

resource "null_resource" "restart_proxysql_statefulset" {
  provisioner "local-exec" {
    command = "kubectl rollout restart statefulset proxysql"
  }

  lifecycle {
    replace_triggered_by = [
      kubernetes_config_map.proxysql-config-sql
    ]
  }
}

proxysql.sql

DELETE FROM mysql_aws_aurora_hostgroups;
INSERT INTO mysql_aws_aurora_hostgroups VALUES (0,1,1,3306,'{{inventory.parameters.aurora_domain}}',600000,1000,3000,0,1,30,30,1,'{{ environment }}');
SELECT * FROM mysql_aws_aurora_hostgroups;
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;
DELETE FROM mysql_users;
INSERT INTO mysql_users(active,username,password,default_hostgroup,transaction_persistent,use_ssl) VALUES (1,'${podcast_user_name}','${podcast_user_password}',0,0,1);
INSERT INTO mysql_users(active,username,password,default_hostgroup,transaction_persistent,use_ssl) VALUES (1,'root','FUiv8yXi7buM0KoD',0,0,0);
SELECT * FROM mysql_users;
LOAD MYSQL USERS TO RUNTIME;
SAVE MYSQL USERS TO DISK;
DELETE FROM mysql_servers;
{% raw %}
%{ for instance in instances ~}
%{ if instance.writer == true ~}
INSERT INTO mysql_servers(hostgroup_id,hostname,port,use_ssl) VALUES (0,'${instance.endpoint}',3306,1);
%{ else ~}
INSERT INTO mysql_servers(hostgroup_id,hostname,port,use_ssl) VALUES (1,'${instance.endpoint}',3306,1);
%{ endif ~}
%{ endfor ~}
{% endraw %}
SELECT * FROM mysql_servers;
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;
UPDATE global_variables SET variable_value='${proxysql_user_name}' WHERE variable_name='mysql-monitor_username';
UPDATE global_variables SET variable_value='${proxysql_user_password}' WHERE variable_name='mysql-monitor_password';
UPDATE global_variables SET variable_value='2000' WHERE variable_name IN ('mysql-monitor_connect_interval','mysql-monitor_ping_interval','mysql-monitor_read_only_interval');
UPDATE global_variables SET variable_value='true' WHERE variable_name='mysql-have_ssl';
UPDATE global_variables SET variable_value='0' WHERE variable_name='mysql-set_query_lock_on_hostgroup';
UPDATE global_variables SET variable_value='5000' WHERE variable_name='mysql-monitor_ping_timeout';
UPDATE global_variables SET variable_value='500' WHERE variable_name='mysql-throttle_connections_per_sec_to_hostgroup';
UPDATE global_variables SET variable_value='5000' WHERE variable_name='mysql-default_max_latency_ms';
UPDATE global_variables SET variable_value='queries.log' WHERE variable_name='mysql-eventslog_filename';
UPDATE global_variables SET variable_value='2' WHERE variable_name='mysql-eventslog_format';
UPDATE global_variables SET variable_value='512' WHERE variable_name='mysql-query_cache_size_MB';
SELECT * FROM global_variables;
LOAD MYSQL VARIABLES TO RUNTIME;
SAVE MYSQL VARIABLES TO DISK;
DELETE FROM mysql_query_rules;

INSERT INTO mysql_query_rules (active,match_digest,apply,cache_ttl) VALUES (1,'^SELECT',1,60000);

-- select t0.*,t0.id from submissions as t0 where (t0.feed_url = ?) limit ?
INSERT INTO mysql_query_rules (active,digest,destination_hostgroup,apply,cache_ttl) VALUES (1,'0x0C80CCC93E4A6477',1,1,60000);

-- select t0.id from episodes as t0 where (t0.uid = ?) limit ?
INSERT INTO mysql_query_rules (active,digest,destination_hostgroup,apply,cache_ttl) VALUES (1,'0x78DDF3A07A7B6E97',1,1,60000);

-- select count(*) as count from episodes as t0 limit ?
INSERT INTO mysql_query_rules (active,digest,destination_hostgroup,apply,cache_ttl) VALUES (1,'0x4E91709EE1214CA5',1,1,60000);

-- select count(*) as count from podcasts as t0 limit ?
INSERT INTO mysql_query_rules (active,digest,destination_hostgroup,apply,cache_ttl) VALUES (1,'0xB8ED2A71067C2EE5',1,1,60000);

-- SELECT ?
INSERT INTO mysql_query_rules (active,digest,destination_hostgroup,apply,cache_ttl) VALUES (1,'0x1C46AE529DD5A40E',1,1,60000);

-- select distinct t0.*,t1.podcast_id from episodes as t0 left join episodes_podcast_links as t1 on t0.id = t1.episode_id where (t1.podcast_id in (?))
INSERT INTO mysql_query_rules (active,digest,destination_hostgroup,apply,cache_ttl) VALUES (1,'0xA8CF5BFA3D088DFA',1,1,60000);

INSERT INTO mysql_query_rules (active,digest,destination_hostgroup,apply) VALUES (1,'0x970F45504162F173',0,1);
INSERT INTO mysql_query_rules (active,digest,destination_hostgroup,apply) VALUES (1,'0x23C7F2C66F50F4A0',0,1);
INSERT INTO mysql_query_rules (active,digest,destination_hostgroup,apply) VALUES (1,'0xD2247BD720196139',0,1);
INSERT INTO mysql_query_rules (active,digest,destination_hostgroup,apply) VALUES (1,'0x3465337B476BD70F',0,1);
INSERT INTO mysql_query_rules (active,digest,destination_hostgroup,apply) VALUES (1,'0xCDC03F9ECEFE25F0',0,1);

INSERT INTO mysql_query_rules (active,match_digest,destination_hostgroup,apply) VALUES (1,'^SELECT.*FOR UPDATE',0,1), (1,'^SELECT',1,1);

INSERT INTO mysql_query_rules (rule_id,active,digest,destination_hostgroup,apply,cache_ttl) VALUES (1,1,'0x357FE2F04F7B1185',1,1,6000);
INSERT INTO mysql_query_rules (rule_id,active,digest,destination_hostgroup,apply,cache_ttl) VALUES (2,1,'0x7C0DB66C3A8F048D',1,1,6000);
INSERT INTO mysql_query_rules (rule_id,active,digest,destination_hostgroup,apply,cache_ttl) VALUES (3,1,'0x7346A6D7423B7B87',1,1,6000);
INSERT INTO mysql_query_rules (rule_id,active,digest,destination_hostgroup,apply,cache_ttl) VALUES (4,1,'0x9210A11FA3CFB6C7',1,1,6000);
INSERT INTO mysql_query_rules (rule_id,active,digest,destination_hostgroup,apply,cache_ttl) VALUES (5,1,'0x35A086C5312A7AA9',1,1,6000);
INSERT INTO mysql_query_rules (rule_id,active,digest,destination_hostgroup,apply,cache_ttl) VALUES (6,1,'0xD5B76CB799A8EB07',1,1,6000);
INSERT INTO mysql_query_rules (rule_id,active,digest,destination_hostgroup,apply,cache_ttl) VALUES (7,1,'0xCECC5BDAB513EB4A',1,1,6000);
INSERT INTO mysql_query_rules (rule_id,active,digest,destination_hostgroup,apply,cache_ttl) VALUES (8,1,'0xD0DA41F6615CBD24',1,1,6000);
INSERT INTO mysql_query_rules (rule_id,active,digest,destination_hostgroup,apply,cache_ttl) VALUES (9,1,'0xF06765D077F9D71B',1,1,6000);
INSERT INTO mysql_query_rules (rule_id,active,digest,destination_hostgroup,apply,cache_ttl) VALUES (10,1,'0x7631E5190F85279E',1,1,6000);
INSERT INTO mysql_query_rules (rule_id,active,digest,destination_hostgroup,apply,cache_ttl) VALUES (11,1,'0x55F97DEBD03DFBBD',1,1,6000);

LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;
SELECT * FROM mysql_query_rules;
SELECT * FROM stats.stats_mysql_connection_pool;
SELECT * FROM monitor.mysql_server_connect_log ORDER BY time_start_us DESC LIMIT 3;
SELECT * FROM monitor.mysql_server_ping_log ORDER BY time_start_us DESC LIMIT 3;
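
Once the template variables have been rendered, the whole file can be piped into the ProxySQL admin interface in one go; a minimal sketch (admin credentials from the proxysql.cnf above, port-forward assumed when ProxySQL runs in Kubernetes):

kubectl -n squad-rtlplus-podcast port-forward proxysql-0 6032:6032 &
mysql -h 127.0.0.1 -P 6032 -u admin -padmin < proxysql.sql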


AWS Gitlab Runner Launch Template


title: “AWS Gitlab Runner Launch Template”
date: 2022-11-24T09:36:52
slug: aws-gitlab-runner-launch-template


#!/bin/bash
# export environment variables from JSON
# these can be used by all subsequent programs
apt-get update
apt-get -y install jq
for s in $(echo '{}' | jq -r "to_entries|map(\"\(.key)=\(.value|tostring)\")|.[]" ); do
export $s
done
echo "10.98.195.195 gitlab.netrtl.com" >> /etc/hosts
curl -L https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh | bash
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
apt-get update
apt-get -y install docker-ce=5:19.03.14~3-0~ubuntu-bionic docker-ce-cli=5:19.03.14~3-0~ubuntu-bionic gitlab-runner python3-pip
pip3 install awscli==1.16.59
usermod -aG docker gitlab-runner
usermod -aG docker ubuntu
echo '#!/bin/sh' > /etc/cron.daily/docker
echo 'docker system prune --all --volumes --force' >> /etc/cron.daily/docker
chmod +x /etc/cron.daily/docker
echo '#!/bin/sh' > /etc/cron.hourly/refresh-ecr-tokens
# The central registry which sits in eu-west-1.
echo "/usr/local/bin/aws ecr get-login --no-include-email --region eu-west-1 --registry-ids 922307086101 | sh -" >> /etc/cron.hourly/refresh-ecr-tokens
# Extra registries in eu-central-1.
chmod +x /etc/cron.hourly/refresh-ecr-tokens
DOCKER_LOGIN=''
if [ -n "${DOCKER_LOGIN}" ]; then
  IFS=',' read -r -a DOCKER_LOGIN_ARR <<< "${DOCKER_LOGIN}"
  for DOCKER_LOGIN_ITEM in "${DOCKER_LOGIN_ARR[@]}"; do
    echo LOGIN ITEM: "${DOCKER_LOGIN_ITEM}"
    IFS='|' read -r -a LOGIN_ARR <<< "${DOCKER_LOGIN_ITEM}"
    if [ ${#LOGIN_ARR[@]} -eq "3" ]; then
      USER="${LOGIN_ARR[0]}"
      PASS="${LOGIN_ARR[1]}"
      REGISTRY="${LOGIN_ARR[2]}"
      docker login --username "${USER}" --password "${PASS}" "${REGISTRY}"
    else
      echo "could not parse login: ${DOCKER_LOGIN_ITEM}"
      echo "expected \"{USER}|{PASS}|{REGISTRY}\""
    fi
fi
done
fi
# Login to registries.
bash /etc/cron.hourly/refresh-ecr-tokens
gitlab-runner register --non-interactive --locked=false --url "https://gitlab.netrtl.com/" \
--registration-token "z3gsKxs4_-e79bV4keyX" --description "runner podcast-squad-embed-player-$(hostname)" --executor docker \
--tag-list "env-preprod,podcast-squad-embed-player" --docker-image "ubuntu:18.04" \
--access-level="not_protected" \
--docker-volumes /var/run/docker.sock:/var/run/docker.sock \
--docker-pull-policy always --docker-extra-hosts "gitlab.netrtl.com:10.98.195.195" \
--limit 0
if false
then
sed -i -e 's/privileged = false/privileged = true/' /etc/gitlab-runner/config.toml
fi
if false
then
sed -i -e '/\[session_server\]/a\ \ listen_address = "0.0.0.0:8093"' /etc/gitlab-runner/config.toml
fi
systemctl restart gitlab-runner
systemctl enable gitlab-runner
docker run -d --restart='always' --name=node_exporter --net='host' --pid='host' -v '/:/host:ro,rslave' quay.io/prometheus/node-exporter:v0.16.0 --path.procfs /host/proc --path.sysfs /host/sys
docker run -d --restart='always' --name=cadvisor -v '/:/rootfs:ro' -v '/var/run:/var/run:ro' -v '/sys:/sys:ro' -v '/var/lib/docker/:/var/lib/docker:ro' -v '/dev/disk/:/dev/disk:ro' -p 8080:8080 google/cadvisor:v0.32.0
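
To turn this script into an actual launch template, the user data gets base64-encoded into the template data; a minimal sketch, assuming the script is saved as userdata.sh (template name, AMI and instance type are placeholders):

# UserData must be base64-encoded inside the launch template data
aws ec2 create-launch-template --launch-template-name gitlab-runner \
  --launch-template-data "{\"ImageId\":\"ami-xxxxxxxx\",\"InstanceType\":\"m5.large\",\"UserData\":\"$(base64 -w0 userdata.sh)\"}"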