
Spamassassin support

Install SpamAssassin with spamc:

```bash
apt-get install spamassassin spamc
```

Edit the following line in /etc/postfix/master.cf
(add "-o content_filter=spamassassin"; the "-v" can be removed after a successful installation):

```text
smtp inet n - - - - smtpd -v
  -o content_filter=spamassassin
```

And insert at the end:

```text
spamassassin unix - n n - - pipe
  user=debian-spamd argv=/usr/bin/spamc -f -e /usr/sbin/sendmail -oi -f ${sender} ${recipient}
```

Then restart Postfix:

```bash
/etc/init.d/postfix restart
```

To enable spamd, add the following to /etc/default/spamassassin:

```text
ENABLED=1
SAHOME="/var/lib/spamassassin/"
OPTIONS="--create-prefs --max-children 5 --username debian-spamd --helper-home-dir ${SAHOME} -s /var/lib/spamassassin/spamd.log"
PIDFILE="${SAHOME}spamd.pid"
```

Restart SpamAssassin:

```bash
/etc/init.d/spamassassin restart
```

Check whether spamd is running:

```bash
ps aux | grep spamd
root 22759 1.3 0.3 125344 55520 ? Ss 13:26 0:01 /usr/sbin/spamd --create-prefs --max-children 5 --helper-home-dir -d --pidfile=/var/run/spamd.pid
```

The user_prefs file is located in /var/lib/spamassassin/.spamassassin; the following settings can be made there:

The score from which a mail is classified as spam:

```text
required_score 10
```

Skip spam checks for certain recipient domains:

```text
all_spam_to *@domain1.de
all_spam_to *@domain2.net
```

Blacklist certain sender domains:

```text
blacklist_from *@mxkli.com
```
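Taken together, a minimal user_prefs might look like this (the domains are the placeholder examples from above):

```text
# /var/lib/spamassassin/.spamassassin/user_prefs
required_score 10
all_spam_to *@domain1.de
all_spam_to *@domain2.net
blacklist_from *@mxkli.com
```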

Setup Packetbeat


title: "Setup Packetbeat"
date: 2024-02-08T14:26:14
slug: setup-packetbeat


Summary

This post documents an example configuration for Packetbeat. Below are the relevant excerpts from packetbeat.yml and a few test commands.

Configuration (excerpt)

```yaml
# cat /etc/packetbeat/packetbeat.yml | grep -v '#' | grep -v '^$'
packetbeat.interfaces.device: any
packetbeat.interfaces.poll_default_route: 1m
packetbeat.interfaces.internal_networks:
  - private
packetbeat.flows:
  timeout: 30s
  period: 10s
packetbeat.protocols:
- type: icmp
  enabled: false
- type: amqp
- type: cassandra
- type: dhcpv4
- type: dns
  ports: [53]
- type: http
  ports: [80, 8080, 8000, 5000, 8002]
- type: memcache
  ports: [11211]
- type: mysql
  ports: [3306, 3307]
- type: pgsql
  ports: [5432]
- type: redis
  ports: [6379]
- type: thrift
  ports: [9090]
- type: mongodb
  ports: [27017]
- type: nfs
  ports: [2049]
- type: tls
  ports:
    - 8443
- type: sip
  ports: [5060]
setup.template.settings:
  index.number_of_shards: 1
setup.dashboards.enabled: true
setup.kibana:
  host: "http://192.168.178.195:5601"
output.elasticsearch:
  hosts: ["192.168.178.195:9200"]
  preset: balanced
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - detect_mime_type:
      field: http.request.body.content
      target: http.request.mime_type
  - detect_mime_type:
      field: http.response.body.content
      target: http.response.mime_type
```
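To monitor an additional HTTP port, it can simply be appended to the ports list of the http protocol; a sketch (8088 below is just an invented example):

```yaml
- type: http
  ports: [80, 8080, 8000, 5000, 8002, 8088]
```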

Tests

```bash
packetbeat test config
packetbeat test output
packetbeat setup
```

Deploy only from default branch


title: "Deploy only from default branch"
date: 2024-01-09T10:43:05
slug: deploy-inly-from-default-branch


Summary

This snippet shows a GitLab CI rule that runs a job only when the current branch is the default branch.

Example (excerpt from .gitlab-ci.yml):

```yaml
k8s-plan-prod:
  dependencies:
    - kapitan-compile-prod
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
```
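The rule's comparison can be simulated locally in plain shell; the variable values below are made up for illustration, while in a real pipeline GitLab sets them:

```shell
# GitLab evaluates the rule before starting the job; the same comparison in shell:
CI_COMMIT_BRANCH=main
CI_DEFAULT_BRANCH=main
if [ "$CI_COMMIT_BRANCH" = "$CI_DEFAULT_BRANCH" ]; then
  result="job would run"
else
  result="job would be skipped"
fi
echo "$result"   # → job would run
```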

Git rebase onto another (main) branch


title: "Git rebase onto another (main) branch"
date: 2023-12-06T12:10:06
slug: git-reabse-mit-other-main-branch


Summary

Short procedure for rebasing a branch onto origin/main and then safely pushing the changes:

```bash
git fetch --all
git rebase origin/main
git push --force-with-lease
```
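As a minimal sketch of what the rebase does, the following throwaway-repo demo replays a feature commit on top of a moved main branch; all names and file paths are invented, and a local main stands in for origin/main:

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q && git checkout -qb main
git config user.email demo@example.com && git config user.name Demo
echo base > base.txt && git add . && git commit -qm "base"
git checkout -qb feature
echo feature > feature.txt && git add . && git commit -qm "feature work"
git checkout -q main
echo more > main.txt && git add . && git commit -qm "main moved on"
git checkout -q feature
git rebase -q main        # replay "feature work" on top of main
git log --oneline         # "feature work" is now the newest commit
```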


Update last commit


title: "Update last commit"
date: 2023-10-09T13:34:34
slug: update-last-commit


Summary

Quick commands to rework the last commit and push it afterwards:

```bash
# interactively stage changes
git add -p
# amend the last commit
git commit --amend
# and push the result safely
git push --force-with-lease
```
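A self-contained sketch of the amend step in a throwaway repo (the file name is invented; the push is omitted since there is no remote):

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q && git checkout -qb main
git config user.email demo@example.com && git config user.name Demo
echo v1 > notes.txt && git add . && git commit -qm "add notes"
# A follow-up change is folded into the previous commit instead of a new one:
echo v2 > notes.txt
git add notes.txt
git commit -q --amend --no-edit
git log --oneline    # still a single commit, now containing v2
```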

OGST | odroid Gamestation Turbo


title: "OGST | odroid Gamestation Turbo"
date: 2023-08-09T10:35:09
slug: ogst-odroid-gamestation-turbo


Summary

Notes on the archived Debian Jessie package sources and an example apt configuration that disables Check-Valid-Until.

Sources (example):

```text
cat /etc/apt/sources.list
deb http://archive.debian.org/debian jessie main contrib non-free
deb http://archive.debian.org/debian jessie-backports main contrib non-free
deb http://archive.debian.org/debian-security/ jessie/updates main non-free contrib

cat /etc/apt/apt.conf.d/10-nocheckvalid
Acquire::Check-Valid-Until "false";

cat /etc/apt/sources.list.d/deb-multimedia.list
deb [check-valid-until=no] https://archive.deb-multimedia.org jessie main non-free

# Example: disabling peer verification (only as a note, not recommended):
Acquire::https::archive.deb-multimedia.org::Verify-Peer "false";
Acquire::https::oph.mdrjr.net::Verify-Peer "false";
```
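Instead of the global apt.conf switch, the check-valid-until option can also be scoped per repository, as the deb-multimedia line above already does; applied to the archive lines it would look like this (a sketch, not verified on every apt version):

```text
deb [check-valid-until=no] http://archive.debian.org/debian jessie main contrib non-free
deb [check-valid-until=no] http://archive.debian.org/debian jessie-backports main contrib non-free
deb [check-valid-until=no] http://archive.debian.org/debian-security/ jessie/updates main non-free contrib
```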

git rebase (only one commit)


title: "git rebase (only one commit)"
date: 2023-07-04T13:08:43
slug: git-rebase-nur-ein-commit


```text
1381  Jul 04 14:59:06 git status
1382  Jul 04 14:59:20 git add refs/sandbox/byok/g11r-sandbox-ebs
1383  Jul 04 15:00:02 git commit --fixup HEAD^
1384  Jul 04 15:00:27 git rebase -i main
1385  Jul 04 15:02:09 git rebase -i main (move the last commit to the top)
1386  Jul 04 15:03:35 git log
1387  Jul 04 15:04:20 git push -f
1388  Jul 04 15:05:08 history
```

Visual Studio:

```bash
git rebase -i origin/main
```

Oldest commit: pick
All others: fixup

Then:

```bash
git push --force-with-lease
```

The commit message can still be adjusted afterwards by rebasing the remaining commit with "reword".
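The fixup-and-squash flow can be sketched end to end in a throwaway repo; all names are invented, GIT_SEQUENCE_EDITOR=true accepts the generated todo list so the interactive rebase runs unattended, and --autosquash moves the fixup commit into place:

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q && git checkout -qb main
git config user.email demo@example.com && git config user.name Demo
echo a > a.txt && git add . && git commit -qm "first"
git checkout -qb topic
echo b > b.txt && git add . && git commit -qm "feature"
echo fix >> b.txt && git add b.txt
git commit -q --fixup HEAD                          # creates "fixup! feature"
GIT_SEQUENCE_EDITOR=true git rebase -i --autosquash -q main
git log --oneline       # topic is down to a single commit on top of main
```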

VictoriaMetrics


title: "VictoriaMetrics"
date: 2023-06-01T14:30:08
slug: victoriametrics


What makes VictoriaMetrics the next leading choice for open-source monitoring

Amit Karni, published in Everything Full Stack, May 10, 2022

During the last years, the de-facto standard choice for open-source monitoring has been the Prometheus stack combined with Grafana, Alertmanager, and various types of exporters.
At the time, it was a pretty decent stack.
Today, though, in a fast-growing ecosystem, it has problems.

Recently, I was asked to review, design, and deploy a monitoring solution to switch or extend the current Prometheus stack.
The solution should be high-performance, highly available, cheap, redundant, scalable, easy to back up, and able to store data with long retention.

After researching a few solutions, namely Thanos, Cortex, Grafana Mimir, and VictoriaMetrics, it is clear to say that, in my opinion, VictoriaMetrics is the winner and the best fit for my purposes and needs.

Why VictoriaMetrics?

While Thanos, Cortex, and Grafana Mimir are designed to extend the old Prometheus stack with HA and long-term storage capabilities, VictoriaMetrics takes the Prometheus stack and breaks it into a micro-services architecture using stronger and better new components.

It has high availability built in, as well as superior performance and data compression compared to the Prometheus stack. Scaling is very easy since every component is separate and most of the components are stateless, which means it can be designed to run on spot nodes and reduce costs.

VictoriaMetrics cluster Architecture

VictoriaMetrics can be deployed as a single server or as a cluster version; I chose to deploy the VictoriaMetrics cluster on k8s (using Helm charts).

  • vmstorage: stores the raw data and returns the queried data in the given time range for the given label filters. This is the only stateful component in the cluster.
  • vminsert: accepts the ingested data and spreads it among vmstorage nodes according to consistent hashing over the metric name and all its labels.
  • vmselect: performs incoming queries by fetching the needed data from all the configured vmstorage nodes.
  • vmauth: a simple auth proxy and router for the cluster. It reads auth credentials from the Authorization HTTP header (Basic Auth, Bearer token, and InfluxDB authorization are supported), matches them against configs, and proxies incoming HTTP requests to the configured targets.
  • vmagent: a tiny but mighty agent which helps you collect metrics from various sources and store them in VictoriaMetrics or any other Prometheus-compatible storage system that supports the remote_write protocol.
  • vmalert: executes a list of the given alerting or recording rules against configured data sources. For sending alerting notifications, vmalert relies on a configured Alertmanager. Recording rule results are persisted via the remote write protocol. vmalert is heavily inspired by the Prometheus implementation and aims to be compatible with its syntax.
  • promxy: used for querying the data from multiple clusters. It's a Prometheus proxy that makes many shards of Prometheus appear as a single API endpoint to the user.

Cluster resizing and scalability

Cluster performance and capacity can be scaled up in two ways:

  • By adding more resources (CPU, RAM, disk IO, disk space, etc.), AKA vertical scalability.
  • By adding more of each component to the cluster, AKA horizontal scalability.

All components can be scaled individually; the only stateful one is vmstorage.
Therefore, clusters are easier to maintain and scale.
Adding new vmstorage nodes and updating the vminsert configuration is all it takes to scale the storage layer; nothing else is needed.

Built-in High Availability

By using the clustered version of VictoriaMetrics, redundancy and auto-healing are built into each component.
Even when some cluster components are temporarily unavailable, the system can continue to accept new incoming data and process new queries.

  • vminsert re-routes incoming data from unavailable vmstorage nodes to healthy vmstorage nodes.

Additionally, data is replicated across multiple nodes within the cluster, which makes it redundant. The cluster remains available as long as at least a single vmstorage node exists.

Disaster Recovery best-practice

For better cluster performance, VictoriaMetrics recommends that all components run within the same subnet network (same availability zone) for high bandwidth and low latency.

To achieve DR following VictoriaMetrics best practice, we can run multiple clusters in different AZs or regions, each AZ or region having its own cluster.
It is necessary to configure vmagent to send data to all clusters.

In the event of an entire AZ/region going down, Route53 failover and/or Promxy failover can still be used to read from and write to the other online clusters in another AZ/region.
As soon as the AZ/region is online again, vmagent will send its cached data back into that cluster.
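The fan-out can be sketched as repeated -remoteWrite.url flags on vmagent, one per cluster; the hostnames and the scrape-config path below are placeholders:

```text
vmagent -promscrape.config=/etc/vmagent/scrape.yml \
  -remoteWrite.url=http://vminsert-az1:8480/insert/0/prometheus/ \
  -remoteWrite.url=http://vminsert-az2:8480/insert/0/prometheus/
```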

Backup & Restore

vmbackup creates VictoriaMetrics data backups from instant snapshots.

Supported storage systems for backups:

  • GCS. Example: gs://<bucket>/<path/to/backup>
  • S3. Example: s3://<bucket>/<path/to/backup>
  • Any S3-compatible storage such as MinIO, Ceph, or Swift.
  • Local filesystem. Example: fs://</absolute/path/to/backup>. Note that vmbackup prevents storing the backup in the directory pointed to by the -storageDataPath command-line flag, since this directory should be managed solely by VictoriaMetrics or vmstorage.

vmbackup supports incremental and full backups. Incremental backups are created automatically if the destination path already contains data from the previous backup.
Full backups can be sped up with -origin pointing to an already existing backup on the same remote storage. In this case vmbackup makes a server-side copy of the shared data between the existing backup and the new backup. It saves time and costs on data transfer.

The backup process can be interrupted at any time; it automatically resumes from the interruption point when vmbackup is restarted with the same arguments.

Backed-up data can be restored with vmrestore.
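A hedged sketch of the backup/restore pair; the bucket path keeps the placeholders from the list above, the data path is an assumption, and -snapshot.createURL points at the vmstorage HTTP endpoint:

```text
vmbackup -storageDataPath=/var/lib/vmstorage \
  -snapshot.createURL=http://localhost:8482/snapshot/create \
  -dst=s3://<bucket>/<path/to/backup>

vmrestore -src=s3://<bucket>/<path/to/backup> \
  -storageDataPath=/var/lib/vmstorage
```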

Summarizing

In this post, I share the features that I find most interesting, but there are many more that may be of interest to others (multitenancy, for example).

The VictoriaMetrics team has done an amazing job redesigning a monitoring tool that follows the commonly used Prometheus-stack monitoring platform but builds in the changes that are appropriate and necessary using micro-services components.

VictoriaMetrics is a fast and scalable open-source time-series database and monitoring solution that lets users build a monitoring platform without scalability issues and with minimal operational burden.

assume role with the aws cli


title: "assume role with the aws cli"
date: 2023-05-10T08:13:41
slug: assume-role-mit-aws-cli


The role to assume: arn:aws:iam::224945782113:role/SystemsSquad

```bash
aws sts assume-role --role-arn arn:aws:iam::224945782113:role/SystemsSquad --role-session-name tasanger-test
```

Output:

```json
{
  "Credentials": {
    "AccessKeyId": "ASIATIX6QHVQ6M4KNHHD",
    "SecretAccessKey": "Gg6aYXn1Z0MGUmdewe/niVAy/Y6m6uI8pGZEkSbD",
    "SessionToken": "IQoJb3JpZ2luX2VjEHAaDGV1LWNlbnRyYWwtMSJHMEUCIGHsS/akNvXCeB4474tbe1zzvPkxVV33HNyyR9RS8bbPAiEAyj/g8Y4UprtqsTllv1xMK+aikB75CFiNFzWzUxExT5YqowIIif//////////ARADGgwyMjQ5NDU3ODIxMTMiDMQW2hW9h8TCzhzwbCr3ASMdjECSqBaBtDW/kk4RAMM3EC9EExhMa1KlP5+gk5Xh3si7J3shISRp95sMQM8m3xX3R6k1n/84V7LS4HtcnHlPa3piJp6vVL5mPFkGeT9k3hr66ueP1j8olD5khiZRAaMZP36FVk9/cgFXl00jPjIRTl0ZMj0VlFJHNCur8cwSH7y0Xs6cNhPFHBCu0rBQJHCOElphkDiQqaBMwNHTPUXnXL7YgFGnAaf7AwkyjAuQRrJ3/yruuJe2bb8XBlwYuL097m08IRdCcnJ5V3uNxfiGO2pJaizXKWcnAZJSg1gCtXZkTWpgd264rm+tOOQPj0MWyhWDIWQwuqPtogY6nQHZwRbTptHIsl/SnGWfRd+P0sBsyr2noRx//9S/UwrXNUh2wqzG2hX9LuPr9kXhj+pQRcWN8wSkWQAeCSOUUi8pD9gtfmbCNQopkZhly89IDBXq19UYFVCMRlKjC+yHcQBM6mpH++aNuRI5630EyzKFOYR6FNJ5OCiNrXHx2DAbfkXTIDObZa656j8wHLQJ4I2+ugUubgCkPbbALJfX",
    "Expiration": "2023-05-10T09:11:38+00:00"
  },
  "AssumedRoleUser": {
    "AssumedRoleId": "AROATIX6QHVQXONL4U6E2:tasanger-test",
    "Arn": "arn:aws:sts::224945782113:assumed-role/SystemsSquad/tasanger-test"
  }
}
```

Add to ~/.aws/credentials:

```text
[tasanger-test]
aws_access_key_id=ASIATIX6QHVQ6M4KNHHD
aws_secret_access_key=Gg6aYXn1Z0MGUmdewe/niVAy/Y6m6uI8pGZEkSbD
aws_session_token=IQoJb3JpZ2luX2VjEHAaDGV1LWNlbnRyYWwtMSJHMEUCIGHsS/akNvXCeB4474tbe1zzvPkxVV33HNyyR9RS8bbPAiEAyj/g8Y4UprtqsTllv1xMK+aikB75CFiNFzWzUxExT5YqowIIif//////////ARADGgwyMjQ5NDU3ODIxMTMiDMQW2hW9h8TCzhzwbCr3ASMdjECSqBaBtDW/kk4RAMM3EC9EExhMa1KlP5+gk5Xh3si7J3shISRp95sMQM8m3xX3R6k1n/84V7LS4HtcnHlPa3piJp6vVL5mPFkGeT9k3hr66ueP1j8olD5khiZRAaMZP36FVk9/cgFXl00jPjIRTl0ZMj0VlFJHNCur8cwSH7y0Xs6cNhPFHBCu0rBQJHCOElphkDiQqaBMwNHTPUXnXL7YgFGnAaf7AwkyjAuQRrJ3/yruuJe2bb8XBlwYuL097m08IRdCcnJ5V3uNxfiGO2pJaizXKWcnAZJSg1gCtXZkTWpgd264rm+tOOQPj0MWyhWDIWQwuqPtogY6nQHZwRbTptHIsl/SnGWfRd+P0sBsyr2noRx//9S/UwrXNUh2wqzG2hX9LuPr9kXhj+pQRcWN8wSkWQAeCSOUUi8pD9gtfmbCNQopkZhly89IDBXq19UYFVCMRlKjC+yHcQBM6mpH++aNuRI5630EyzKFOYR6FNJ5OCiNrXHx2DAbfkXTIDObZa656j8wHLQJ4I2+ugUubgCkPbbALJfX
aws_expiration=2023-05-10T17:05:00.000Z
```

Create a temporary profile in ~/.aws/config:

```text
[profile tasanger-test]
region=eu-central-1
```

Call the aws cli with the temporary profile:

```bash
aws --profile tasanger-test cloudwatch list-metrics --namespace AWS/WAFV2
```
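As a hedged convenience sketch, the credential fields can also be pulled out of the assume-role JSON with jq and exported as environment variables; the sample JSON below uses shortened dummy values, and jq is assumed to be installed:

```shell
# Dummy assume-role output (values shortened for the example)
json='{"Credentials":{"AccessKeyId":"ASIAEXAMPLE","SecretAccessKey":"secretEXAMPLE","SessionToken":"tokenEXAMPLE"}}'

# In real use: json=$(aws sts assume-role --role-arn ... --role-session-name ...)
export AWS_ACCESS_KEY_ID=$(echo "$json" | jq -r .Credentials.AccessKeyId)
export AWS_SECRET_ACCESS_KEY=$(echo "$json" | jq -r .Credentials.SecretAccessKey)
export AWS_SESSION_TOKEN=$(echo "$json" | jq -r .Credentials.SessionToken)
echo "$AWS_ACCESS_KEY_ID"   # → ASIAEXAMPLE
```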