
How do you create the first user in Cassandra DB


title: “How do you create the first user in Cassandra DB”
date: 2016-09-27T08:06:01
slug: how-do-you-create-the-first-user-in-cassandra-db


You need to enable the PasswordAuthenticator in the cassandra.yaml file. To do that, change the authenticator property in cassandra.yaml.

Change

authenticator: AllowAllAuthenticator

to

authenticator: PasswordAuthenticator
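The same edit can be scripted with sed. A sketch run against a throwaway copy; the real file usually lives at /etc/cassandra/cassandra.yaml (the path varies by distribution):

```shell
# Demonstrate the edit on a throwaway copy of cassandra.yaml
printf 'authenticator: AllowAllAuthenticator\n' > /tmp/cassandra.yaml
# Flip AllowAllAuthenticator to PasswordAuthenticator in place
sed -i 's/^authenticator: AllowAllAuthenticator$/authenticator: PasswordAuthenticator/' /tmp/cassandra.yaml
# Show the result
grep '^authenticator:' /tmp/cassandra.yaml
```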

After that, log in with the following command; you will then be able to add a new user:

cqlsh -u cassandra -p cassandra

Once you get in, your first task should be to create another super user account.

CREATE USER dba WITH PASSWORD 'bacon' SUPERUSER;

Next, it is a really good idea to set the default Cassandra superuser's password to something else, preferably something long and incomprehensible. With your new superuser, you shouldn't need the default cassandra account again.

ALTER USER cassandra WITH PASSWORD 'dfsso67347mething54747long67a7ndincom4574prehensi';

Reset Cassandra superuser password


title: “Reset Cassandra superuser password”
date: 2016-09-27T08:03:37
slug: reset-cassandra-superuser-password


Turn off authorisation and authentication:

edit cassandra.yaml and set the following:

authenticator: AllowAllAuthenticator
authorizer: AllowAllAuthorizer

Bounce Cassandra:

service cassandra restart

Fire up the CLI:

cqlsh

At this point you need to identify your superusers. If you were a good girl/boy, you would have set up a fresh superuser and dumped the default cassandra user.

list users;

 name      | super
-----------+-------
 cassandra | False
 myadmin   | True

As you can see, I’ve taken super privs away from the default superuser cassandra and created my own called myadmin as per the recommendations of the docs.

Now, depending on how many nodes and data centers you have, the system_auth keyspace (and specifically the credentials column family) is likely replicated on other nodes. You can update this table directly; because it is replicated, this saves you the hassle of visiting every node in your cluster and resetting authentication as above.

Type in the following (the salted_hash value below is widely shared in reset guides as the bcrypt hash of the string "cassandra", which is why you can log back in with that password afterwards):

update system_auth.credentials set salted_hash='$2a$10$vbfmLdkQdUz3Rmw.fF7Ygu6GuphqHndpJKTvElqAciUJ4SZ3pwquu' where username='myadmin';

Revert the cassandra.yaml:

authenticator: PasswordAuthenticator
authorizer: CassandraAuthorizer

and restart

service cassandra restart

Now you can log in with:

cqlsh -u myadmin -p cassandra

Once logged in, reset your password to something less obvious.

Kubernetes Master on CoreOS


title: “Kubernetes Master on CoreOS”
date: 2016-09-20T08:31:55
slug: kubernetes-master-on-coreos


vi /etc/sysconfig/network-scripts/ifcfg-eth1

DEVICE="eth1"
BOOTPROTO=none
ONBOOT=yes
NETWORK=172.31.4.0
NETMASK=255.255.252.0
IPADDR=172.31.4.5
USERCTL=no

mkdir /etc/ssl/kubernetes
scp root@hostingvalley.de:/root/certs/apiserver.pem /etc/ssl/kubernetes
scp root@hostingvalley.de:/root/certs/apiserver-key.pem /etc/ssl/kubernetes
scp root@hostingvalley.de:/root/certs/ca.pem /etc/ssl/kubernetes

Install etcd and Kubernetes through yum:

yum -y install etcd kubernetes

/etc/etcd/etcd.conf

ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"

/etc/kubernetes/apiserver

KUBE_API_ADDRESS="--address=0.0.0.0"
KUBE_API_PORT="--port=8080"
KUBELET_PORT="--kubelet_port=10250"
KUBE_ETCD_SERVERS="--etcd_servers=http://127.0.0.1:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
KUBE_API_ARGS="--tls-cert-file=/etc/ssl/kubernetes/apiserver.pem --tls-private-key-file=/etc/ssl/kubernetes/apiserver-key.pem --secure-port=443"

vi /etc/kubernetes/controller-manager

KUBE_CONTROLLER_MANAGER_ARGS="--service-account-private-key-file=/etc/ssl/kubernetes/apiserver-key.pem --root-ca-file=/etc/ssl/kubernetes/ca.pem"

Start and enable etcd, kube-apiserver, kube-controller-manager and kube-scheduler:

for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
 systemctl restart $SERVICES
 systemctl enable $SERVICES
 systemctl status $SERVICES
done
etcdctl mk /coreos.com/network/config '{"Network":"10.2.0.0/16"}'
curl -H "Content-Type: application/json" -XPOST -d'{"apiVersion":"v1","kind":"Namespace","metadata":{"name":"kube-system"}}' "http://127.0.0.1:8080/api/v1/namespaces"
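The namespace JSON from the curl call above can be sanity-checked locally before POSTing it, no cluster needed. A small sketch:

```shell
# Write the same namespace manifest to a temp file
cat <<'EOF' > /tmp/kube-system-ns.json
{"apiVersion":"v1","kind":"Namespace","metadata":{"name":"kube-system"}}
EOF
# Validate and pretty-print it; a syntax error would make this fail
python3 -m json.tool /tmp/kube-system-ns.json
```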

Test Postfix SSL Certificate


title: “Test Postfix SSL Certificate”
date: 2016-08-26T14:40:55
slug: test-postfox-ssl-certificate


openssl s_client -connect mx.test.net:465

Note: port 465 speaks TLS from the start, so no -starttls is needed there; use -starttls smtp -crlf when testing a STARTTLS port such as 25 or 587.

With a CA file (ca.pem is a file, so -CAfile rather than the directory option -CApath):

openssl s_client -CAfile cert_kubernetes_new/ca.pem -connect mx.test.net:465
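If no mail server is reachable, the certificate-inspection side can still be exercised locally. A sketch that generates a throwaway self-signed cert (the CN mx.test.net is just the hostname from above) and prints its subject and validity window:

```shell
# Generate a one-day self-signed cert for illustration
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj '/CN=mx.test.net' \
  -keyout /tmp/smtp-key.pem -out /tmp/smtp-cert.pem 2>/dev/null
# Inspect subject and notBefore/notAfter dates, as you would for a
# cert fetched via s_client
openssl x509 -in /tmp/smtp-cert.pem -noout -subject -dates
```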

Relayhost with only one Relay


title: “Relayhost with only one Relay”
date: 2016-08-25T19:39:32
slug: relayhost-with-only-one-relay


Setup Relay Host Port and SMTP Authentication Client in Postfix


This setup routes all outgoing email through your ISP's SMTP server on a non-standard port, where that SMTP server requires authentication before relaying. In this scenario, the ISP's SMTP server is Exim.

  1. Edit /etc/postfix/main.cf and add relayhost pointing to your ISP's SMTP server, including the port number:

relayhost = mail.example.com:2525

  2. Add the next parameter so Postfix authenticates before relaying outgoing email:

smtp_sasl_auth_enable = yes

  3. Add the following line to map the authentication information:

smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd

  4. Then, add the next parameter to force Postfix to use AUTH LOGIN:

smtp_sasl_mechanism_filter = login

  5. Create the file /etc/postfix/sasl_passwd with the credentials. The lookup key should match your relayhost value exactly, including the port:

mail.example.com:2525 username@example.com:password

  6. Next, run the following command to create the lookup table:

postmap /etc/postfix/sasl_passwd

  7. Then, restart your Postfix service:

service postfix restart
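Since sasl_passwd holds a plaintext password, it should be readable by root only. A sketch of creating it with safe permissions, using a /tmp path for illustration:

```shell
# Create an empty file with mode 600 before writing the secret into it
install -m 600 /dev/null /tmp/sasl_passwd
# Key matches the relayhost value (host:port), then user:password
printf 'mail.example.com:2525 username@example.com:password\n' >> /tmp/sasl_passwd
# Confirm owner-only permissions
ls -l /tmp/sasl_passwd
# On the real mail host you would then run: postmap /etc/postfix/sasl_passwd
```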

Relayhost with authentication


title: “Relayhost with authentication”
date: 2016-08-18T15:11:43
slug: relayhost-with-authentication


vi /etc/postfix/sender_relay

support@domainvalley.de [smtprelaypool.ispgateway.de]

vi /etc/postfix/sasl_passwd

@domainvalley.de support@domainvalley.de:PASSWORD

postmap /etc/postfix/sender_relay
postmap /etc/postfix/sasl_passwd

/etc/postfix/main.cf

smtp_sender_dependent_authentication = yes
sender_dependent_relayhost_maps = hash:/etc/postfix/sender_relay
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_auth_enable = yes
smtp_sasl_security_options = noanonymous
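Both maps are simple two-field key/value files, which makes a quick format check easy before running postmap. A sketch using throwaway /tmp copies:

```shell
# Recreate the two map files locally for illustration
printf 'support@domainvalley.de [smtprelaypool.ispgateway.de]\n' > /tmp/sender_relay
printf '@domainvalley.de support@domainvalley.de:PASSWORD\n' > /tmp/sasl_passwd.map
# Every entry must have exactly two whitespace-separated fields
awk 'NF != 2 { bad = 1 } END { print (bad ? "bad" : "ok") }' /tmp/sender_relay /tmp/sasl_passwd.map
```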

Persistent Storage


title: “Persistent Storage”
date: 2016-03-27T15:41:45
slug: persistent-storage


Attach a Host Directory to the Container

Create a volume container:

docker create -v /dbdata --name dbstore ubuntu /bin/true

Start a container with the volume container attached:

docker run -d --volumes-from dbstore --name ubuntu ubuntu

Start a second container sharing the same volume container:

docker run -d --volumes-from dbstore --name ubuntu2 ubuntu

You can pass multiple --volumes-from parameters.

Make a backup of the volume:

docker run --rm --volumes-from ubuntu -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata

Restore the backup into a container:

  1. Create a container with /dbdata as a storage point:

docker run -v /dbdata --name dbstore2 ubuntu /bin/bash

  2. Mount the /dbdata storage point into a new container, mount the local directory (which contains backup.tar) to /backup inside the container, and extract the archive into /dbdata:

docker run --rm --volumes-from dbstore2 -v $(pwd):/backup ubuntu bash -c "cd /dbdata && tar xvf /backup/backup.tar --strip 1"
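The backup/restore pair above is just tar at heart; the same round trip can be sketched without Docker (illustrative /tmp paths):

```shell
# Set up a source directory and a restore target
mkdir -p /tmp/dbdata /tmp/restore
echo 'some data' > /tmp/dbdata/file.txt
# Back up: archive the dbdata directory (paths stored as dbdata/...)
tar -C /tmp -cf /tmp/backup.tar dbdata
# Restore: strip the leading "dbdata/" component, as the docker
# restore command does with --strip 1
tar -C /tmp/restore -xf /tmp/backup.tar --strip-components=1
cat /tmp/restore/file.txt
```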