title: “Change database collation”
date: 2016-10-06T11:53:09
slug: change-database-collation
ALTER DATABASE dbname CHARACTER SET utf8 COLLATE utf8_unicode_ci;
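Worth noting: ALTER DATABASE only changes the default for tables created afterwards; existing tables keep their old collation and need an ALTER TABLE each. A minimal shell sketch (dbname and the table names are placeholders) that generates the per-table conversion statements:

```shell
# ALTER DATABASE only sets the default for *new* tables; existing
# tables keep their old collation. This helper builds the per-table
# ALTER TABLE ... CONVERT TO statements from a list of table names.
# (dbname and the credentials are placeholders.)
build_convert_statements() {
  local db="$1"; shift
  for table in "$@"; do
    printf 'ALTER TABLE `%s`.`%s` CONVERT TO CHARACTER SET utf8 COLLATE utf8_unicode_ci;\n' "$db" "$table"
  done
}

# Typical usage against a live server (commented out):
# mysql -N -e "SHOW TABLES" dbname | xargs build_convert_statements dbname | mysql dbname
build_convert_statements dbname users posts
```

Piping the output of SHOW TABLES through this saves typing the ALTER TABLE lines by hand.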
title: “How do you create the first user in Cassandra DB”
date: 2016-09-27T08:06:01
slug: how-do-you-create-the-first-user-in-cassandra-db
You need to enable PasswordAuthenticator in the cassandra.yaml file by changing the authenticator property.
Change
authenticator: AllowAllAuthenticator
to
authenticator: PasswordAuthenticator
After that, log in with the following command and you will be able to add new users:
cqlsh -u cassandra -p cassandra
Once you get in, your first task should be to create another super user account.
CREATE USER dba WITH PASSWORD 'bacon' SUPERUSER;
Next, it is a really good idea to set the default Cassandra superuser’s password to something else, preferably something long and incomprehensible. With your new superuser, you shouldn’t need the default cassandra account again.
ALTER USER cassandra WITH PASSWORD 'dfsso67347mething54747long67a7ndincom4574prehensi';
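The yaml edit above can be scripted. A sketch using sed on a throwaway copy; on a real node, point CONF at the actual cassandra.yaml (commonly /etc/cassandra/cassandra.yaml) and restart Cassandra afterwards:

```shell
# Flip the authenticator line in cassandra.yaml non-interactively.
# Demonstrated on a temp copy so nothing real is touched.
CONF=$(mktemp)
printf 'authenticator: AllowAllAuthenticator\nauthorizer: AllowAllAuthorizer\n' > "$CONF"

sed -i 's/^authenticator: AllowAllAuthenticator/authenticator: PasswordAuthenticator/' "$CONF"

grep '^authenticator:' "$CONF"
```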
title: “reset Cassandra superuser password”
date: 2016-09-27T08:03:37
slug: reset-cassandra-superuser-password
Turn off authorisation and authentication:
edit cassandra.yaml and set the following:
authenticator: AllowAllAuthenticator
authorizer: AllowAllAuthorizer
bounce cassandra
service cassandra restart
Fire up the cli
cqlsh
At this point you need to identify your superusers. If you were a good girl/boy, you would have set up a fresh superuser and dumped the default cassandra user.
list users;
name | super
-----------+-------
cassandra | False
myadmin | True
As you can see, I’ve taken super privs away from the default superuser cassandra and created my own called myadmin as per the recommendations of the docs.
Depending on how many nodes and data centers you have, the system_auth keyspace, and specifically the credentials column family, is likely replicated to other nodes. You need to manually update this table to get back into shape, as this saves you the hassle of visiting every node in your cluster and resetting authentication as above.
Type in the following:
update system_auth.credentials set salted_hash='$2a$10$vbfmLdkQdUz3Rmw.fF7Ygu6GuphqHndpJKTvElqAciUJ4SZ3pwquu' where username='myadmin';
Revert the cassandra.yaml:
authenticator: PasswordAuthenticator
authorizer: CassandraAuthorizer
and restart
service cassandra restart
Now you can log in with:
cqlsh -u myadmin -p cassandra
Once logged in, reset your password to something less obvious.
title: “Kubernetes Master on CoreOS”
date: 2016-09-20T08:31:55
slug: kubernetes-master-on-coreos
vi /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE="eth1"
BOOTPROTO=none
ONBOOT=yes
NETWORK=172.31.4.0
NETMASK=255.255.252.0
IPADDR=172.31.4.5
USERCTL=no
mkdir /etc/ssl/kubernetes
scp root@hostingvalley.de:/root/certs/apiserver.pem /etc/ssl/kubernetes
scp root@hostingvalley.de:/root/certs/apiserver-key.pem /etc/ssl/kubernetes
scp root@hostingvalley.de:/root/certs/ca.pem /etc/ssl/kubernetes
Install etcd and Kubernetes through yum:
yum -y install etcd kubernetes
/etc/etcd/etcd.conf
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"
/etc/kubernetes/apiserver
KUBE_API_ADDRESS="--address=0.0.0.0"
KUBE_API_PORT="--port=8080"
KUBELET_PORT="--kubelet_port=10250"
KUBE_ETCD_SERVERS="--etcd_servers=http://127.0.0.1:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
KUBE_API_ARGS="--tls-cert-file='/etc/ssl/kubernetes/apiserver.pem' --tls-private-key-file='/etc/ssl/kubernetes/apiserver-key.pem' --secure-port=443"
vi /etc/kubernetes/controller-manager
KUBE_CONTROLLER_MANAGER_ARGS="--service-account-private-key-file=/etc/ssl/kubernetes/apiserver-key.pem --root-ca-file=/etc/ssl/kubernetes/ca.pem"
Start and enable etcd, kube-apiserver, kube-controller-manager and kube-scheduler:
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
systemctl restart $SERVICES
systemctl enable $SERVICES
systemctl status $SERVICES
done
etcdctl mk /coreos.com/network/config '{"Network":"10.2.0.0/16"}'
curl -H "Content-Type: application/json" -XPOST -d'{"apiVersion":"v1","kind":"Namespace","metadata":{"name":"kube-system"}}' "http://127.0.0.1:8080/api/v1/namespaces"
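The namespace request can be staged in a file and validated before it is sent. A sketch that assumes only python3 is available for the JSON check; the POST itself is commented out since it needs a running apiserver:

```shell
# Build the kube-system namespace manifest in a temp file and
# sanity-check it with python3 -m json.tool before POSTing it.
NS_JSON=$(mktemp)
cat > "$NS_JSON" <<'EOF'
{"apiVersion": "v1", "kind": "Namespace", "metadata": {"name": "kube-system"}}
EOF

python3 -m json.tool "$NS_JSON" > /dev/null && echo "manifest is valid JSON"

# Then submit it (needs the apiserver listening on :8080):
# curl -H "Content-Type: application/json" -XPOST -d @"$NS_JSON" http://127.0.0.1:8080/api/v1/namespaces
```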
title: “Test Postfix SSL Certificate”
date: 2016-08-26T14:40:55
slug: test-postfox-ssl-certificate
openssl s_client -starttls smtp -crlf -connect mx.test.net:465
(Note: port 465 normally speaks TLS from the start; -starttls smtp is for ports 25/587, so drop it when testing 465.)
With CA file:
openssl s_client -CAfile cert_kubernetes_new/ca.pem -starttls smtp -crlf -connect mx.test.net:465
title: “Relayhost with only one Relay”
date: 2016-08-25T19:39:32
slug: relayhost-with-only-one-relay
Originally posted June 21, 2008 by Wingloon.
This setup will help you to route all outgoing email through your ISP SMTP server using different port number and that SMTP server requires you to authenticate before relaying. For this scenario, the ISP SMTP server is Exim.
Set relayhost in /etc/postfix/main.cf to point to your ISP SMTP server with the port number, as below:
relayhost = mail.example.com:2525
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_mechanism_filter = login
In /etc/postfix/sasl_passwd:
mail.example.com username@example.com:password
postmap /etc/postfix/sasl_passwd
service postfix restart
title: “Relayhost with authentication”
date: 2016-08-18T15:11:43
slug: relayhost-with-authentication
vi /etc/postfix/sender_relay
support@domainvalley.de [smtprelaypool.ispgateway.de]
vi /etc/postfix/sasl_passwd
@domainvalley.de support@domainvalley.de:PASSWORD
postmap /etc/postfix/sender_relay
postmap /etc/postfix/sasl_passwd
/etc/postfix/main.cf
smtp_sender_dependent_authentication = yes
sender_dependent_relayhost_maps = hash:/etc/postfix/sender_relay
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_auth_enable = yes
smtp_sasl_security_options = noanonymous
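Both map files are plain whitespace-separated key/value pairs. A sketch that builds them in temp files and sanity-checks that format before indexing; the values are the placeholders from above, and the postmap/restart step is commented out since it needs a Postfix install:

```shell
# Build the two Postfix map files in temp locations and check that
# every line has exactly two whitespace-separated fields.
RELAY=$(mktemp)
PASSWD=$(mktemp)
echo 'support@domainvalley.de [smtprelaypool.ispgateway.de]' > "$RELAY"
echo '@domainvalley.de support@domainvalley.de:PASSWORD' > "$PASSWD"

for f in "$RELAY" "$PASSWD"; do
  awk 'NF != 2 { bad = 1 } END { exit bad }' "$f" && echo "$f: format ok"
done

# On a real system, index the maps and reload Postfix:
# postmap "$RELAY" && postmap "$PASSWD" && service postfix restart
```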
title: “invoke-rc.d: policy-rc.d denied execution of start.”
date: 2016-04-22T09:09:23
slug: invoke-rc-d-policy-rc-d-denied-execution-of-start
invoke-rc.d: policy-rc.d denied execution of start.
Solution: edit the contents of /usr/sbin/policy-rc.d to replace exit 101 with exit 0.
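The edit can be done with a one-liner. A sketch demonstrated on a throwaway copy rather than the real /usr/sbin/policy-rc.d:

```shell
# Replace "exit 101" (deny) with "exit 0" (allow) in policy-rc.d.
# Demonstrated on a temp copy; on a real system the target is
# /usr/sbin/policy-rc.d.
POLICY=$(mktemp)
printf '#!/bin/sh\nexit 101\n' > "$POLICY"

sed -i 's/^exit 101$/exit 0/' "$POLICY"

sh "$POLICY"; echo "exit code: $?"   # exit code: 0
```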
title: “tar over ssh”
date: 2016-04-15T12:30:04
slug: tar-uber-ssh
Absolute directory
tar zcPf - /verzeichnis | ssh root@192.168.136.121 'tar zxpPf - '
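The ssh pipe streams a tar archive from one host to the other. The same pattern can be tried locally; this sketch archives with absolute paths (-P keeps the leading slash, as in the command above) but extracts without it, so the leading / is stripped and nothing outside the throwaway target directory is touched:

```shell
# Local stand-in for: tar zcPf - /verzeichnis | ssh host 'tar zxpPf -'
# Uses temp dirs so the demo is self-contained and safe.
SRC=$(mktemp -d)
DST=$(mktemp -d)
echo hello > "$SRC/file.txt"

# Create with absolute paths (-P); extract under $DST without -P so
# GNU tar strips the leading / instead of writing back to $SRC.
tar zcPf - "$SRC" | tar zxf - -C "$DST"

find "$DST" -name file.txt
```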
title: “Persistent Storage”
date: 2016-03-27T15:41:45
slug: persistent-storage
Share Data Between Containers with a Volume Container
Create a volume container:
docker create -v /dbdata --name dbstore ubuntu /bin/true
Start a container with the volume container attached
docker run -d --volumes-from dbstore --name ubuntu ubuntu
Start a second container with the same volume container (shared)
docker run -d --volumes-from dbstore --name ubuntu2 ubuntu
You can use multiple --volumes-from parameters
Make a backup from the volume
docker run --rm --volumes-from ubuntu -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata
Restore the backup into a container
docker run -v /dbdata --name dbstore2 ubuntu /bin/bash
docker run --rm --volumes-from dbstore2 -v $(pwd):/backup ubuntu bash -c "cd /dbdata && tar xvf /backup/backup.tar --strip 1"
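The restore relies on tar stripping the leading dbdata/ path component. A local sketch of that behaviour with throwaway directories; --strip-components=1 is the full GNU tar spelling of --strip 1:

```shell
# Demonstrate what --strip 1 does during the restore: the archive
# stores dbdata/data.txt, and stripping one component extracts it
# as plain data.txt.
SRC=$(mktemp -d)
OUT=$(mktemp -d)
TARBALL=$(mktemp)
mkdir "$SRC/dbdata"
echo "db contents" > "$SRC/dbdata/data.txt"

tar -C "$SRC" -cf "$TARBALL" dbdata                   # contains dbdata/data.txt
tar -C "$OUT" -xf "$TARBALL" --strip-components=1     # extracts data.txt

ls "$OUT"   # data.txt, without the dbdata/ prefix
```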