title: “oc command under alpine linux”
date: 2021-04-11T20:49:42
slug: oc-command-under-alpine-linux
apk add make libc6-compat
title: “Update Column with value from another Table”
date: 2021-04-07T00:27:34
slug: update-column-with-value-from-another-table
UPDATE transponders INNER JOIN satellites ON transponders.name = satellites.name SET transponders.sat_id = satellites.id
title: “Ubuntu enable disable Timesync”
date: 2021-03-31T22:04:00
slug: ubuntu-enable-disable-timesync
You can set your Ubuntu system to synchronize its clock over NTP (via systemd-timesyncd):
timedatectl set-ntp yes
If you need to turn off NTP synchronizing to be able to adjust the time and date manually, use:
timedatectl set-ntp no
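To check afterwards whether synchronization is active, timedatectl status shows the NTP state; on newer releases, timedatectl timesync-status additionally shows the server currently being polled:
timedatectl status
timedatectl timesync-status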
title: “How to calculate Diseqc codes for motorised dish”
date: 2021-03-22T18:32:32
slug: how-to-calculate-diseqc-codes-for-motorised-dish
by Homer » Sun Jan 19, 2014 12:41 pm
Hi all,
I mentioned I did this in a different thread, and Oberon agreed that I should post it for other forum members.
If your question is “How do I get the Diseqc code to put into TVSource to send my motorised dish to a specific satellite angle?” then read on.
Basics: Unless you live on the Greenwich 0 degree longitude line, a satellite like Hotbird 13E will not actually be 13E of south from you. To make this clearer: if your home longitude is 13 degrees east, then Hotbird will appear to be due south of you. Unless you live at 0 degrees longitude, you cannot simply send a command to the dish to go 13E of south!
There are a few steps to get the right angle to send to the motor:
1. Calculate the corrected satellite angle from your longitude and latitude.
2. Convert it to hex.
3. Write the full diseqc code to put into TVSource.
Full explanation:
With reference to the official Diseqc guide at Eutelsat.com “positioner.appli_notice.pdf”:
The final two data bytes “00 00” (four hex digits) are for the angle to drive the dish to:
The first hex digit is “0” for WEST, or “E” for EAST.
Using this, you would now have (from my lat/long) the following for Astra 28.2E:
Sat is EAST, so the first hex digit is “E”.
Original decimal uncorrected sat angle = 28.2E
Decimal Sat angle from GAAPS calculator (corrected for my lat/long) = 31.9E
Convert the integer part to HEX: 31 (dec) becomes “1F” (hex) for the second and third hex digits.
Convert the fractional part of the angle for the fourth hex digit. We have 0.9 to convert (31.9 - 31). This table again came from the Diseqc documentation:
| User angle | Hex value |
| ---------- | --------- |
| 0.0 | 0 |
| 0.1 | 2 |
| 0.2 | 3 |
| 0.3 | 5 |
| 0.4 | 6 |
| 0.5 | 8 |
| 0.6 | A |
| 0.7 | B |
| 0.8 | D |
| 0.9 | E |
So we have an “E” hex value for the 0.9 decimal degrees.
The full corrected command for Astra 28.2E from my lat/long is now E0 31 6E E1 FE. This is the code that you put into TVSource.
Please note that your southernmost satellite may change from E/W or W/E! I live at longitude 1.178W, and the corrected decimal angle for satellite 0.8W Thor is 0.4E! This is very important, as you may be using your southernmost satellite to align your dish!
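For reference, a minimal bash sketch of steps 2 and 3 only (the corrected angle itself still has to come from the GAAPS calculator; the values below are the Astra 28.2E example from above):
```bash
#!/bin/bash
# Convert a corrected satellite angle into the two DiSEqC goto-angular-position data bytes.
angle="31.9"      # corrected angle in degrees (one decimal place assumed)
direction="E"     # hex digit for EAST; use "0" for WEST as described above

int_part=${angle%.*}                      # 31
frac_digit=${angle#*.}                    # 9
frac_table=(0 2 3 5 6 8 A B D E)          # fractional-degree table from the Diseqc docs

int_hex=$(printf '%02X' "$int_part")      # "1F"
data1="${direction}${int_hex:0:1}"        # "E1"
data2="${int_hex:1:1}${frac_table[$frac_digit]}"   # "FE"

echo "E0 31 6E ${data1} ${data2}"         # prints: E0 31 6E E1 FE
```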
title: “OSCAM”
date: 2021-03-19T19:12:08
slug: oscam
Enough with the lecturing for now; on to the configs to set up the CCcam server. I assume here that you have already set up OSCam as a card server and were able to read your cards successfully.
So, once OSCam is configured and your cards are mounted, first start the CCcam server in oscam.conf:
oscam.conf
[cccam]
port = 12000
reshare = 1
forward_origin_card = 0
ignorereshare = 0
version = 2.1.4
minimizecards = 0
keepconnected = 1
stealth = 0
reshare_mode = 0
updateinterval = 240
“port” should be self-explanatory.
“reshare = 1” means your peers may share the cards on once more. If it were set to “2”, they could reshare twice. The default is “0”: no reshare possible.
“forward_origin_card = 0”: with this option you can change OSCam's behaviour so that it works exactly like an original CCcam server. However, it also disables the load balancer. Beginners who feel overwhelmed by the sheer number of options should set it at first (forward_origin_card = 0). Anyone with a well-tuned load balancer should disable it.
“ignorereshare = 0” is off with “0” and on with “1”. If enabled, the reshare value in oscam.conf is ignored and OSCam uses the reshare value specified in oscam.user.
“version = 2.1.4” sets the CCcam version that OSCam reports itself as.
“minimizecards = 0” can be used to shrink your card list. When enabled, the cards that are transmitted get merged: if you have, say, 20x CAID 1702 in your share, they become a single card. If you have several different Viaccess cards, one card with CAID 0500 is transmitted, containing all available provider IDs.
“keepconnected = 1” keeps the clients permanently connected (CCcam keepalive).
“stealth = 0” controls whether OSCam servers recognise each other and switch to the extended OSCam-CCcam protocol. If it is disabled (stealth = 0), OSCam servers recognise each other and use their own CCcam protocol. If it is enabled, they no longer recognise each other as OSCam peers and use the standard CCcam protocol.
“reshare_mode = 0” determines which cards are sent to the clients. “reshare_mode = 0” is the default: all cards are sent to the clients as before. With “reshare_mode = 1” it behaves like the default, except that the services of a reader are published as well. If you want full CCcam 2.2.0 support with good SIDs and bad SIDs, you need reshare_mode = 1 together with properly maintained reader services (positive and negative services). With “reshare_mode = 2” things change a bit: instead of using the card data received from the CCcam reader, only the reader services are published. With “reshare_mode = 3” only the user services are published, so you can restrict the cards to exactly the services you actually want to publish. This option can confuse an OSCam newcomer, so if you are setting up an OSCam-CCcam server for the first time, use “reshare_mode = 0” and look into the differences between these settings later.
“updateinterval = seconds” sets the interval at which an update of the card list is sent to the clients. Default = 240.
When you create a user, you do it exactly the same way as when you set up the card server, where you also had to create a user for CCcam (N-line). Only a few additional options are added in oscam.user.
oscam.user
[account]
user = user1
pwd = password
group = 1
hostname = dyndns.com of your share partner
cccmaxhops = 2
cccreshare = 1
The counterpart on a CCcam client would be:
C: yourdyndns.com 12000 user1 password
“user” and “pwd” need no explanation here; that is part of the basic knowledge you should already have.
“group”: ideally, give every reader its own group (local readers). I have put all external CCcam proxy servers together in one group. So local readers each get their own group, and CCcam proxies all share the same group. In oscam.user, “group = XX” then lists the groups the user has access to; multiple groups are separated by commas. If, for example, you have a 1702 Sky card and a 0d05 ORF card and a user should only be allowed to access ORF but not Sky, you enter only the ORF group for that user (see the sketch below).
“hostname = dyndns.com”: here you can enter the DNS name or IP of the user who uses this account. It adds a bit of security for the server/account.
“cccmaxhops = 2” sets how many hops the user may receive, i.e. whether he may also get hop-2 cards from the server, provided those can still be passed on at all (depending on the reshare setting of the card owner).
“cccreshare = 1” again sets how often the user may pass the cards on. This option is only honoured when “ignorereshare” is set to “1” in oscam.conf; otherwise the global reshare rules from oscam.conf apply.
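A small sketch of the group idea, using the Sky/ORF example above (the labels, group numbers and the extra account are made up for illustration):
oscam.server
[reader]
label = sky-card
group = 1
[reader]
label = orf-card
group = 2
oscam.user
[account]
user = user2
pwd = password
group = 2
A user who should see both cards would instead get "group = 1,2".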
Finally, the mounting of a CCcam proxy (an external CCcam server, C-line). This is done in oscam.server and works exactly like adding any other reader:
oscam.server
[reader]
label = server1
enable = 1
protocol = cccam
device = dyndns.com,12000
account = user
password = passwd
reconnecttimeout = 30
group = 1
cccversion = 2.1.4
cccmaxhops = 2
cccmindown = 1
cccwantemu = 0
ccckeepalive = 1
The counterpart on a CCcam server would be:
F: user passwd
“label = server1”: label is a name of your choice. The most sensible choice is the user name of the server's owner.
“enable = 1”: “1” enables the reader, “0” disables it.
“protocol = cccam”: it should be obvious that this has to be cccam.
“device = dyndns.com,12000”: the IP/DNS of the server and, separated by a comma, its port.
“account = user”: login name on the external server.
“password = passwd”: password for logging in to the external server.
“reconnecttimeout = 30”: when OSCam should reconnect if the server stops responding.
“group = 1”: which group the reader goes into. As already described for oscam.user, I keep all external CCcam proxies (servers) in one and the same group. In this example that group is “1”.
“cccversion = 2.1.4”: the version OSCam reports itself as when connecting to the CCcam proxy.
“cccmaxhops = 2”: how many hops you accept from the CCcam proxy server.
“cccwantemu = 0”: prevents or allows access to the emu of the CCcam server.
“cccmindown = 1”: filters out all cards that have less than 1 hop of reshare left. Set to 2, all cards with less than 2 hops of reshare are filtered out. It always filters out every card with less reshare than the value set here.
“ccckeepalive = 1”: when set to “1”, the connection to the server stays up permanently. Set to “0”, the server disconnects whenever none of its cards are needed. As soon as a request for one of its cards comes in, the connection is re-established, and it is dropped again when idle.
title: “Disec LNB”
date: 2021-03-13T19:33:04
slug: 1630-2
The LOF you must set up differs between Ku band and C band. LOF stands for Local Oscillator Frequency and is given in MHz or GHz.
Satellite signals arrive at Earth, and thus at the LNB of your dish, at a very high frequency. In the LNB the signals are amplified, filtered and converted down to a lower frequency, because if you sent them down a normal coaxial cable at the original high frequency, they would be unusable after a few metres (2 to 3 m).
To avoid this problem, a local oscillator frequency (LOF) is generated in the LNB and mixed with the received signal; the resulting lower frequency is what is sent over the coaxial cable.
Depending on the LNB and received frequency range, the frequency at the output of the LNB can be calculated differently:
In Ku band:
Working frequency for the receiver = carrier frequency of the satellite transponder – LOF
In the C band:
Working frequency for the receiver = LOF – carrier frequency of the satellite transponder
The basic frequencies for Ku Band LNBs are:
LOF Low (For Lowband) = 9750 MHz
LOF High (For Highband) = 10600 MHz
LOF Switch: 11700 MHz
The basic frequencies for C Band LNBs are:
LOF Low (For Lowband) = 5150 MHz
LOF High (For Highband) = 5150 MHz
LOF Switch: 5150 MHz
(A C-Band LNB has no High and Low Band and also no switch between those bands, so enter the Low Band frequency everywhere.)
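As a quick check of the formulas above, a short bash sketch (the two carrier frequencies are arbitrary example transponders, not taken from any list here):
```bash
# Ku band, high band of a universal LNB (LOF 10600 MHz)
carrier_ku=11836                              # transponder carrier in MHz
echo "Ku IF: $(( carrier_ku - 10600 )) MHz"   # -> 1236 MHz
# C band (LOF 5150 MHz)
carrier_c=3840                                # transponder carrier in MHz
echo "C IF:  $(( 5150 - carrier_c )) MHz"     # -> 1310 MHz
```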
Most programs and software have settings for C-Band and Ku-Band. If you must enter the LOF directly, you can use the values above. For Max Series cards use the V/L and H/L inputs together with the settings above in your software. If your C-Band satellite uses circular (L-R) polarization instead of linear (V-H), then use:
R = V/L
L = H/L
UNIVERSAL LNB
static char *univ_desc[] = {
"Europe",
"10800 to 11800 MHz and 11600 to 12700 Mhz",
"Dual LO, loband 9750, hiband 10600 MHz",
(char *)NULL
};
Technical specifications
Committed Switch
| Parameter | Value |
| --- | --- |
| Low band input frequency range | 10.7 GHz ~ 11.7 GHz |
| Low band output frequency range | 950 MHz ~ 1950 MHz |
| Low band LO frequency | 9.75 GHz |
| High band input frequency range | 11.7 GHz ~ 12.75 GHz |
| High band output frequency range | 1100 MHz ~ 2150 MHz |
| High band LO frequency | 10.6 GHz |
| Noise figure | 0.2 dB typ. (0.7 dB max.) |
| LO temperature drift | ± 3.0 MHz max. |
| LO initial accuracy | ± 1.0 MHz max. |
| LO phase noise @ 10 kHz | -90 dBc/Hz |
| Conversion gain | 60 dB min. |
| Gain ripple (over 26 MHz bandwidth) | ± 0.75 dB |
| Gain variation (over full band) | ± 4 dB max. |
| Image rejection | 40 dB min. |
| 1 dB compression point (@ output) | 0.0 dBm min. |
| Cross talk | 22 dB min. |
| Control signal Ca (V) | 11.0 V ~ 14.0 V |
| Control signal Cb (H) | 16.0 V ~ 20.0 V |
| Control signal Cc (band switching) | 22 kHz ± 4 kHz |
| Output VSWR | 2.0 : 1 |
| In band spurious level | -60 dBm max. |
| Current consumption | 200 mA max. (11 VDC ~ 20 VDC) |
| Operating temperature | -30 °C ~ +60 °C |
| Output impedance | 75 Ω (F-type) |
| Output connector type | F-Type (female) |
| Weight | 344 g |
title: “Selenium check iframe”
date: 2021-03-12T10:25:11
slug: selenium-check-iframe
from selenium import webdriver
driver = webdriver.Chrome('./chromedriver')
driver.get('http://elegalix.allahabadhighcourt.in')
driver.set_page_load_timeout(20)
driver.maximize_window()
# switch into the iframe before interacting with elements inside it
driver.switch_to.frame(driver.find_element_by_name('sidebarmenu'))
driver.find_element_by_xpath("//input[@value='Advanced']").click()
# switch back to the main document afterwards
driver.switch_to.default_content()
title: “Changing Root Password (Mariadb)”
date: 2021-03-10T20:42:44
slug: changing-root-password-mariadb
UPDATE mysql.user SET authentication_string = PASSWORD('new_password') WHERE user = 'root';
UPDATE mysql.user SET plugin = 'mysql_native_password' WHERE user = 'root';
FLUSH PRIVILEGES;
title: “Openshift CheatSheet”
date: 2021-03-10T10:00:18
slug: openshift-cheatsheet
* To create ssh secret:
```
oc create secret generic sshsecret \
  --from-file=ssh-privatekey=$HOME/.ssh/id_rsa
```
* To create SSH-based authentication secret with .gitconfig file:
```
oc create secret generic sshsecret \
  --from-file=ssh-privatekey=$HOME/.ssh/id_rsa \
  --from-file=.gitconfig=
```
* To create secret that combines .gitconfig file and CA certificate:
```
oc create secret generic sshsecret \
  --from-file=ca.crt= \
  --from-file=.gitconfig=
```
* To create basic authentication secret with CA certificate file:
```
oc create secret generic \
  --from-literal=username= \
  --from-literal=password= \
  --from-file=ca.crt=
```
* To create basic authentication secret with .gitconfig file and CA certificate file:
```
oc create secret generic \
  --from-literal=username= \
  --from-literal=password= \
  --from-file=.gitconfig= \
  --from-file=ca.crt=
```
```
$ oc describe AppliedClusterResourceQuota
```
```
RUN set -x && \
    yum clean all && \
    REPOLIST=rhel-7-server-rpms,rhel-7-server-optional-rpms,rhel-7-server-thirdparty-oracle-java-rpms \
    INSTALL_PKGS="tar java-1.8.0-oracle-devel" && \
    yum -y update-minimal --disablerepo "*" --enablerepo ${REPOLIST} --setopt=tsflags=nodocs \
        --security --sec-severity=Important --sec-severity=Critical && \
    yum -y install --disablerepo "*" --enablerepo ${REPOLIST} --setopt=tsflags=nodocs ${INSTALL_PKGS} && \
    yum clean all
```
```
oc extract -n default secrets/registry-certificates --keys=registry.crt
REGISTRY=$(oc get routes -n default docker-registry -o jsonpath='{.spec.host}')
mkdir -p /etc/containers/certs.d/${REGISTRY}
mv registry.crt /etc/containers/certs.d/${REGISTRY}/
oc adm policy add-cluster-role-to-user system:image-builder system:serviceaccount:openshift-pipeline:pipeline
docker login ${REGISTRY} -u unused -p ${SA_PASSWORD}
docker push ${REGISTRY}/openshift-pipeline/helloworld
```
```
oc new-project demo-project
```
```
oc create service externalname myservice \
  --external-name myhost.example.com
```
A typical service creates endpoint resources dynamically, based on the selector attribute of the service. The oc status and oc get all commands do not display these resources. You can use the oc get endpoints command to display them.
If you use the oc create service externalname --external-name command to create a service, the command also creates an endpoint resource that points to the host name or IP address given as argument.
If you do not use the --external-name option, it does not create an endpoint resource. In this case, you need to use the oc create -f command and a resource definition file to explicitly create the endpoint resources.
If you create an endpoint from a file, you can define multiple IP addresses for the same external service, and rely on the OpenShift service load-balancing features. In this scenario, OpenShift does not add or remove addresses to account for the availability of each instance. An external application needs to update the list of IP addresses in the endpoint resource.
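A minimal sketch of such an endpoints definition file, fed to oc create (the service name, IPs and port are made-up examples and must match your own service):
```
cat <<EOF | oc create -f -
apiVersion: v1
kind: Endpoints
metadata:
  name: myservice          # must match the name of the service without a selector
subsets:
- addresses:
  - ip: 192.0.2.10
  - ip: 192.0.2.11
  ports:
  - port: 3306
EOF
```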
* this example removes a config attribute using a JSON patch
```
oc patch dc/mysql --type=json \
  -p='[{"op":"remove", "path": "/spec/strategy/rollingParams"}]'
```
* this example changes an existing attribute value using JSON format
```
oc patch dc/mysql --patch \
  '{"spec":{"strategy":{"type":"Recreate"}}}'
```
The oc export command can create a resource definition file by using the –as-template option. Without the –as-template option, the oc export command only generates a list of resources. With the –as-template option, the oc export command wraps the list inside a template resource definition. After you export a set of resources to a template file, you can add annotations and parameters as desired.
The order in which you list the resources in the oc export command is important. You need to export dependent resources first, and then the resources that depend on them. For example, you need to export image streams before the build configurations and deployment configurations that reference those image streams.
```
oc export is,bc,dc,svc,route --as-template > mytemplate.yml
```
Depending on your needs, add more resource types to the previous command. For example, add secret before bc and dc. It is safe to add pvc to the end of the list of resource types because a deployment waits for its persistent volume claims to bind.
The oc export command does not generate resource definitions that are ready to use in a template. These resource definitions contain runtime information that is not needed in a template, and some of it could prevent the template from working at all. Examples of runtime information are attributes such as status, creationTimeStamp, image, and tags, besides most annotations that start with the openshift.io/generated-by prefix.
Some resource types, such as secrets, require special handling. It is not possible to initialize key values inside the data attribute using template parameters. The data attribute of a secret resource needs to be replaced by the stringData attribute, and all key values need to be un-encoded (plain text instead of base64).
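For example, a secret entry inside a template could look roughly like this after that change (the name, keys and parameters below are illustrative only):
```
- kind: Secret
  apiVersion: v1
  metadata:
    name: mysec
  stringData:
    app_user: ${APP_USER}
    app_password: ${APP_PASSWORD}
```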
* https://access.redhat.com/articles/3136551
```
oc process openshift//datagrid72-basic | oc create -f -
oc new-build --name=customdg -i openshift/jboss-datagrid72-openshift:1.0 --binary=true --to='customdg:1.0'
oc set triggers dc/datagrid-app --from-image=openshift/jboss-datagrid72-openshift:1.0 --remove
oc set triggers dc/datagrid-app --from-image=customdg:1.0 -c datagrid-app
```
```
oc process -f mytemplate.yaml --parameters
```
```
docker run registry.access.redhat.com/jboss-datagrid-7/datagrid72-openshift:1.0 /bin/sh -c 'cat /opt/datagrid/standalone/configuration/clustered-openshift.xml' > clustered-openshift.xml
```
```
oc patch storageclass glusterfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
```
```
oc annotate route --overwrite haproxy.router.openshift.io/timeout=10s
```
```
oc patch dc|rc -p "spec:
  template:
    spec:
      nodeSelector:
        region: infra"
```
```
oc new-build --binary=true --name=ola2 --image-stream=redhat-openjdk18-openshift --to='mycustom-jdk8:1.0'
oc start-build ola2 --from-file=./target/ola.jar --follow
oc new-app
```
```
oc rollout pause dc
oc rollout resume dc
```
```
http://$(oc get route nexus3 --template='{{ .spec.host }}')
```
Maven uses settings.xml in $HOME/.m2 for configuration outside of pom.xml:
```xml
<?xml version="1.0"?>
<settings>
  <mirrors>
    <mirror>
      <id>Nexus</id>
      <name>Nexus Public Mirror</name>
      <url>http://nexus-opentlc-shared.cloudapps.na.openshift.opentlc.com/content/groups/public/</url>
      <mirrorOf>*</mirrorOf>
    </mirror>
  </mirrors>
  <servers>
    <server>
      <id>nexus</id>
      <username>admin</username>
      <password>admin123</password>
    </server>
  </servers>
</settings>
```
Maven can automatically store artifacts using the -DaltDeploymentRepository parameter of the deploy task:
```
mvn deploy -DskipTests=true \
  -DaltDeploymentRepository=nexus::default::http://nexus3.nexus.svc.cluster.local:8081/repository/releases
```
```
oc project
oc get is
oc import-image --from=docker.io// --all --confirm
oc get istag
OC_EDITOR="vim" oc edit dc/
spec:
  containers:
  - image: docker.io/openshiftdemos/gogs@sha256:
    imagePullPolicy: Always
```
```
oc secrets new-basicauth gogs-basicauth --username= --password=
oc set build-secret --source bc/tasks gogs-basicauth
```
```
oc set volume dc/myAppDC --add --overwrite --name....
```
```
oc create configmap myconfigfile --from-file=./configfile.txt
oc set volumes dc/printenv --add --overwrite=true --name=config-volume --mount-path=/data -t configmap --configmap-name=myconfigfile
```
```
oc create secret generic mysec --from-literal=app_user=superuser --from-literal=app_password=topsecret
oc env dc/printenv --from=secret/mysec
oc set volume dc/printenv --add --name=db-config-volume --mount-path=/dbconfig --secret-name=printenv-db-secret
```
```
oc set probe dc cotd1 --liveness -- echo ok
oc set probe dc/cotd1 --readiness --get-url=http://:8080/index.php --initial-delay-seconds=2
```
```
oc run pi --image=perl --replicas=1 --restart=OnFailure \
  --command -- perl -Mbignum=bpi -wle 'print bpi(2000)'
```
```
oc run pi --image=perl --schedule='*/1 * * * *' \
  --restart=OnFailure --labels parent="cronjobpi" \
  --command -- perl -Mbignum=bpi -wle 'print bpi(2000)'
```
```
oc expose service cotd1 --name='abcotd' -l name='cotd'
oc set route-backends abcotd --adjust cotd2=+20%
oc set route-backends abcotd cotd1=50 cotd2=50
```
```
docker pull registry.access.redhat.com/jboss-eap-6/eap64-openshift
```
```
oc create --dry-run --validate -f openshift/template/tomcat6-docker-buildconfig.yaml
```
* to prune old objects
* https://docs.openshift.com/container-platform/3.3/admin_guide/pruning_resources.html
* to enable cluster GC
* https://docs.openshift.com/container-platform/3.3/admin_guide/garbage_collection.html
```
oc whoami -t
```
```
curl -k -H "Authorization: Bearer " https://:8443/api/v1/namespaces//pods/https::8778/proxy/jolokia/
curl -k -H "Authorization: Bearer " https://:8443/api/v1/namespaces//pods/https::8778/proxy/jolokia//read/java.lang:type=Memory/HeapMemoryUsage | jq .
```
```
oc login --username=tuelho --insecure-skip-tls-verify --server=https://master00-${guid}.oslab.opentlc.com:8443
oc login -u system:admin -n openshift
```
```
oc describe clusterPolicy default
```
```
oadm policy add-role-to-user
oadm policy add-cluster-role-to-user
```
```
oadm policy add-scc-to-user anyuid -z default
```
for more details consult: https://docs.openshift.com/enterprise/3.1/admin_guide/manage_authorization_policy.html
```
ip=$(oc describe pod hello-openshift | grep IP: | awk '{print $2}')
curl http://${ip}:8080
```
```
oc exec -ti $(oc get pods | awk '/registry/ { print $1; }') /bin/bash
oc rsh
```
```
oc edit /
oc edit dc/myDeploymentConfig
```
Adding a PersistentVolumeClaim to a DeploymentConfig:
```
oc volume dc/docker-registry \
  --add --overwrite \
  -t persistentVolumeClaim \
  --claim-name=registry-claim \
  --name=registry-storage
```
```
oc new-app --docker-image=openshift/hello-openshift:v1.0.6 -l "todelete=yes"
```
Ticketmonster demo (eap64-basic-s2i):
```
oc new-app javaee6-demo
oc new-app --template=eap64-basic-s2i -p=APPLICATION_NAME=ticketmonster,SOURCE_REPOSITORY_URL=https://github.com/jboss-developer/ticket-monster,SOURCE_REPOSITORY_REF=2.7.0.Final,CONTEXT_DIR=demo
```
```
oc new-app https://github.com/openshift/sinatra-example -l "todelete=yes"
oc new-app openshift/php~https://github.com/openshift/sti-php -l "todelete=yes"
```
```
oc get builds
oc logs -f builds/sti-php-1
```
```
$ oc new-app
```
```
$ oc new-app https://github.com/openshift/sti-ruby.git \
  --context-dir=2.0/test/puma-test-app
```
```
$ oc new-app https://github.com/openshift/ruby-hello-world.git#beta4
```
New App From Source Code
Build Strategy Detection
If new-app finds a Dockerfile in the repository, it uses the docker build strategy. Otherwise, new-app uses the source strategy.
To specify the strategy, set the --strategy flag to source or docker.
Example: To force new-app to use the docker strategy for a local source repository:
```
$ oc new-app /home/user/code/myapp --strategy=docker
```
oc new-app command based on S2I support:
```
$ oc new-app https://github.com/openshift/simple-openshift-sinatra-sti.git -o json | \
  tee ~/simple-sinatra.json
```
```
$ oc new-app mysql
```
```
$ oc new-app myregistry:5000/example/myimage
```
If the registry that the image comes from is not secured with SSL, cluster administrators must ensure that the Docker daemon on the OpenShift Enterprise nodes is run with the --insecure-registry flag pointing to that registry. You must also use the --insecure-registry=true flag to tell new-app that the image comes from an insecure registry.
```
$ oc create -f examples/sample-app/application-template-stibuild.json
$ oc new-app ruby-helloworld-sample
```
```
$ oc new-app openshift/postgresql-92-centos7 \
  -e POSTGRESQL_USER=user \
  -e POSTGRESQL_DATABASE=db \
  -e POSTGRESQL_PASSWORD=password
```
```
$ oc new-app https://github.com/openshift/ruby-hello-world -o json > myapp.json
$ vi myapp.json
$ oc create -f myapp.json
```
* To deploy two images in a single pod:
```
$ oc new-app nginx+mysql
```
```
$ oc new-app \
  ruby~https://github.com/openshift/ruby-hello-world \
  mysql \
  --group=ruby+mysql
```
```
$ oc export all --as-template=
```
You can also substitute a particular resource type or multiple resources instead of all. Run $ oc export -h for more examples.
* to create a new project using oadm and defining an admin user
```
$ oadm new-project instant-app --display-name="instant app example project" \
  --description='A demonstration of an instant-app/template' \
  --node-selector='region=primary' --admin=andrew
```
Creating an app with the oc CLI based on a template:
```
$ oc new-app --template=mysql-ephemeral --param=MYSQL_USER=mysqluser,MYSQL_PASSWORD=redhat,MYSQL_DATABASE=mydb,DATABASE_SERVICE_NAME=database
```
Listing env vars defined in a DeploymentConfig object:
```
$ oc env dc database --list
MYSQL_USER=***
MYSQL_PASSWORD=***
MYSQL_DATABASE=***
```
The first command adds the STORAGE variable with value /data. The second updates it, with value /opt.
```
$ oc env dc/registry STORAGE=/data
$ oc env dc/registry --overwrite STORAGE=/opt
```
To unset environment variables in the pod templates:
```
$ oc env KEY_1- ... KEY_N- []
```
The trailing hyphen (-, U+2D) is required.
This example removes environment variables ENV1 and ENV2 from deployment config d1:
```
$ oc env dc/d1 ENV1- ENV2-
```
This removes environment variable ENV from all replication controllers:
```
$ oc env rc --all ENV-
```
This removes environment variable ENV from container c1 for replication controller r1:
```
$ oc env rc r1 --containers='c1' ENV-
```
To list environment variables in pods or pod templates:
```
$ oc env --list []
```
This example lists all environment variables for pod p1:
```
$ oc env pod/p1 --list
```
```
oc patch dc/ \
  -p '{"spec":{"template":{"spec":{"nodeSelector":{"nodeLabel":"logging-es-node-1"}}}}}'
```
```
oc volume dc/ \
  --add --overwrite --name= \
  --type=persistentVolumeClaim --claim-name=
```
```
oadm manage node --schedulable=false
```
```
oadm registry --service-account=registry \
  --config=/etc/origin/master/admin.kubeconfig \
  --credentials=/etc/origin/master/openshift-registry.kubeconfig \
  --images='registry.access.redhat.com/openshift3/ose-${component}:${version}' \
  --mount-host= --selector=meuselector
```
```
oc export all --as-template=
```
```
cat ./path/to/your/Dockerfile | oc new-build --name=build-from-docker --binary --strategy=docker -l app=app-from-custom-docker-build -D -
oc start-build build-from-docker --from-dir=. --follow
oc new-app app-from-custom-docker-build -l app=app-from-custom-docker-build
oc expose service app-from-custom-docker-build
```
```
oc rsync /home/user/source devpod1234:/src
oc rsync devpod1234:/src /home/user/source
```
* internal DNS name of ose/kubernetes services
* follows the pattern `<service>.<namespace>.svc.cluster.local` (example lookup below)

Object Type | Example
----------- | ----------------------------------------------
Default | `<pod_namespace>.cluster.local`
Services | `<service>.<pod_namespace>.svc.cluster.local`
Endpoints | `<name>.<namespace>.endpoints.cluster.local`
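For instance, from inside any pod that has nslookup (or dig) available, a service can be resolved through the cluster DNS (the project and service names are just an example):
```
nslookup myservice.myproject.svc.cluster.local
```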
“The only caveat to this is that if we are using the multi-tenant OVS networking plugin, our cluster administrators will have to make our ci project visible to all other projects.” Ref: https://blog.openshift.com/improving-build-time-java-builds-openshift/
```
$ oadm pod-network make-projects-global ci
```
To adjust the openshift-master log level, edit the following line of /etc/sysconfig/atomic-openshift-master on the master VM:
```
OPTIONS=--loglevel=4
```
To make the changes take effect, restart the atomic-openshift-master service:
```
$ sudo -i systemctl restart atomic-openshift-master.service
```
Make sure that your default service account has sufficient privileges to communicate with the Kubernetes REST API.
Add the view role to the service account for the project:
```
$ oc policy add-role-to-user view system:serviceaccount:$(oc project -q):default
```
Examine the first entry in the log file:
```
Service account has sufficient permissions to view pods in kubernetes (HTTP 200). Clustering will be available.
```
```
oc adm ipfailover ipf-ha-router \
  --replicas=2 --watch-port=80 \
  --selector="region=infra" \
  --virtual-ips="x.0.0.x" \
  --iptables-chain="INPUT" \
  --service-account=ipfailover --create
```
* Common strategy for building template definitions:
* Use oc new-app and oc expose to manually create the resources the application needs
* Test to make sure resources work as expected
* Use oc export with -o json option to export existing resource definitions
* Merge resource definitions into template definition file
* Add parameters
* Test resource definition in another project
JSON syntax errors are not easy to identify, and OpenShift is sensitive to them, refusing JSON files that most browsers would accept as valid. Use jsonlint -s from the python-demjson package, available from EPEL, to identify syntax issues in a JSON resource definition file.
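For example, run it against the file bootstrapped in the next step:
```
jsonlint -s hello.json
```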
* Use oc new-app with -o json option to bootstrap your new template definition file
```
oc new-app -o json openshift/hello-openshift > hello.json
```
* Converting the Resource Definition to a Template
* Change kind attribute from List to Template
* Make two changes to metadata object:
* Add name attribute and value so template has name users can refer to
* Add annotations containing a description attribute for the template, so users know what the template is supposed to do (see the sketch after this list)
* Rename items array attribute as objects
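After those edits, the top of hello.json looks roughly like this (the name and description below are only examples; the exported resources go into the objects array, and parameters can be added later):
```json
{
  "kind": "Template",
  "apiVersion": "v1",
  "metadata": {
    "name": "hello-openshift",
    "annotations": {
      "description": "Deploys the hello-openshift example application"
    }
  },
  "objects": [],
  "parameters": []
}
```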
* to list all parameters from mysql-persistent template:
```
$ oc process --parameters=true -n openshift mysql-persistent
```
* Customizing resources from a preexisting Template
Example:
```
$ oc export -o json \
    -n openshift mysql-ephemeral > mysql-ephemeral.json
... change the mysql-ephemeral.json file ...
$ oc process -f mysql-ephemeral.json \
    -v MYSQL_DATABASE=testdb,MYSQL_USER=testuser,MYSQL_PASSWORD= > testdb.json
$ oc create -f testdb.json
```
oc process uses the -v option to provide parameter values, while the oc new-app command uses the -p option.
```
ssh master00-$guid
mkdir /root/pvs
```
```
export volsize="5Gi"
for volume in pv{1..25}; do
  cat << EOF > /root/pvs/${volume}.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ${volume}
spec:
  capacity:
    storage: ${volsize}
  accessModes:
  - ReadWriteOnce
  nfs:
    path: /var/export/pvs/${volume}
    server: 192.168.0.254
  persistentVolumeReclaimPolicy: Recycle
EOF
  echo "Created def file for ${volume}"
done
```
```
export volsize="10Gi"
for volume in pv{26..50}; do
  cat << EOF > /root/pvs/${volume}.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ${volume}
spec:
  capacity:
    storage: ${volsize}
  accessModes:
  - ReadWriteOnce
  nfs:
    path: /var/export/pvs/${volume}
    server: 192.168.0.254
  persistentVolumeReclaimPolicy: Recycle
EOF
  echo "Created def file for ${volume}"
done
```
```
export volsize="1Gi"
for volume in pv{51..100}; do
  cat << EOF > /root/pvs/${volume}.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ${volume}
spec:
  capacity:
    storage: ${volsize}
  accessModes:
  - ReadWriteOnce
  nfs:
    path: /var/export/pvs/${volume}
    server: 192.168.0.254
  persistentVolumeReclaimPolicy: Recycle
EOF
  echo "Created def file for ${volume}"
done
```
```
for pv in $(oc get pv | awk '{print $1}' | grep pv | grep -v NAME); do
  oc patch pv $pv -p "spec:
  accessModes:
  - ReadWriteMany
  - ReadWriteOnce
  - ReadOnlyMany
  persistentVolumeReclaimPolicy: Recycle"
done
```
```
oc patch -n user1 dc/events -p '{ "metadata" : { "annotations" : { "app.openshift.io/connects-to" : "invoice-events,inventory-events" } }, "spec": { "template": { "spec": { "containers": [ { "name": "events", "env": [ { "name": "AMQP_HOST", "valueFrom": { "configMapKeyRef": { "name": "amq-config", "key": "service.host" } } }, { "name": "AMQP_PORT", "valueFrom": { "configMapKeyRef": { "name": "amq-config", "key": "service.port.amqp" } } } ] } ] } } } }'
```
title: “Renew Certificate manually”
date: 2021-03-04T17:35:22
slug: renew-certificate-manually
oc adm ca create-signer-cert --key=/root/metrics-signer.key --cert=/root/ca.crt --serial=/root/ca.serial.txt --name=metrics-signer@$(date +%s)
oc adm ca create-server-cert --signer-cert=/root/ca.crt --signer-key=/root/metrics-signer.key --hostnames=metrics-server,metrics-server.openshift-metrics-server.svc,metrics-server.openshift-metrics-server.svc.cluster.local --cert=/root/tls.crt --key=/root/tls.key --signer-serial=/etc/origin/master/ca.serial.txt
oc create secret generic metrics-server-certs --from-file=tls.crt,tls.key,ca.crt -o json --dry-run | oc replace -f -
Content of api.yaml (caBundle is the output of 'cat ca.crt | base64 -w0'):
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
  labels:
    kubernetes.io/cluster-service: "true"
    metrics-server-infra: support
spec:
  service:
    name: metrics-server
    namespace: openshift-metrics-server
  group: metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: false
  groupPriorityMinimum: 100
  versionPriority: 100
  caBundle: LS0tLS1CRUdJTiBDRVJUSUZJQ0sdFpYUnkKYVdOekxYTnBaMjVsY2tBeE5qRTBPRFV6TWpFNE1CNFhEVEl4TURNd05ERXdNakF4T0ZvWERUSTJNRE13TXpFdwpNakF4T1Zvd0pERWlNQ0FHQTFVRUF3d1piV1YwY21samN5MXphV2R1WlhKQU1UWXhORGcxTXpJeE9EQ0NBU0l3CkRRWUpLb1pJaHZjTkFRRUJCUUFEZ2dFUEFEQ0NBUW9DZ2dFQkFLbkdVSGtmZ3F3VTQvb1VzMSs4QWhqMS8rVWoKWE9Wd3kwK1oyazkwakpKVmIwMmo3TXhrcjEvUWRXREkva0NNcExKM1d5Sm9WbWJUOWNEWGxoVkV4VGRkOVd3RwpQbnIvMVMwVXB4aHJmWmdHNFgydVJnYzFEb0hxOWdzbCt3TTcrRTRPaDlaZ0NxZkhOU0ozQWF0UlFZUThiaGtVCnovMG9HT1FUUFV1aDJ3YVU1ZjZqcjJSN1VsS21Ua3RuVVBBV080SWpBRS9PSDBSZmNOS1V6V01XK1d5bVZXOGUKNExiUGc1TmpIT1YzTGVGQ0hkMTR1R3ArV3dYVm9GY3ZoaUViMHM4ZkRFTTBGVnJtYUxYOWJUTlNkYXV2cGNubgpqVXBJK1MzZ2VHRE5xdEk1U2lGQjkralg4eE5ma1gyUzZyOHZ6b0kwaHpUNTdrb2NndDhZb0tWcVFEc0NBd0VBCkFhTWpNQ0V3RGdZRFZSMFBBUUgvQkFRREFnS2tNQThHQTFVZEV3RUIvd1FGTUFNQkFmOHdEUVlKS29aSWh2Y04KQVFFTEJRQURnZ0VCQUcwYmhvancwNVFwam1oakMwSStBTU9pNVZMRWFlb1RIZks0eU91SVJRUkllQUtYWjUzbQpDOFo2clA1N3NUTmgvOFVoWk9rRzdndlNlTExwaGtaRlVtLzNWM3J1cFFCY2ZaNGZmZ3VGaEQ1bmludjRkR25SCkZ3TzREVm9mN0RROWhPUVlIMVh0bGpzUTBCTnpjNS9jOXBlTWc4eWtmekpCWm50dlZTcTN3TFA2Q29scWkzdGoKNWV5N012RkF6bjBxNHZVQm1MQ25BQmU5SWtFUFdibWt6Sytza2hpb3lCMUdoNGFEeml3ei9ZbkFvZjArVGVFdAp1V1l1K2xBR3U1MmJHSWN1emgyOURFaVg4YWEzQlBkU0p2N1RUczVNMnNsSVd0S3ByL0VHM1hiS1Z3SW8yUVdtCngzSUJUaE1NQUJ5Tll5UWQ3WngvYVEwQmg1a3FMdEV3K2lRPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
oc replace -f api.yaml
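Afterwards, the metrics-server pods need to pick up the new secret (for example by deleting them so they get recreated). A rough way to verify, assuming the standard object names used above, is:
```
oc get apiservice v1beta1.metrics.k8s.io
oc adm top node
```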