Changing the URL/hostname in place requires generating new certs, updating the configMap and a few client entries in Keycloak, and finally restarting the pods. Before doing anything, make a backup.
Assume we’re moving the master DNS & wildcard DNS from:
ae.dev.anaconda.com, *.ae.dev.anaconda.com to aenew.dev.anaconda.com, *.aenew.dev.anaconda.com
Note: The formatting of the configMap is preserved if it is copied from the ops-center (port 32009).
It is a good idea to copy and paste it to a file before the DNS is switched. The ops-center should still be reachable by IP address afterwards, but since the certs will have been replaced, that may not work as smoothly.
Assume the DNS has been switched over to the new name.
The easiest setup is two terminals: one logged into the planet container (gravity enter) and a second outside the planet container, logged in as root.
Backup current configuration
Ensure all user sessions & deployed apps are stopped before proceeding.
Run the following commands to back up the current configMap, ingress, secrets, and Keycloak DB.
gravity enter
export ORIGNAME=ae.dev.anaconda.com
export NEWNAME=aenew.dev.anaconda.com
mkdir -p /opt/anaconda/swapname/$ORIGNAME
mkdir -p /opt/anaconda/swapname/$NEWNAME
# create backups of current DNS
# get all configmaps in default namespace, secrets + ingress configs
cd /opt/anaconda/swapname/$ORIGNAME
kubectl get cm -o yaml --export > configmap.yaml
kubectl get secret anaconda-enterprise-certs -o yaml --export > certs.yaml
kubectl get ingress -o yaml --export > ingress.yaml
# stop postgres DB; then create backup of /opt/anaconda/storage/pgdata
kubectl scale deploy --replicas=0 anaconda-enterprise-postgres
# verify postgres is stopped before backing up:
# kubectl get deploy anaconda-enterprise-postgres
tar czfP pgdata.tgz /opt/anaconda/storage/pgdata
# now restart postgres and all other pods
kubectl scale deploy --replicas=1 anaconda-enterprise-postgres
kubectl delete po --all -n default
This should be all we need to do a backup and revert back to this name if needed.
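If a revert is ever needed, the pgdata tarball extracts back into place with the same -P flag used above (with postgres scaled to 0 first). A minimal sketch of that round trip, using a throwaway temp directory as a stand-in for /opt/anaconda/storage/pgdata:

```shell
# Stand-in demonstration of the pgdata backup/restore round trip;
# the real restore targets /opt/anaconda/storage/pgdata.
DEMO=$(mktemp -d)
mkdir -p "$DEMO/pgdata"
echo "9.6" > "$DEMO/pgdata/PG_VERSION"
# -P preserves the leading absolute path inside the archive,
# so extraction lands back in the original location
tar czfP "$DEMO/pgdata.tgz" "$DEMO/pgdata"
rm -rf "$DEMO/pgdata"
tar xzfP "$DEMO/pgdata.tgz"
cat "$DEMO/pgdata/PG_VERSION"
```

The config backups revert the same way: kubectl replace -f against the saved configmap.yaml, certs.yaml, and ingress.yaml.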
Note: At this point, the storage, repository, deploy, and other pods will be in a CrashLoopBackOff / Error state; that is expected since the DNS has changed.
Create and apply new temporary certs
Make sure Java’s keytool is installed.
export ORIGNAME=ae.dev.anaconda.com
export NEWNAME=aenew.dev.anaconda.com
export INSTALLERDIR=/home/centos/anaconda-enterprise-5.3.0-25.gebdd8f474
cd $INSTALLERDIR/DIY-SSL-CA
# create ./out if it does not already exist
mkdir -p ./out
# clean any old temporary certs; create new ones
./clean.sh
# the create_ca.sh may show a curl error because env is airgapped; ignore it
./create_ca.sh
./create_int.sh
# use anaconda for all the passwords
# no need to add support for external CAs
./create_crt.sh $NEWNAME
# the following needs sudo or must be done as root
cp ./out/$NEWNAME/secret.yml /opt/anaconda/swapname/$NEWNAME/.
sed -i -e 's/certs/anaconda-enterprise-certs/g' /opt/anaconda/swapname/$NEWNAME/secret.yml
# verify the name of the secret is updated and looks as follows:
grep name /opt/anaconda/swapname/$NEWNAME/secret.yml
name: anaconda-enterprise-certs
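Before loading the secret, it can also be worth confirming the new cert actually carries the new CN and wildcard SAN. The sketch below generates a throwaway self-signed cert (a stand-in, since the real files live under ./out/$NEWNAME) just to show the openssl inspection commands:

```shell
# Stand-in cert generated locally so the inspection commands can be shown;
# point -in at the real cert under ./out/$NEWNAME in practice.
NEWNAME=aenew.dev.anaconda.com
TMP=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$TMP/key.pem" -out "$TMP/crt.pem" \
  -subj "/CN=$NEWNAME" \
  -addext "subjectAltName=DNS:$NEWNAME,DNS:*.$NEWNAME" 2>/dev/null
# subject should show the new CN; the SAN list should include the wildcard
openssl x509 -in "$TMP/crt.pem" -noout -subject
openssl x509 -in "$TMP/crt.pem" -noout -text | grep DNS:
```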
Update the secrets inside the planet container.
# ensure the NEWNAME variable is set in the environment
kubectl replace -f /opt/anaconda/swapname/$NEWNAME/secret.yml
Pods will be restarted after updating other k8s resources like config maps & ingress controller configs.
Update config maps & ingress config
Update the exported configMap to use $NEWNAME by searching and replacing the old name with the new one.
# ensure variables are set
# copy the exported configMap to new location; we will manipulate this one
cp /opt/anaconda/swapname/$ORIGNAME/configmap.yaml /opt/anaconda/swapname/$NEWNAME/.
cp /opt/anaconda/swapname/$ORIGNAME/ingress.yaml /opt/anaconda/swapname/$NEWNAME/.
cd /opt/anaconda/swapname/$NEWNAME
# configmap: dry-run the substitution and confirm no references to $ORIGNAME remain
sed -e "s/$ORIGNAME/$NEWNAME/g" configmap.yaml | grep $ORIGNAME
sed -e "s/$ORIGNAME/$NEWNAME/g" configmap.yaml | grep $NEWNAME | less
# if everything looks good, run the sed command again with -i to replace in place
sed -i -e "s/$ORIGNAME/$NEWNAME/g" configmap.yaml
# replace configmap in k8s
kubectl replace -f ./configmap.yaml
# this will replace all config maps
# configmap/anaconda-enterprise-anaconda-platform.yml replaced
# configmap/anaconda-enterprise-install replaced
# configmap/anaconda-enterprise-nginx-config replaced
# configmap/docs-nginx-config replaced
# ingress: dry-run the substitution and confirm no references to $ORIGNAME remain
sed -e "s/$ORIGNAME/$NEWNAME/g" ingress.yaml | grep $ORIGNAME
sed -e "s/$ORIGNAME/$NEWNAME/g" ingress.yaml | grep $NEWNAME
# if everything looks good, run the sed command again with -i to replace in place
sed -i -e "s/$ORIGNAME/$NEWNAME/g" ingress.yaml
# replace ingress in k8s
kubectl replace -f ./ingress.yaml
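One subtlety in the sed commands above: the dots in $ORIGNAME are regex metacharacters, so each one matches any character. For these hostnames a false match is extremely unlikely, but escaping the dots makes the substitution strict. A runnable illustration:

```shell
ORIGNAME=ae.dev.anaconda.com
NEWNAME=aenew.dev.anaconda.com
# unescaped: '.' matches any character, so a near-miss string still matches
echo "aeXdev.anaconda.com" | sed -e "s/$ORIGNAME/$NEWNAME/g"
# escaped: only a literal '.' matches, so the near-miss passes through untouched
ESCAPED=$(printf '%s' "$ORIGNAME" | sed 's/\./\\./g')
echo "aeXdev.anaconda.com" | sed -e "s/$ESCAPED/$NEWNAME/g"
```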
Restart all pods
Restart the pods and ensure they all come back up.
kubectl delete po --all -n default
watch 'kubectl get pods | grep -v Running'
Update keycloak clients (if needed)
This should not be needed, since the clients should not reference $ORIGNAME, but verify: log in to Keycloak, go to the Clients page, and check the anaconda-platform client specifically.
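A quicker check than clicking through every field is to export the client to JSON from the admin console and grep it for the old name. The export below is a hand-written stand-in (the real one comes from Keycloak, and its exact shape varies by version); the grep pattern is the point:

```shell
# Stand-in client export, generated here only so the check can be shown;
# in practice, grep the JSON exported from Keycloak's admin console.
ORIGNAME=ae.dev.anaconda.com
TMP=$(mktemp -d)
cat > "$TMP/anaconda-platform.json" <<EOF
{
  "clientId": "anaconda-platform",
  "redirectUris": ["https://aenew.dev.anaconda.com/*"]
}
EOF
# any hit here means the client still references the old hostname
grep "$ORIGNAME" "$TMP/anaconda-platform.json" || echo "no stale references"
```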
Clean up Postgres DB
Projects created under the old DNS name fail to start after the hostname change. To fix this, exec into the Postgres pod and update the repo_url for the projects. Replace oldURL.dev.anaconda.com and newURL.dev.anaconda.com in the command below before running it.
# Connect to postgres:
kubectl exec -it anaconda-enterprise-postgres-podname /bin/bash
# use psql to connect to DB and update project entries
:/# psql -U postgres
psql (9.6.xx)
Type "help" for help.
postgres=# \c anaconda_storage
You are now connected to database "anaconda_storage" as user "postgres".
# Update repo_url column with new DNS name for all rows
anaconda_storage=# UPDATE projects SET repo_url = REPLACE(repo_url,'oldURL.dev.anaconda.com','newURL.dev.anaconda.com');
UPDATE 4
# ^D to exit
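The REPLACE call above rewrites only the matching substring and leaves the rest of each repo_url intact. The same transformation, sketched with bash string substitution on a hypothetical sample URL:

```shell
old=oldURL.dev.anaconda.com
new=newURL.dev.anaconda.com
# hypothetical repo_url of the general shape stored in the projects table
repo_url="https://$old/repository/sample-project.git"
# ${var//pattern/replacement} mirrors SQL REPLACE(repo_url, old, new)
echo "${repo_url//$old/$new}"
```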