Keystone supports using non-persistent fernet tokens instead of UUID tokens written to the database, and this has been the default setting since Ocata. This setting is in some cases better in terms of performance and manageability (no more token DB table cleanups), and OpenStack-Helm should be able to support it. The general issue with fernet tokens is that the keys used to encrypt them need to be persistent and shared across the cluster. Moreover, the "rotate" operation generates a new key, so the key repository changes over time. This commit implements fernet token support as follows:

* A 'keystone-fernet-keys' secret is created to serve as the key repository.
* A new fernet-setup Job populates the secret with the initial keys.
* A new fernet-rotate CronJob runs periodically (weekly by default), performs the key rotation operation and updates the secret.
* The secret is mounted into the keystone-api pods at /etc/keystone/fernet-tokens.

It turns out Kubernetes updates secrets attached to pods automatically, so thanks to Keystone's fernet token implementation we don't need to worry about synchronizing the key repository ourselves. Everything should be fine unless the fernet-rotate job runs before all of the pods have noticed the change in the secret. As in a real-world scenario you would rotate your keys no more often than once an hour, this should be totally fine.

Implements: blueprint keystone-fernet-tokens
Change-Id: Ifc84b8c97e1a85d30eb46260582d9c58220fbf0a
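The rotation mechanics described above can be sketched in shell. Keystone stores fernet keys as files named by index: 0 is the staged key and the highest index is the primary key used to issue new tokens; `keystone-manage fernet_rotate` promotes the staged key and creates a fresh one in its place. The snippet below only simulates that file layout in a temporary directory; the key contents are placeholders, not real fernet keys:

```shell
# Sketch of how rotation rearranges a fernet key repository.
# Key 0 is the staged key; the highest index is the primary key.
repo=$(mktemp -d)
printf 'key-A' > "$repo/0"   # staged key, promoted on the next rotation
printf 'key-B' > "$repo/1"   # current primary key
# Rotation promotes the staged key to the next (highest) index...
next=$(( $(ls "$repo" | sort -n | tail -n 1) + 1 ))
mv "$repo/0" "$repo/$next"
# ...and generates a fresh staged key in its place.
head -c 32 /dev/urandom | base64 > "$repo/0"
ls "$repo" | sort -n
```

Because old keys stay in the repository and remain valid for decryption, a pod that briefly lags behind on the updated secret can still validate tokens issued by its peers, which is why only back-to-back rotations are a concern.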
Kubeadm AIO Container
This container builds a small all-in-one (AIO), kubeadm-based Kubernetes deployment for development and gating use.
Instructions
OS Specific Host setup:
Ubuntu:
From a freshly provisioned Ubuntu 16.04 LTS host run:
sudo apt-get update -y
sudo apt-get install -y \
  docker.io \
  nfs-common \
  git make
OS Independent Host setup:
You should install the kubectl and helm binaries:
KUBE_VERSION=v1.6.7
HELM_VERSION=v2.5.1
TMP_DIR=$(mktemp -d)
curl -sSL https://storage.googleapis.com/kubernetes-release/release/${KUBE_VERSION}/bin/linux/amd64/kubectl -o ${TMP_DIR}/kubectl
chmod +x ${TMP_DIR}/kubectl
sudo mv ${TMP_DIR}/kubectl /usr/local/bin/kubectl
curl -sSL https://storage.googleapis.com/kubernetes-helm/helm-${HELM_VERSION}-linux-amd64.tar.gz | tar -zxv --strip-components=1 -C ${TMP_DIR}
sudo mv ${TMP_DIR}/helm /usr/local/bin/helm
rm -rf ${TMP_DIR}
And clone the OpenStack-Helm repo:
git clone https://git.openstack.org/openstack/openstack-helm
Build the AIO environment (optional)
A known-good image is published to Docker Hub on a fairly regular basis, but if you wish to build your own image, run the following from the root directory of the OpenStack-Helm repo:
export KUBEADM_IMAGE=openstackhelm/kubeadm-aio:v1.6.7
sudo docker build --pull -t ${KUBEADM_IMAGE} tools/kubeadm-aio
Deploy the AIO environment
To launch the environment run:
export KUBEADM_IMAGE=openstackhelm/kubeadm-aio:v1.6.7
export KUBE_VERSION=v1.6.7
./tools/kubeadm-aio/kubeadm-aio-launcher.sh
export KUBECONFIG=${HOME}/.kubeadm-aio/admin.conf
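The image tag and KUBE_VERSION above are expected to agree. A small guard like the following can catch a mismatch before launching; it is a sketch that assumes the `openstackhelm/kubeadm-aio:<version>` tag format used above:

```shell
# Sketch: warn if the kubeadm-aio image tag drifts from KUBE_VERSION.
# Assumes the tag format openstackhelm/kubeadm-aio:<version>.
export KUBEADM_IMAGE=openstackhelm/kubeadm-aio:v1.6.7
export KUBE_VERSION=v1.6.7
if [ "${KUBEADM_IMAGE##*:}" != "${KUBE_VERSION}" ]; then
  echo "warning: image tag ${KUBEADM_IMAGE##*:} != KUBE_VERSION ${KUBE_VERSION}" >&2
fi
```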
Once this has run without errors, you should have a single-node Kubernetes environment running, with Helm, Calico, and the appropriate RBAC rules and node labels in place to get developing.
If you wish to use this environment as the primary Kubernetes environment on your host, you may run the following, but note that this will overwrite any previous client configuration you may have:
mkdir -p ${HOME}/.kube
cat ${HOME}/.kubeadm-aio/admin.conf > ${HOME}/.kube/config
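Since this copy replaces the file wholesale, you may want to keep a backup of any existing config first. The sketch below uses a temporary directory as a stand-in for $HOME so it is self-contained; the file contents are illustrative:

```shell
# Sketch: back up an existing kubeconfig before overwriting it.
# FAKE_HOME stands in for $HOME so the sketch is self-contained.
FAKE_HOME=$(mktemp -d)
mkdir -p "${FAKE_HOME}/.kube"
echo 'old-config' > "${FAKE_HOME}/.kube/config"
# Keep the previous config under a .bak name before replacing it:
cp "${FAKE_HOME}/.kube/config" "${FAKE_HOME}/.kube/config.bak"
echo 'admin-conf' > "${FAKE_HOME}/.kube/config"
```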
If you wish to create dummy network devices for Neutron to manage, there is a helper script that can set them up for you:
sudo docker exec kubelet /usr/bin/openstack-helm-aio-network-prep
Logs
You can get the logs from your kubeadm-aio container by running:
sudo docker logs -f kubeadm-aio