Prepare Kubernetes
==================

In this section we assume you have a working Kubernetes cluster, with ``kubectl`` and Helm properly configured to interact with the cluster.
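
As a quick sanity check, you can confirm that both tools can reach the cluster. This is a minimal sketch; any read-only command works equally well:

.. code-block:: bash

    # Both commands should succeed if kubectl and Helm are configured correctly.
    kubectl get nodes
    helm list --all-namespaces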

Before deploying OpenStack components using OpenStack-Helm, you have to set labels on the Kubernetes worker nodes; these labels are used as node selectors.
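
For illustration, the OpenStack-Helm charts use node selector labels such as ``openstack-control-plane=enabled`` and ``openstack-compute-node=enabled``. A minimal sketch of setting them by hand (the node name ``worker-1`` is hypothetical):

.. code-block:: bash

    # Hypothetical node name; list your nodes with `kubectl get nodes`.
    kubectl label node worker-1 openstack-control-plane=enabled
    kubectl label node worker-1 openstack-compute-node=enabled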

You also have to create the necessary namespaces.
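
For example, assuming the ``openstack`` namespace used elsewhere in the OpenStack-Helm documentation, creating it is a single ``kubectl`` call:

.. code-block:: bash

    kubectl create namespace openstack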

You can use the ``prepare-k8s.sh`` script as an example of how to prepare the Kubernetes cluster for the OpenStack deployment. The script is assumed to be run from the openstack-helm repository:

.. code-block:: bash

    cd ~/osh/openstack-helm
    ./tools/deployment/common/prepare-k8s.sh
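
Once the script finishes, you can verify the result, e.g. by inspecting the node labels and the namespaces (a sketch):

.. code-block:: bash

    kubectl get nodes --show-labels
    kubectl get namespaces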

.. note::
    The above script sets labels on all Kubernetes nodes, including the
    control plane nodes, which are usually not meant to run workload pods
    (OpenStack pods in our case). So you have to either untaint the control
    plane nodes or modify the ``prepare-k8s.sh`` script so that it sets
    labels only on the worker nodes.
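
For reference, on a Kubeadm-based cluster the control plane taint can be removed as follows. This is a sketch: older Kubernetes releases used the ``node-role.kubernetes.io/master`` taint key instead:

.. code-block:: bash

    # The trailing dash removes the taint from all nodes that carry it.
    kubectl taint nodes --all node-role.kubernetes.io/control-plane-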