diff --git a/doc/source/install/before_deployment.rst b/doc/source/install/before_deployment.rst
deleted file mode 100644
index 8f5874db7f..0000000000
--- a/doc/source/install/before_deployment.rst
+++ /dev/null
@@ -1,41 +0,0 @@
-Before deployment
-=================
-
-Before proceeding with the steps outlined in the following
-sections and executing the actions detailed therein, it is
-imperative that you clone the essential Git repositories
-containing all the required Helm charts, deployment scripts,
-and Ansible roles. This preliminary step will ensure that
-you have access to the necessary assets for a seamless
-deployment process.
-
-.. code-block:: bash
-
-  mkdir ~/osh
-  cd ~/osh
-  git clone https://opendev.org/openstack/openstack-helm.git
-  git clone https://opendev.org/openstack/openstack-helm-infra.git
-
-
-All further steps assume these two repositories are cloned into the
-`~/osh` directory.
-
-Next, you need to update the dependencies for all the charts in both OpenStack-Helm
-repositories. This can be done by running the following commands:
-
-.. code-block:: bash
-
-  cd ~/osh/openstack-helm
-  ./tools/deployment/common/prepare-charts.sh
-
-Also before deploying the OpenStack cluster you have to specify the
-OpenStack and the operating system version that you would like to use
-for deployment. For doing this export the following environment variables
-
-.. code-block:: bash
-
-  export OPENSTACK_RELEASE=2024.1
-  export FEATURES="${OPENSTACK_RELEASE} ubuntu_jammy"
-
-.. note::
-  The list of supported versions can be found :doc:`here `.
diff --git a/doc/source/install/before_starting.rst b/doc/source/install/before_starting.rst
new file mode 100644
index 0000000000..241abe4634
--- /dev/null
+++ b/doc/source/install/before_starting.rst
@@ -0,0 +1,21 @@
+Before starting
+===============
+
+The OpenStack-Helm charts are published in the `openstack-helm`_ and
+`openstack-helm-infra`_ helm repositories. Let's enable them:
+
+.. code-block:: bash
+
+  helm repo add openstack-helm https://tarballs.opendev.org/openstack/openstack-helm
+  helm repo add openstack-helm-infra https://tarballs.opendev.org/openstack/openstack-helm-infra
+
+The OpenStack-Helm `plugin`_ provides some helper commands used later on.
+So, let's install it:
+
+.. code-block:: bash
+
+  helm plugin install https://opendev.org/openstack/openstack-helm-plugin
+
+.. _openstack-helm: https://tarballs.opendev.org/openstack/openstack-helm
+.. _openstack-helm-infra: https://tarballs.opendev.org/openstack/openstack-helm-infra
+.. _plugin: https://opendev.org/openstack/openstack-helm-plugin.git
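Assuming the repositories and the plugin were added exactly as above, a quick optional check that everything is in place might look like this (the ``keystone`` chart is only an example to search for; these commands do not change anything):

.. code-block:: bash

  # Confirm both repositories are registered and refresh their indexes.
  helm repo list
  helm repo update

  # Verify that charts are discoverable and that the plugin is installed.
  helm search repo openstack-helm/keystone
  helm plugin list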
diff --git a/doc/source/install/deploy_ceph.rst b/doc/source/install/deploy_ceph.rst
deleted file mode 100644
index 38709f53b6..0000000000
--- a/doc/source/install/deploy_ceph.rst
+++ /dev/null
@@ -1,52 +0,0 @@
-Deploy Ceph
-===========
-
-Ceph is a highly scalable and fault-tolerant distributed storage
-system designed to store vast amounts of data across a cluster of
-commodity hardware. It offers object storage, block storage, and
-file storage capabilities, making it a versatile solution for
-various storage needs. Ceph's architecture is based on a distributed
-object store, where data is divided into objects, each with its
-unique identifier, and distributed across multiple storage nodes.
-It uses a CRUSH algorithm to ensure data resilience and efficient
-data placement, even as the cluster scales. Ceph is widely used
-in cloud computing environments and provides a cost-effective and
-flexible storage solution for organizations managing large volumes of data.
-
-Kubernetes introduced the CSI standard to allow storage providers
-like Ceph to implement their drivers as plugins. Kubernetes can
-use the CSI driver for Ceph to provision and manage volumes
-directly. By means of CSI stateful applications deployed on top
-of Kubernetes can use Ceph to store their data.
-
-At the same time, Ceph provides the RBD API, which applications
-can utilize to create and mount block devices distributed across
-the Ceph cluster. The OpenStack Cinder service utilizes this Ceph
-capability to offer persistent block devices to virtual machines
-managed by the OpenStack Nova.
-
-The recommended way to deploy Ceph on top of Kubernetes is by means
-of `Rook`_ operator. Rook provides Helm charts to deploy the operator
-itself which extends the Kubernetes API adding CRDs that enable
-managing Ceph clusters via Kuberntes custom objects. For details please
-refer to the `Rook`_ documentation.
-
-To deploy the Rook Ceph operator and a Ceph cluster you can use the script
-`ceph-rook.sh`_. Then to generate the client secrets to interface with the Ceph
-RBD API use this script `ceph-adapter-rook.sh`_
-
-.. code-block:: bash
-
-  cd ~/osh/openstack-helm-infra
-  ./tools/deployment/ceph/ceph-rook.sh
-  ./tools/deployment/ceph/ceph-adapter-rook.sh
-
-.. note::
-  Please keep in mind that these are the deployment scripts that we
-  use for testing. For example we place Ceph OSD data object on loop devices
-  which are slow and are not recommended to use in production.
-
-
-.. _Rook: https://rook.io/
-.. _ceph-rook.sh: https://opendev.org/openstack/openstack-helm-infra/src/branch/master/tools/deployment/ceph/ceph-rook.sh
-.. _ceph-adapter-rook.sh: https://opendev.org/openstack/openstack-helm-infra/src/branch/master/tools/deployment/ceph/ceph-adapter-rook.sh
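The note above mentions that the test scripts place Ceph OSD data on loop devices. As a hedged sketch only, such a device could be prepared like this before running the scripts (the file path, size, and resulting device name are illustrative assumptions; production clusters should use real disks):

.. code-block:: bash

  # Back a loop device with a sparse file.
  sudo mkdir -p /var/lib/openstack-helm
  sudo truncate -s 10G /var/lib/openstack-helm/ceph-osd.img
  sudo losetup -f /var/lib/openstack-helm/ceph-osd.img

  # Note which /dev/loopX device was allocated.
  losetup -a | grep ceph-osd.img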
diff --git a/doc/source/install/deploy_ingress_controller.rst b/doc/source/install/deploy_ingress_controller.rst
deleted file mode 100644
index 3e3da11fea..0000000000
--- a/doc/source/install/deploy_ingress_controller.rst
+++ /dev/null
@@ -1,51 +0,0 @@
-Deploy ingress controller
-=========================
-
-Deploying an ingress controller when deploying OpenStack on Kubernetes
-is essential to ensure proper external access and SSL termination
-for your OpenStack services.
-
-In the OpenStack-Helm project, we usually deploy multiple `ingress-nginx`_
-controller instances to optimize traffic routing:
-
-* In the `kube-system` namespace, we deploy an ingress controller that
-  monitors ingress objects across all namespaces, primarily focusing on
-  routing external traffic into the OpenStack environment.
-
-* In the `openstack` namespace, we deploy an ingress controller that
-  handles traffic exclusively within the OpenStack namespace. This instance
-  plays a crucial role in SSL termination for enhanced security between
-  OpenStack services.
-
-* In the `ceph` namespace, we deploy an ingress controller that is dedicated
-  to routing traffic specifically to the Ceph Rados Gateway service, ensuring
-  efficient communication with Ceph storage resources.
-
-You can utilize any other ingress controller implementation that suits your
-needs best. See for example the list of available `ingress controllers`_.
-
-Ensure that the ingress controller pods are deployed with the `app: ingress-api`
-label which is used by the OpenStack-Helm as a selector for the Kubernetes
-services that are exposed as OpenStack endpoints.
-
-For example, the OpenStack-Helm `keystone` chart by default deploys a service
-that routes traffic to the ingress controller pods selected using the
-`app: ingress-api` label. Then it also deploys an ingress object that references
-the **IngressClass** named `nginx`. This ingress object corresponds to the HTTP
-virtual host routing the traffic to the Keystone API service which works as an
-endpoint for Keystone pods.
-
-.. image:: deploy_ingress_controller.jpg
-   :width: 100%
-   :align: center
-   :alt: deploy-ingress-controller
-
-To deploy these three ingress controller instances you can use the script `ingress.sh`_
-
-.. code-block:: bash
-
-  cd ~/osh/openstack-helm
-  ./tools/deployment/common/ingress.sh
-
-.. _ingress.sh: https://opendev.org/openstack/openstack-helm/src/branch/master/tools/deployment/common/ingress.sh
-.. _ingress-nginx: https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/README.md
-.. _ingress controllers: https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/
diff --git a/doc/source/install/deploy_kubernetes.rst b/doc/source/install/deploy_kubernetes.rst
deleted file mode 100644
index 2132d3461c..0000000000
--- a/doc/source/install/deploy_kubernetes.rst
+++ /dev/null
@@ -1,143 +0,0 @@
-Deploy Kubernetes
-=================
-
-OpenStack-Helm provides charts that can be deployed on any Kubernetes cluster if it meets
-the supported version requirements. However, deploying the Kubernetes cluster itself is beyond
-the scope of OpenStack-Helm.
-
-You can use any Kubernetes deployment tool for this purpose. In this guide, we detail how to set up
-a Kubernetes cluster using Kubeadm and Ansible. While not production-ready, this cluster is ideal
-as a starting point for lab or proof-of-concept environments.
-
-All OpenStack projects test their code through an infrastructure managed by the CI
-tool, Zuul, which executes Ansible playbooks on one or more test nodes. Therefore, we employ Ansible
-roles/playbooks to install required packages, deploy Kubernetes, and then execute tests on it.
-
-To establish a test environment, the Ansible role deploy-env_ is employed. This role establishes
-a basic single/multi-node Kubernetes cluster, ensuring the functionality of commonly used
-deployment configurations. The role is compatible with Ubuntu Focal and Ubuntu Jammy distributions.
-
-Install Ansible
----------------
-
-.. code-block:: bash
-
-  pip install ansible
-
-Prepare Ansible roles
----------------------
-
-Here is the Ansible `playbook`_ that is used to deploy Kubernetes. The roles used in this playbook
-are defined in different repositories. So in addition to OpenStack-Helm repositories
-that we assume have already been cloned to the `~/osh` directory you have to clone
-yet another one
-
-.. code-block:: bash
-
-  cd ~/osh
-  git clone https://opendev.org/zuul/zuul-jobs.git
-
-Now let's set the environment variable ``ANSIBLE_ROLES_PATH`` which specifies
-where Ansible will lookup roles
-
-.. code-block:: bash
-
-  export ANSIBLE_ROLES_PATH=~/osh/openstack-helm-infra/roles:~/osh/zuul-jobs/roles
-
-To avoid setting it every time when you start a new terminal instance you can define this
-in the Ansible configuration file. Please see the Ansible documentation.
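For instance, a minimal Ansible configuration file in the directory you run Ansible from can persist the roles path, so new shells do not need ``ANSIBLE_ROLES_PATH``. The location and contents below are an assumption matching the ``~/osh`` layout used in this guide:

.. code-block:: bash

  # ansible.cfg is picked up automatically when Ansible is run from ~/osh.
  tee ~/osh/ansible.cfg << EOF
  [defaults]
  roles_path = ~/osh/openstack-helm-infra/roles:~/osh/zuul-jobs/roles
  EOF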
-
-Prepare Ansible inventory
--------------------------
-
-We assume you have three nodes, usually VMs. Those nodes must be available via
-SSH using the public key authentication and a ssh user (let say `ubuntu`)
-must have passwordless sudo on the nodes.
-
-Create the Ansible inventory file using the following command
-
-.. code-block:: bash
-
-  cat > ~/osh/inventory.yaml <`.
-
-.. _rabbitmq.sh: https://opendev.org/openstack/openstack-helm/src/branch/master/tools/deployment/component/common/rabbitmq.sh
-.. _mariadb.sh: https://opendev.org/openstack/openstack-helm/src/branch/master/tools/deployment/component/common/mariadb.sh
-.. _memcached.sh: https://opendev.org/openstack/openstack-helm/src/branch/master/tools/deployment/component/common/memcached.sh
-.. _openstack-helm-infra: https://opendev.org/openstack/openstack-helm-infra.git
diff --git a/doc/source/install/index.rst b/doc/source/install/index.rst
index d684e0e0d3..7f2bc66839 100644
--- a/doc/source/install/index.rst
+++ b/doc/source/install/index.rst
@@ -1,17 +1,12 @@
 Installation
 ============
 
-Contents:
+Here are sections that describe how to install OpenStack using OpenStack-Helm:
 
 .. toctree::
    :maxdepth: 2
 
-   before_deployment
-   deploy_kubernetes
-   prepare_kubernetes
-   deploy_ceph
-   setup_openstack_client
-   deploy_ingress_controller
-   deploy_openstack_backend
-   deploy_openstack
-
+   before_starting
+   kubernetes
+   prerequisites
+   openstack
diff --git a/doc/source/install/deploy_ingress_controller.jpg b/doc/source/install/ingress.jpg
similarity index 100%
rename from doc/source/install/deploy_ingress_controller.jpg
rename to doc/source/install/ingress.jpg
diff --git a/doc/source/install/kubernetes.rst b/doc/source/install/kubernetes.rst
new file mode 100644
index 0000000000..8869e0b464
--- /dev/null
+++ b/doc/source/install/kubernetes.rst
@@ -0,0 +1,182 @@
+Kubernetes
+==========
+
+OpenStack-Helm provides charts that can be deployed on any Kubernetes cluster if it meets
+the version :doc:`requirements `. However, deploying the Kubernetes cluster itself is beyond
+the scope of OpenStack-Helm.
+
+You can use any Kubernetes deployment tool for this purpose. In this guide, we detail how to set up
+a Kubernetes cluster using Kubeadm and Ansible. While not production-ready, this cluster is ideal
+as a starting point for lab or proof-of-concept environments.
+
+All OpenStack projects test their code through an infrastructure managed by the CI
+tool, Zuul, which executes Ansible playbooks on one or more test nodes. Therefore, we employ Ansible
+roles/playbooks to install required packages, deploy Kubernetes, and then execute tests on it.
+
+To establish a test environment, the Ansible role `deploy-env`_ is employed. This role deploys
+a basic single/multi-node Kubernetes cluster, used to prove the functionality of commonly used
+deployment configurations. The role is compatible with Ubuntu Focal and Ubuntu Jammy distributions.
+
+.. note::
+  The role `deploy-env`_ is not idempotent and is assumed to be applied to a clean environment.
+
+Clone roles git repositories
+----------------------------
+
+Before proceeding with the steps outlined in the following sections, it is
+imperative that you clone the git repositories containing the required Ansible roles.
+
+.. code-block:: bash
+
+  mkdir ~/osh
+  cd ~/osh
+  git clone https://opendev.org/openstack/openstack-helm-infra.git
+  git clone https://opendev.org/zuul/zuul-jobs.git
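If you want to confirm that the role this guide relies on is actually in place after cloning, a purely optional check (the paths assume the ``~/osh`` layout above) is:

.. code-block:: bash

  # The deploy-env role should be present in the openstack-helm-infra repository.
  ls ~/osh/openstack-helm-infra/roles | grep deploy-env

  # The zuul-jobs repository provides the generic roles referenced by the playbook.
  ls ~/osh/zuul-jobs/roles | head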
+
+Install Ansible
+---------------
+
+.. code-block:: bash
+
+  pip install ansible
+
+Set roles lookup path
+---------------------
+
+Now let's set the environment variable ``ANSIBLE_ROLES_PATH`` which specifies
+where Ansible will look up roles
+
+.. code-block:: bash
+
+  export ANSIBLE_ROLES_PATH=~/osh/openstack-helm-infra/roles:~/osh/zuul-jobs/roles
+
+To avoid setting it every time you start a new terminal instance you can define this
+in the Ansible configuration file. Please see the Ansible documentation.
+
+Prepare inventory
+-----------------
+
+The example below assumes that there are four nodes which must be available via
+SSH using public key authentication and an SSH user (let's say ``ubuntu``)
+must have passwordless sudo on the nodes.
+
+.. code-block:: bash
+
+  cat > ~/osh/inventory.yaml <
+  ~/osh/deploy-env.yaml <
+
+.. code-block:: bash
+
+  tee ${OVERRIDES_DIR}/neutron/values_overrides/neutron_simple.yaml << EOF
+  conf:
+    neutron:
+      DEFAULT:
+        l3_ha: False
+        max_l3_agents_per_router: 1
+    # will be attached to the br-ex bridge.
+    # The IP assigned to the interface will be moved to the bridge.
+    auto_bridge_add:
+      br-ex: ${PROVIDER_INTERFACE}
+    plugins:
+      ml2_conf:
+        ml2_type_flat:
+          flat_networks: public
+      openvswitch_agent:
+        ovs:
+          bridge_mappings: public:br-ex
+  EOF
+
+  helm upgrade --install neutron openstack-helm/neutron \
+    --namespace=openstack \
+    $(helm osh get-values-overrides -p ${OVERRIDES_DIR} -c neutron neutron_simple ${FEATURES})
+
+  helm osh wait-for-pods openstack
+
+Horizon
+~~~~~~~
+
+OpenStack Horizon is the web application that is intended to provide a graphical
+user interface to OpenStack services.
+
+Let's deploy it:
+
+.. code-block:: bash
+
+  helm upgrade --install horizon openstack-helm/horizon \
+    --namespace=openstack \
+    $(helm osh get-values-overrides -p ${OVERRIDES_DIR} -c horizon ${FEATURES})
+
+  helm osh wait-for-pods openstack
+
+OpenStack client
+----------------
+
+Installing the OpenStack client on the developer's machine is a vital step.
+The easiest way to install the OpenStack client is to create a Python
+virtual environment and install the client using ``pip``.
+
+.. code-block:: bash
+
+  python3 -m venv ~/openstack-client
+  source ~/openstack-client/bin/activate
+  pip install python-openstackclient
+
+Now let's prepare the OpenStack client configuration file:
+
+.. code-block:: bash
+
+  mkdir -p ~/.config/openstack
+  tee ~/.config/openstack/clouds.yaml << EOF
+  clouds:
+    openstack_helm:
+      region_name: RegionOne
+      identity_api_version: 3
+      auth:
+        username: 'admin'
+        password: 'password'
+        project_name: 'admin'
+        project_domain_name: 'default'
+        user_domain_name: 'default'
+        auth_url: 'http://keystone.openstack.svc.cluster.local/v3'
+  EOF
+
+That is it! Now you can use the OpenStack client. Try to run this:
+
+.. code-block:: bash
+
+  openstack --os-cloud openstack_helm endpoint list
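Beyond listing endpoints, a few more read-only commands can help confirm that the services deployed earlier registered themselves correctly. These only query the APIs, and the exact output depends on which charts you installed:

.. code-block:: bash

  openstack --os-cloud openstack_helm service list
  openstack --os-cloud openstack_helm compute service list
  openstack --os-cloud openstack_helm network agent list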
+
+.. note::
+
+  In some cases it is more convenient to use the OpenStack client
+  inside a Docker container. OpenStack-Helm provides the
+  `openstackhelm/openstack-client`_ image. The below is an example
+  of how to use it.
+
+.. code-block:: bash
+
+  docker run -it --rm --network host \
+    -v ~/.config/openstack/clouds.yaml:/etc/openstack/clouds.yaml \
+    -e OS_CLOUD=openstack_helm \
+    docker.io/openstackhelm/openstack-client:${OPENSTACK_RELEASE} \
+    openstack endpoint list
+
+Remember that the container file system is ephemeral and is destroyed
+when you stop the container. So if you would like to use the
+OpenStack client capabilities interfacing with the file system then you have to mount
+a directory from the host file system where necessary files are located.
+For example, this is useful when you create a key pair and save the private key in a file
+which is then used for SSH access to VMs. Or it could be Heat templates
+which you prepare in advance and then use with the OpenStack client.
+
+For convenience, you can create an executable entry point that runs the
+OpenStack client in a Docker container. See for example `setup-client.sh`_.
+
+.. _setup-client.sh: https://opendev.org/openstack/openstack-helm/src/branch/master/tools/deployment/common/setup-client.sh
+.. _openstackhelm/openstack-client: https://hub.docker.com/r/openstackhelm/openstack-client/tags?page=&page_size=&ordering=&name=
diff --git a/doc/source/install/prepare_kubernetes.rst b/doc/source/install/prepare_kubernetes.rst
deleted file mode 100644
index 2b5a920a7c..0000000000
--- a/doc/source/install/prepare_kubernetes.rst
+++ /dev/null
@@ -1,28 +0,0 @@
-Prepare Kubernetes
-==================
-
-In this section we assume you have a working Kubernetes cluster and
-Kubectl and Helm properly configured to interact with the cluster.
-
-Before deploying OpenStack components using OpenStack-Helm you have to set
-labels on Kubernetes worker nodes which are used as node selectors.
-
-Also necessary namespaces must be created.
-
-You can use the `prepare-k8s.sh`_ script as an example of how to prepare
-the Kubernetes cluster for OpenStack deployment. The script is assumed to be run
-from the openstack-helm repository
-
-.. code-block:: bash
-
-  cd ~/osh/openstack-helm
-  ./tools/deployment/common/prepare-k8s.sh
-
-
-.. note::
-  Pay attention that in the above script we set labels on all Kubernetes nodes including
-  Kubernetes control plane nodes which are usually not aimed to run workload pods
-  (OpenStack in our case). So you have to either untaint control plane nodes or modify the
-  `prepare-k8s.sh`_ script so it sets labels only on the worker nodes.
-
-.. _prepare-k8s.sh: https://opendev.org/openstack/openstack-helm/src/branch/master/tools/deployment/common/prepare-k8s.sh
diff --git a/doc/source/install/prerequisites.rst b/doc/source/install/prerequisites.rst
new file mode 100644
index 0000000000..785a70b411
--- /dev/null
+++ b/doc/source/install/prerequisites.rst
@@ -0,0 +1,328 @@
+Kubernetes prerequisites
+========================
+
+Ingress controller
+------------------
+
+An ingress controller is essential when deploying OpenStack on Kubernetes
+to ensure proper external access for the OpenStack services.
+
+We recommend using the `ingress-nginx`_ controller because it is simple and provides
+all necessary features. It utilizes Nginx as a reverse proxy backend.
+Here is how to deploy it.
+
+First, let's create a namespace for the OpenStack workloads. The ingress
+controller must be deployed in the same namespace because OpenStack-Helm charts
+create service resources pointing to the ingress controller pods which
+in turn redirect traffic to particular OpenStack API pods.
+
+.. code-block:: bash
+
+  tee > /tmp/openstack_namespace.yaml <
+  /tmp/metallb_system_namespace.yaml <
+  /tmp/metallb_ipaddresspool.yaml <
+  /tmp/metallb_l2advertisement.yaml <
+  /tmp/openstack_endpoint_service.yaml <
+  /etc/resolv.conf
+
+or alternatively the ``resolvectl`` command can be used to configure the systemd-resolved.
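As a sketch of that ``resolvectl`` alternative (the interface name and the DNS address below are assumptions; use the uplink interface and the cluster DNS address that apply to your environment):

.. code-block:: bash

  # Point systemd-resolved at the cluster DNS for the cluster-local zones.
  sudo resolvectl dns eth0 10.96.0.10
  sudo resolvectl domain eth0 openstack.svc.cluster.local svc.cluster.local cluster.local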
+
+.. note::
+  In production environments you will probably choose to use a different DNS
+  domain for public OpenStack endpoints. This is easy to achieve by setting
+  the necessary chart values. All OpenStack-Helm chart values have the
+  ``endpoints`` section where you can specify the ``host_fqdn_override``.
+  In this case a chart will create additional ``Ingress`` resources to
+  handle the external domain name and also the Keystone endpoint catalog
+  will be updated.
+
+Here is an example of how to set the ``host_fqdn_override`` for the Keystone chart:
+
+.. code-block:: yaml
+
+  endpoints:
+    identity:
+      host_fqdn_override:
+        public: "keystone.example.com"
+
+Ceph
+----
+
+Ceph is a highly scalable and fault-tolerant distributed storage
+system. It offers object storage, block storage, and
+file storage capabilities, making it a versatile solution for
+various storage needs.
+
+Kubernetes CSI (Container Storage Interface) allows storage providers
+like Ceph to implement their drivers, so that Kubernetes can
+use the CSI driver to provision and manage volumes which can be
+used by stateful applications deployed on top of Kubernetes
+to store their data. In the context of OpenStack running in Kubernetes,
+Ceph is used as a storage backend for services like MariaDB, RabbitMQ and
+other services that require persistent storage. By default OpenStack-Helm
+stateful sets expect to find a storage class named **general**.
+
+At the same time, Ceph provides the RBD API, which applications
+can utilize directly to create and mount block devices distributed across
+the Ceph cluster. For example, the OpenStack Cinder service utilizes this Ceph
+capability to offer persistent block devices to virtual machines
+managed by OpenStack Nova.
+
+The recommended way to manage Ceph on top of Kubernetes is by means
+of the `Rook`_ operator. The Rook project provides the Helm chart
+to deploy the Rook operator which extends the Kubernetes API
+adding CRDs that enable managing Ceph clusters via Kubernetes custom objects.
+There is also another Helm chart that facilitates deploying Ceph clusters
+using Rook custom resources.
+
+For details please refer to the `Rook`_ documentation and the `charts`_.
+
+.. note::
+  The following script `ceph-rook.sh`_ (recommended for testing only) can be used as
+  an example of how to deploy the Rook Ceph operator and a Ceph cluster using the
+  Rook `charts`_. Please note that the script places Ceph OSDs on loopback devices
+  which is **not recommended** for production. The loopback devices must exist before
+  using this script.
+
+Once the Ceph cluster is deployed, the next step is to enable using it
+for services deployed by OpenStack-Helm charts. The ``ceph-adapter-rook`` chart
+provides the necessary functionality to do this. The chart will
+prepare Kubernetes secret resources containing Ceph client keys/configs
+that are later used to interface with the Ceph cluster.
+
+Here we assume the Ceph cluster is deployed in the ``ceph`` namespace.
+
+The procedure consists of two steps: 1) gather necessary entities from the Ceph cluster
+2) copy them to the ``openstack`` namespace:
+
+.. code-block:: bash
+
+  tee > /tmp/ceph-adapter-rook-ceph.yaml <
+  /tmp/ceph-adapter-rook-openstack.yaml <
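Once the two steps above have been applied with values along those lines, a quick way to confirm that the Ceph client key and configuration actually landed in the ``openstack`` namespace (the exact resource names depend on the values used) is:

.. code-block:: bash

  # Ceph-related secrets and configmaps copied for OpenStack-Helm to consume.
  kubectl -n openstack get secrets,configmaps | grep -i ceph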
-
-Remember that the container file system is ephemeral and is destroyed
-when you stop the container. So if you would like to use the
-Openstack client capabilities interfacing with the file system then you have to mount
-a directory from the host file system where you will read/write necessary files.
-For example, this is useful when you create a key pair and save the private key in a file
-which is then used for ssh access to VMs. Or it could be Heat recipes
-which you prepare in advance and then use with Openstack client.
-
-.. _setup-client.sh: https://opendev.org/openstack/openstack-helm/src/branch/master/tools/deployment/common/setup-client.sh
-.. _MetalLB: https://metallb.universe.tf
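To make the point about mounting host directories concrete, here is a hedged sketch of such an invocation; the host directory, the template file name, and the stack name are purely illustrative, and the client image is assumed to bundle the Heat client:

.. code-block:: bash

  # Make Heat templates prepared on the host visible inside the client container.
  mkdir -p ~/osh/heat-templates
  docker run -it --rm --network host \
    -v ~/.config/openstack/clouds.yaml:/etc/openstack/clouds.yaml \
    -v ~/osh/heat-templates:/templates:ro \
    -e OS_CLOUD=openstack_helm \
    docker.io/openstackhelm/openstack-client:${OPENSTACK_RELEASE} \
    openstack stack create -t /templates/basic_vm.yaml basic-vm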