Merge "Update docs: not require cloning git repos"

commit c0c4c9a231

@@ -1,41 +0,0 @@ (file removed)
Before deployment
=================

Before proceeding with the steps outlined in the following
sections, you must clone the Git repositories containing all
the required Helm charts, deployment scripts, and Ansible roles.
This preliminary step ensures that you have access to the
necessary assets for a seamless deployment process.

.. code-block:: bash

  mkdir ~/osh
  cd ~/osh
  git clone https://opendev.org/openstack/openstack-helm.git
  git clone https://opendev.org/openstack/openstack-helm-infra.git

All further steps assume these two repositories are cloned into the
`~/osh` directory.

Next, you need to update the dependencies for all the charts in both OpenStack-Helm
repositories. This can be done by running the following commands:

.. code-block:: bash

  cd ~/osh/openstack-helm
  ./tools/deployment/common/prepare-charts.sh

Also, before deploying the OpenStack cluster you have to specify the
OpenStack and the operating system version that you would like to use
for deployment. To do this, export the following environment variables

.. code-block:: bash

  export OPENSTACK_RELEASE=2024.1
  export FEATURES="${OPENSTACK_RELEASE} ubuntu_jammy"

.. note::
  The list of supported versions can be found :doc:`here </readme>`.
doc/source/install/before_starting.rst (new file, 21 lines)
@@ -0,0 +1,21 @@
Before starting
===============

The OpenStack-Helm charts are published in the `openstack-helm`_ and
`openstack-helm-infra`_ Helm repositories. Let's enable them:

.. code-block:: bash

  helm repo add openstack-helm https://tarballs.opendev.org/openstack/openstack-helm
  helm repo add openstack-helm-infra https://tarballs.opendev.org/openstack/openstack-helm-infra

The OpenStack-Helm `plugin`_ provides helper commands used later on,
so let's install it:

.. code-block:: bash

  helm plugin install https://opendev.org/openstack/openstack-helm-plugin
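
To double-check that the repositories and the plugin are usable before moving on,
you can run, for example:

.. code-block:: bash

  # Refresh the repo indexes and list the charts published in the new repos
  helm repo update
  helm search repo openstack-helm

  # The plugin should appear in the plugin list (its subcommands are invoked as "helm osh ...")
  helm plugin list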

.. _openstack-helm: https://tarballs.opendev.org/openstack/openstack-helm
.. _openstack-helm-infra: https://tarballs.opendev.org/openstack/openstack-helm-infra
.. _plugin: https://opendev.org/openstack/openstack-helm-plugin.git

@@ -1,52 +0,0 @@ (file removed)

Deploy Ceph
===========

Ceph is a highly scalable and fault-tolerant distributed storage
system designed to store vast amounts of data across a cluster of
commodity hardware. It offers object storage, block storage, and
file storage capabilities, making it a versatile solution for
various storage needs. Ceph's architecture is based on a distributed
object store, where data is divided into objects, each with its
unique identifier, and distributed across multiple storage nodes.
It uses the CRUSH algorithm to ensure data resilience and efficient
data placement, even as the cluster scales. Ceph is widely used
in cloud computing environments and provides a cost-effective and
flexible storage solution for organizations managing large volumes of data.

Kubernetes introduced the CSI standard to allow storage providers
like Ceph to implement their drivers as plugins. Kubernetes can
use the CSI driver for Ceph to provision and manage volumes
directly. By means of CSI, stateful applications deployed on top
of Kubernetes can use Ceph to store their data.

At the same time, Ceph provides the RBD API, which applications
can utilize to create and mount block devices distributed across
the Ceph cluster. The OpenStack Cinder service utilizes this Ceph
capability to offer persistent block devices to virtual machines
managed by OpenStack Nova.

The recommended way to deploy Ceph on top of Kubernetes is by means
of the `Rook`_ operator. Rook provides Helm charts to deploy the operator
itself, which extends the Kubernetes API by adding CRDs that enable
managing Ceph clusters via Kubernetes custom objects. For details please
refer to the `Rook`_ documentation.

To deploy the Rook Ceph operator and a Ceph cluster you can use the script
`ceph-rook.sh`_. Then, to generate the client secrets used to interface with the Ceph
RBD API, use the script `ceph-adapter-rook.sh`_

.. code-block:: bash

  cd ~/osh/openstack-helm-infra
  ./tools/deployment/ceph/ceph-rook.sh
  ./tools/deployment/ceph/ceph-adapter-rook.sh

.. note::
  Please keep in mind that these are the deployment scripts that we
  use for testing. For example, we place the Ceph OSD data objects on loop devices,
  which are slow and not recommended for use in production.


.. _Rook: https://rook.io/
.. _ceph-rook.sh: https://opendev.org/openstack/openstack-helm-infra/src/branch/master/tools/deployment/ceph/ceph-rook.sh
.. _ceph-adapter-rook.sh: https://opendev.org/openstack/openstack-helm-infra/src/branch/master/tools/deployment/ceph/ceph-adapter-rook.sh

@@ -1,51 +0,0 @@ (file removed)

Deploy ingress controller
=========================

Deploying an ingress controller when deploying OpenStack on Kubernetes
is essential to ensure proper external access and SSL termination
for your OpenStack services.

In the OpenStack-Helm project, we usually deploy multiple `ingress-nginx`_
controller instances to optimize traffic routing:

* In the `kube-system` namespace, we deploy an ingress controller that
  monitors ingress objects across all namespaces, primarily focusing on
  routing external traffic into the OpenStack environment.

* In the `openstack` namespace, we deploy an ingress controller that
  handles traffic exclusively within the OpenStack namespace. This instance
  plays a crucial role in SSL termination for enhanced security between
  OpenStack services.

* In the `ceph` namespace, we deploy an ingress controller that is dedicated
  to routing traffic specifically to the Ceph Rados Gateway service, ensuring
  efficient communication with Ceph storage resources.

You can utilize any other ingress controller implementation that suits your
needs best. See, for example, the list of available `ingress controllers`_.
Ensure that the ingress controller pods are deployed with the `app: ingress-api`
label, which is used by OpenStack-Helm as a selector for the Kubernetes
services that are exposed as OpenStack endpoints.

For example, the OpenStack-Helm `keystone` chart by default deploys a service
that routes traffic to the ingress controller pods selected using the
`app: ingress-api` label. Then it also deploys an ingress object that references
the **IngressClass** named `nginx`. This ingress object corresponds to the HTTP
virtual host routing the traffic to the Keystone API service which works as an
endpoint for Keystone pods.

.. image:: deploy_ingress_controller.jpg
    :width: 100%
    :align: center
    :alt: deploy-ingress-controller

To deploy these three ingress controller instances you can use the script `ingress.sh`_

.. code-block:: bash

  cd ~/osh/openstack-helm
  ./tools/deployment/common/ingress.sh
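
As a quick sanity check you can verify that the ingress controller pods carry the
`app: ingress-api` label mentioned above:

.. code-block:: bash

  # Ingress controller pods selected by the label OpenStack-Helm uses for its services
  kubectl get pods --all-namespaces -l app=ingress-api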

.. _ingress.sh: https://opendev.org/openstack/openstack-helm/src/branch/master/tools/deployment/common/ingress.sh
.. _ingress-nginx: https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/README.md
.. _ingress controllers: https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/

@@ -1,143 +0,0 @@ (file removed)

Deploy Kubernetes
=================

OpenStack-Helm provides charts that can be deployed on any Kubernetes cluster if it meets
the supported version requirements. However, deploying the Kubernetes cluster itself is beyond
the scope of OpenStack-Helm.

You can use any Kubernetes deployment tool for this purpose. In this guide, we detail how to set up
a Kubernetes cluster using Kubeadm and Ansible. While not production-ready, this cluster is ideal
as a starting point for lab or proof-of-concept environments.

All OpenStack projects test their code through an infrastructure managed by the CI
tool, Zuul, which executes Ansible playbooks on one or more test nodes. Therefore, we employ Ansible
roles/playbooks to install required packages, deploy Kubernetes, and then execute tests on it.

To establish a test environment, the Ansible role `deploy-env`_ is employed. This role establishes
a basic single/multi-node Kubernetes cluster, ensuring the functionality of commonly used
deployment configurations. The role is compatible with Ubuntu Focal and Ubuntu Jammy distributions.

Install Ansible
---------------

.. code-block:: bash

  pip install ansible

Prepare Ansible roles
---------------------

Here is the Ansible `playbook`_ that is used to deploy Kubernetes. The roles used in this playbook
are defined in different repositories. So, in addition to the OpenStack-Helm repositories
that we assume have already been cloned to the `~/osh` directory, you have to clone
yet another one

.. code-block:: bash

  cd ~/osh
  git clone https://opendev.org/zuul/zuul-jobs.git

Now let's set the environment variable ``ANSIBLE_ROLES_PATH`` which specifies
where Ansible will look up roles

.. code-block:: bash

  export ANSIBLE_ROLES_PATH=~/osh/openstack-helm-infra/roles:~/osh/zuul-jobs/roles

To avoid setting it every time you start a new terminal instance you can define this
in the Ansible configuration file. Please see the Ansible documentation.
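
For example, the equivalent setting can be written to an Ansible configuration file
like this (a sketch; the ``~/.ansible.cfg`` location is just one of the places Ansible
looks for its configuration):

.. code-block:: bash

  tee -a ~/.ansible.cfg <<EOF
  [defaults]
  roles_path = ~/osh/openstack-helm-infra/roles:~/osh/zuul-jobs/roles
  EOF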

Prepare Ansible inventory
-------------------------

We assume you have three nodes, usually VMs. Those nodes must be available via
SSH using public key authentication, and an SSH user (let's say `ubuntu`)
must have passwordless sudo on the nodes.

Create the Ansible inventory file using the following command

.. code-block:: bash

  cat > ~/osh/inventory.yaml <<EOF
  all:
    vars:
      kubectl:
        user: ubuntu
        group: ubuntu
      calico_version: "v3.25"
      crictl_version: "v1.26.1"
      helm_version: "v3.6.3"
      kube_version: "1.26.3-00"
      yq_version: "v4.6.0"
    children:
      primary:
        hosts:
          primary:
            ansible_port: 22
            ansible_host: 10.10.10.10
            ansible_user: ubuntu
            ansible_ssh_private_key_file: ~/.ssh/id_rsa
            ansible_ssh_extra_args: -o StrictHostKeyChecking=no
      nodes:
        hosts:
          node-1:
            ansible_port: 22
            ansible_host: 10.10.10.11
            ansible_user: ubuntu
            ansible_ssh_private_key_file: ~/.ssh/id_rsa
            ansible_ssh_extra_args: -o StrictHostKeyChecking=no
          node-2:
            ansible_port: 22
            ansible_host: 10.10.10.12
            ansible_user: ubuntu
            ansible_ssh_private_key_file: ~/.ssh/id_rsa
            ansible_ssh_extra_args: -o StrictHostKeyChecking=no
  EOF

If you have just one node then it must be `primary` in the file above.

.. note::
  If you would like to set up a Kubernetes cluster on the local host,
  configure the Ansible inventory to designate the `primary` node as the local host.
  For further guidance, please refer to the Ansible documentation.

Deploy Kubernetes
-----------------

.. code-block:: bash

  cd ~/osh
  ansible-playbook -i inventory.yaml ~/osh/openstack-helm/tools/gate/playbooks/deploy-env.yaml

The playbook only changes the state of the nodes listed in the Ansible inventory.

It installs the necessary packages and deploys and configures Containerd and Kubernetes. For
details please refer to the role `deploy-env`_ and the other roles (`ensure-python`_, `ensure-pip`_, `clear-firewall`_)
used in the playbook.

.. note::
  The role `deploy-env`_ by default will use the Google DNS servers 8.8.8.8 or 8.8.4.4
  and update `/etc/resolv.conf` on the nodes. These DNS nameserver entries can be changed by
  updating the file ``~/osh/openstack-helm-infra/roles/deploy-env/files/resolv.conf``.

  It also configures the internal Kubernetes DNS server (CoreDNS) to work as a recursive DNS server
  and adds its IP address (10.96.0.10 by default) to the `/etc/resolv.conf` file.

  Programs running on those nodes will be able to resolve names in the
  default Kubernetes domain `.svc.cluster.local`. E.g. if you run the OpenStack command line
  client on one of those nodes it will be able to access OpenStack API services via
  these names.

.. note::
  The role `deploy-env`_ installs and configures Kubectl and Helm on the `primary` node.
  You can log in to it via SSH, clone the `openstack-helm`_ and `openstack-helm-infra`_ repositories
  and then run the OpenStack-Helm deployment scripts which employ Kubectl and Helm to deploy
  OpenStack.
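
Once the playbook finishes, a quick way to confirm the cluster is functional is to run,
for example, the following commands on the `primary` node:

.. code-block:: bash

  kubectl get nodes -o wide
  kubectl get pods --all-namespaces
  helm version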

.. _deploy-env: https://opendev.org/openstack/openstack-helm-infra/src/branch/master/roles/deploy-env
.. _ensure-python: https://opendev.org/zuul/zuul-jobs/src/branch/master/roles/ensure-python
.. _ensure-pip: https://opendev.org/zuul/zuul-jobs/src/branch/master/roles/ensure-pip
.. _clear-firewall: https://opendev.org/zuul/zuul-jobs/src/branch/master/roles/clear-firewall
.. _openstack-helm: https://opendev.org/openstack/openstack-helm.git
.. _openstack-helm-infra: https://opendev.org/openstack/openstack-helm-infra.git
.. _playbook: https://opendev.org/openstack/openstack-helm/src/branch/master/tools/gate/playbooks/deploy-env.yaml

@@ -1,130 +0,0 @@ (file removed)

Deploy OpenStack
================

Now we are ready for the deployment of OpenStack components.
Some of them are mandatory while others are optional.

Keystone
--------

OpenStack Keystone is the identity and authentication service
for the OpenStack cloud computing platform. It serves as the
central point of authentication and authorization, managing user
identities, roles, and access to OpenStack resources. Keystone
ensures secure and controlled access to various OpenStack services,
making it an integral component for user management and security
in OpenStack deployments.

This is a ``mandatory`` component of any OpenStack cluster.

To deploy the Keystone service run the script `keystone.sh`_

.. code-block:: bash

  cd ~/osh/openstack-helm
  ./tools/deployment/component/keystone/keystone.sh


Heat
----

OpenStack Heat is an orchestration service that provides templates
and automation for deploying and managing cloud resources. It enables
users to define infrastructure as code, making it easier to create
and manage complex environments in OpenStack through templates and
automation scripts.

Here is the script `heat.sh`_ for the deployment of the Heat service.

.. code-block:: bash

  cd ~/osh/openstack-helm
  ./tools/deployment/component/heat/heat.sh

Glance
------

OpenStack Glance is the image service component of OpenStack.
It manages and catalogs virtual machine images, such as operating
system images and snapshots, making them available for use in
OpenStack compute instances.

This is a ``mandatory`` component.

The Glance deployment script is here: `glance.sh`_.

.. code-block:: bash

  cd ~/osh/openstack-helm
  ./tools/deployment/component/glance/glance.sh

Cinder
------

OpenStack Cinder is the block storage service component of the
OpenStack cloud computing platform. It manages and provides persistent
block storage to virtual machines, enabling users to attach and detach
persistent storage volumes to their VMs as needed.

To deploy the OpenStack Cinder service use the script `cinder.sh`_

.. code-block:: bash

  cd ~/osh/openstack-helm
  ./tools/deployment/component/cinder/cinder.sh

Placement, Nova, Neutron
------------------------

OpenStack Placement is a service that helps manage and allocate
resources in an OpenStack cloud environment. It helps Nova (compute)
find and allocate the right resources (CPU, memory, etc.)
for virtual machine instances.

OpenStack Nova is the compute service responsible for managing
and orchestrating virtual machines in an OpenStack cloud.
It provisions and schedules instances, handles their lifecycle,
and interacts with underlying hypervisors.

OpenStack Neutron is the networking service that provides network
connectivity and enables users to create and manage network resources
for their virtual machines and other services.

These three services are ``mandatory`` and together constitute the
so-called ``compute kit``.

To set up the compute service, the first step involves deploying the
hypervisor backend using the `libvirt.sh`_ script. By default, the
networking service is deployed with OpenvSwitch as the networking
backend, and the deployment script for OpenvSwitch can be found
here: `openvswitch.sh`_. Finally, the deployment script for
Placement, Nova and Neutron is here: `compute-kit.sh`_.

.. code-block:: bash

  cd ~/osh/openstack-helm
  ./tools/deployment/component/compute-kit/openvswitch.sh
  ./tools/deployment/component/compute-kit/libvirt.sh
  ./tools/deployment/component/compute-kit/compute-kit.sh
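
Once the compute kit scripts finish, you can verify that the services registered
correctly (this assumes the OpenStack client is configured as described in the
``setup_openstack_client`` section):

.. code-block:: bash

  openstack compute service list
  openstack network agent list
  openstack hypervisor list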

Horizon
-------

OpenStack Horizon is the web application that is intended to provide a graphical
user interface to OpenStack services.

To deploy OpenStack Horizon use the following script `horizon.sh`_

.. code-block:: bash

  cd ~/osh/openstack-helm
  ./tools/deployment/component/horizon/horizon.sh

.. _keystone.sh: https://opendev.org/openstack/openstack-helm/src/branch/master/tools/deployment/component/keystone/keystone.sh
.. _heat.sh: https://opendev.org/openstack/openstack-helm/src/branch/master/tools/deployment/component/heat/heat.sh
.. _glance.sh: https://opendev.org/openstack/openstack-helm/src/branch/master/tools/deployment/component/glance/glance.sh
.. _libvirt.sh: https://opendev.org/openstack/openstack-helm/src/branch/master/tools/deployment/component/compute-kit/libvirt.sh
.. _openvswitch.sh: https://opendev.org/openstack/openstack-helm/src/branch/master/tools/deployment/component/compute-kit/openvswitch.sh
.. _compute-kit.sh: https://opendev.org/openstack/openstack-helm/src/branch/master/tools/deployment/component/compute-kit/compute-kit.sh
.. _cinder.sh: https://opendev.org/openstack/openstack-helm/src/branch/master/tools/deployment/component/cinder/cinder.sh
.. _horizon.sh: https://opendev.org/openstack/openstack-helm/src/branch/master/tools/deployment/component/horizon/horizon.sh

@@ -1,54 +0,0 @@ (file removed)

Deploy OpenStack backend
========================

OpenStack is a cloud computing platform that consists of a variety of
services, and many of these services rely on backend services like RabbitMQ,
MariaDB, and Memcached for their proper functioning. These backend services
play crucial roles in OpenStack's architecture.

RabbitMQ
~~~~~~~~
RabbitMQ is a message broker that is often used in OpenStack to handle
messaging between different components and services. It helps in managing
communication and coordination between various parts of the OpenStack
infrastructure. Services like Nova (compute), Neutron (networking), and
Cinder (block storage) use RabbitMQ to exchange messages and ensure
proper orchestration.

MariaDB
~~~~~~~
Database services like MariaDB are used as the backend database for several
OpenStack services. These databases store critical information such as user
credentials, service configurations, and data related to instances, networks,
and volumes. Services like Keystone (identity), Nova, Glance (image), and
Cinder rely on MariaDB for data storage.

Memcached
~~~~~~~~~
Memcached is a distributed memory object caching system that is often used
in OpenStack to improve performance and reduce database load. OpenStack
services cache frequently accessed data in Memcached, which helps in faster
data retrieval and reduces the load on the database backend. Services like
Keystone and Nova can benefit from Memcached for caching.

Deployment
----------

The following scripts, `rabbitmq.sh`_, `mariadb.sh`_ and `memcached.sh`_, can be used to
deploy the backend services.

.. code-block:: bash

  cd ~/osh/openstack-helm
  ./tools/deployment/component/common/rabbitmq.sh
  ./tools/deployment/component/common/mariadb.sh
  ./tools/deployment/component/common/memcached.sh

.. note::
  These scripts use Helm charts from the `openstack-helm-infra`_ repository. We assume
  this repo is cloned to the `~/osh` directory. See this :doc:`section </install/before_deployment>`.

.. _rabbitmq.sh: https://opendev.org/openstack/openstack-helm/src/branch/master/tools/deployment/component/common/rabbitmq.sh
.. _mariadb.sh: https://opendev.org/openstack/openstack-helm/src/branch/master/tools/deployment/component/common/mariadb.sh
.. _memcached.sh: https://opendev.org/openstack/openstack-helm/src/branch/master/tools/deployment/component/common/memcached.sh
.. _openstack-helm-infra: https://opendev.org/openstack/openstack-helm-infra.git

@@ -1,17 +1,12 @@

 Installation
 ============

-Contents:
+Here are sections that describe how to install OpenStack using OpenStack-Helm:

 .. toctree::
    :maxdepth: 2

-   before_deployment
-   deploy_kubernetes
-   prepare_kubernetes
-   deploy_ceph
-   setup_openstack_client
-   deploy_ingress_controller
-   deploy_openstack_backend
-   deploy_openstack
+   before_starting
+   kubernetes
+   prerequisites
+   openstack

(image file: 108 KiB before, 108 KiB after)
doc/source/install/kubernetes.rst (new file, 182 lines)
@@ -0,0 +1,182 @@

Kubernetes
==========

OpenStack-Helm provides charts that can be deployed on any Kubernetes cluster if it meets
the version :doc:`requirements </readme>`. However, deploying the Kubernetes cluster itself is beyond
the scope of OpenStack-Helm.

You can use any Kubernetes deployment tool for this purpose. In this guide, we detail how to set up
a Kubernetes cluster using Kubeadm and Ansible. While not production-ready, this cluster is ideal
as a starting point for lab or proof-of-concept environments.

All OpenStack projects test their code through an infrastructure managed by the CI
tool, Zuul, which executes Ansible playbooks on one or more test nodes. Therefore, we employ Ansible
roles/playbooks to install required packages, deploy Kubernetes, and then execute tests on it.

To establish a test environment, the Ansible role `deploy-env`_ is employed. This role deploys
a basic single/multi-node Kubernetes cluster, used to prove the functionality of commonly used
deployment configurations. The role is compatible with Ubuntu Focal and Ubuntu Jammy distributions.

.. note::
  The role `deploy-env`_ is not idempotent and is assumed to be applied to a clean environment.

Clone roles git repositories
----------------------------

Before proceeding with the steps outlined in the following sections, it is
imperative that you clone the git repositories containing the required Ansible roles.

.. code-block:: bash

  mkdir ~/osh
  cd ~/osh
  git clone https://opendev.org/openstack/openstack-helm-infra.git
  git clone https://opendev.org/zuul/zuul-jobs.git

Install Ansible
---------------

.. code-block:: bash

  pip install ansible

Set roles lookup path
---------------------

Now let's set the environment variable ``ANSIBLE_ROLES_PATH`` which specifies
where Ansible will look up roles

.. code-block:: bash

  export ANSIBLE_ROLES_PATH=~/osh/openstack-helm-infra/roles:~/osh/zuul-jobs/roles

To avoid setting it every time you start a new terminal instance you can define this
in the Ansible configuration file. Please see the Ansible documentation.

Prepare inventory
-----------------

The example below assumes that there are four nodes which must be available via
SSH using public key authentication, and an SSH user (let's say ``ubuntu``)
must have passwordless sudo on the nodes.

.. code-block:: bash

  cat > ~/osh/inventory.yaml <<EOF
  ---
  all:
    vars:
      ansible_port: 22
      ansible_user: ubuntu
      ansible_ssh_private_key_file: /home/ubuntu/.ssh/id_rsa
      ansible_ssh_extra_args: -o StrictHostKeyChecking=no
      # The user and group that will be used to run Kubectl and Helm commands.
      kubectl:
        user: ubuntu
        group: ubuntu
      # The user and group that will be used to run Docker commands.
      docker_users:
        - ubuntu
      # The MetalLB controller will be installed on the Kubernetes cluster.
      metallb_setup: true
      # Loopback devices will be created on all cluster nodes which then can be used
      # to deploy a Ceph cluster which requires block devices to be provided.
      # Please use loopback devices only for testing purposes. They are not suitable
      # for production due to performance reasons.
      loopback_setup: true
      loopback_device: /dev/loop100
      loopback_image: /var/lib/openstack-helm/ceph-loop.img
      loopback_image_size: 12G
    children:
      # The primary node where Kubectl and Helm will be installed. If it is
      # the only node then it must be a member of the groups k8s_cluster and
      # k8s_control_plane. If there are more nodes then the wireguard tunnel
      # will be established between the primary node and the k8s_control_plane node.
      primary:
        hosts:
          primary:
            ansible_host: 10.10.10.10
      # The nodes where the Kubernetes components will be installed.
      k8s_cluster:
        hosts:
          node-1:
            ansible_host: 10.10.10.11
          node-2:
            ansible_host: 10.10.10.12
          node-3:
            ansible_host: 10.10.10.13
      # The control plane node where the Kubernetes control plane components will be installed.
      # It must be the only node in the group k8s_control_plane.
      k8s_control_plane:
        hosts:
          node-1:
            ansible_host: 10.10.10.11
      # These are Kubernetes worker nodes. There could be zero such nodes.
      # In this case the OpenStack workloads will be deployed on the control plane node.
      k8s_nodes:
        hosts:
          node-2:
            ansible_host: 10.10.10.12
          node-3:
            ansible_host: 10.10.10.13
  EOF

.. note::
  If you would like to set up a Kubernetes cluster on the local host,
  configure the Ansible inventory to designate the ``primary`` node as the local host.
  For further guidance, please refer to the Ansible documentation.
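
For instance, a minimal single-node inventory targeting the local host could look like
the following sketch (``ansible_connection: local`` avoids SSH altogether; the user names
are placeholders, and the single node is a member of the required groups as explained in
the comments above):

.. code-block:: bash

  cat > ~/osh/inventory.yaml <<EOF
  all:
    vars:
      kubectl:
        user: ubuntu
        group: ubuntu
    children:
      primary:
        hosts:
          primary:
            ansible_host: 127.0.0.1
            ansible_connection: local
      k8s_cluster:
        hosts:
          primary:
      k8s_control_plane:
        hosts:
          primary:
  EOF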

.. note::
  The full list of variables that you can define in the inventory file can be found in the
  file `deploy-env/defaults/main.yaml`_.

Prepare playbook
----------------

Create an Ansible playbook that will deploy the environment

.. code-block:: bash

  cat > ~/osh/deploy-env.yaml <<EOF
  ---
  - hosts: all
    become: true
    gather_facts: true
    roles:
      - ensure-python
      - ensure-pip
      - clear-firewall
      - deploy-env
  EOF

Run the playbook
----------------

.. code-block:: bash

  cd ~/osh
  ansible-playbook -i inventory.yaml deploy-env.yaml

The playbook only changes the state of the nodes listed in the inventory file.

It installs the necessary packages and deploys and configures Containerd and Kubernetes. For
details please refer to the role `deploy-env`_ and the other roles (`ensure-python`_,
`ensure-pip`_, `clear-firewall`_) used in the playbook.

.. note::
  The role `deploy-env`_ configures cluster nodes to use Google DNS servers (8.8.8.8).

  By default, it also configures the internal Kubernetes DNS server (CoreDNS) to work
  as a recursive DNS server and adds its IP address (10.96.0.10 by default) to the
  ``/etc/resolv.conf`` file.

  Processes running on the cluster nodes will be able to resolve internal
  Kubernetes domain names ``*.svc.cluster.local``.
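
For example, you can check name resolution from one of the cluster nodes:

.. code-block:: bash

  # Should resolve via CoreDNS (10.96.0.10 by default)
  getent hosts kubernetes.default.svc.cluster.local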

.. _deploy-env: https://opendev.org/openstack/openstack-helm-infra/src/branch/master/roles/deploy-env
.. _deploy-env/defaults/main.yaml: https://opendev.org/openstack/openstack-helm-infra/src/branch/master/roles/deploy-env/defaults/main.yaml
.. _zuul-jobs: https://opendev.org/zuul/zuul-jobs.git
.. _ensure-python: https://opendev.org/zuul/zuul-jobs/src/branch/master/roles/ensure-python
.. _ensure-pip: https://opendev.org/zuul/zuul-jobs/src/branch/master/roles/ensure-pip
.. _clear-firewall: https://opendev.org/zuul/zuul-jobs/src/branch/master/roles/clear-firewall
.. _openstack-helm-infra: https://opendev.org/openstack/openstack-helm-infra.git

doc/source/install/openstack.rst (new file, 415 lines)
@@ -0,0 +1,415 @@

Deploy OpenStack
================

Check list before deployment
----------------------------

At this point we assume all the prerequisites listed below are met
(a quick way to verify them is shown right after the list):

- The Kubernetes cluster is up and running.
- The `kubectl`_ and `helm`_ command line tools are installed and
  configured to access the cluster.
- The OpenStack-Helm repositories are enabled, the OpenStack-Helm
  plugin is installed and the necessary environment variables are set.
- The ``openstack`` namespace is created.
- The ingress controller is deployed in the ``openstack`` namespace.
- MetalLB is deployed and configured. A service of type
  ``LoadBalancer`` is created and DNS is configured to resolve the
  OpenStack endpoint names to the IP address of the service.
- Ceph is deployed and enabled for use by OpenStack-Helm.

.. _kubectl: https://kubernetes.io/docs/tasks/tools/install-kubectl/
.. _helm: https://helm.sh/docs/intro/install/
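
A quick, illustrative way to check most of these prerequisites from the command line:

.. code-block:: bash

  kubectl get nodes
  kubectl get namespace openstack
  kubectl -n openstack get pods        # ingress controller pods should be Running
  kubectl get storageclass             # a storage class for Ceph-backed volumes should exist
  helm repo list                       # openstack-helm and openstack-helm-infra
  helm plugin list                     # the OpenStack-Helm plugin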

Environment variables
---------------------

First, let's set the environment variables that are used in the subsequent sections:

.. code-block:: bash

  export OPENSTACK_RELEASE=2024.1
  # Features enabled for the deployment. This is used to look up values overrides.
  export FEATURES="${OPENSTACK_RELEASE} ubuntu_jammy"
  # Directory where values overrides are looked up or downloaded to.
  export OVERRIDES_DIR=$(pwd)/overrides

Get values overrides
--------------------

OpenStack-Helm provides values overrides for predefined feature sets and various
OpenStack/platform versions. The overrides are stored in the OpenStack-Helm
git repositories, and the OpenStack-Helm plugin provides a command to look them up
locally and, optionally, download them if they are not found.

Please read the help:

.. code-block:: bash

  helm osh get-values-overrides --help

For example, if you pass the feature set ``2024.1 ubuntu_jammy`` it will try to
look up the following files:

.. code-block:: bash

  2024.1.yaml
  ubuntu_jammy.yaml
  2024.1-ubuntu_jammy.yaml

Let's download the values overrides for the feature set defined above:

.. code-block:: bash

  INFRA_OVERRIDES_URL=https://opendev.org/openstack/openstack-helm-infra/raw/branch/master
  for chart in rabbitmq mariadb memcached openvswitch libvirt; do
    helm osh get-values-overrides -d -u ${INFRA_OVERRIDES_URL} -p ${OVERRIDES_DIR} -c ${chart} ${FEATURES}
  done

  OVERRIDES_URL=https://opendev.org/openstack/openstack-helm/raw/branch/master
  for chart in keystone heat glance cinder placement nova neutron horizon; do
    helm osh get-values-overrides -d -u ${OVERRIDES_URL} -p ${OVERRIDES_DIR} -c ${chart} ${FEATURES}
  done

Now you can inspect the downloaded files in the ``${OVERRIDES_DIR}`` directory and
adjust them if needed.
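
For example, to see which override files were fetched for each chart (the plugin stores
them under ``<chart>/values_overrides/`` inside ``${OVERRIDES_DIR}``, the same layout used
later in this guide):

.. code-block:: bash

  ls ${OVERRIDES_DIR}/*/values_overrides/*.yaml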

OpenStack backend
-----------------

OpenStack is a cloud computing platform that consists of a variety of
services, and many of these services rely on backend services like RabbitMQ,
MariaDB, and Memcached for their proper functioning. These backend services
play a crucial role in the OpenStack architecture.

RabbitMQ
~~~~~~~~
RabbitMQ is a message broker that is often used in OpenStack to handle
messaging between different components and services. It helps in managing
communication and coordination between various parts of the OpenStack
infrastructure. Services like Nova (compute), Neutron (networking), and
Cinder (block storage) use RabbitMQ to exchange messages and ensure
proper orchestration.

Use the following commands to deploy the RabbitMQ service:

.. code-block:: bash

  helm upgrade --install rabbitmq openstack-helm-infra/rabbitmq \
      --namespace=openstack \
      --set pod.replicas.server=1 \
      --timeout=600s \
      $(helm osh get-values-overrides -p ${OVERRIDES_DIR} -c rabbitmq ${FEATURES})

  helm osh wait-for-pods openstack

MariaDB
~~~~~~~
Database services like MariaDB are used as a backend database for the majority of
OpenStack projects. These databases store critical information such as user
credentials, service configurations, and data related to instances, networks,
and volumes. Services like Keystone (identity), Nova, Glance (image), and
Cinder rely on MariaDB for data storage.

.. code-block:: bash

  helm upgrade --install mariadb openstack-helm-infra/mariadb \
      --namespace=openstack \
      --set pod.replicas.server=1 \
      $(helm osh get-values-overrides -p ${OVERRIDES_DIR} -c mariadb ${FEATURES})

  helm osh wait-for-pods openstack

Memcached
~~~~~~~~~
Memcached is a distributed memory object caching system that is often used
in OpenStack to improve performance. OpenStack services cache frequently
accessed data in Memcached, which helps in faster
data retrieval and reduces the load on the database backend.

.. code-block:: bash

  helm upgrade --install memcached openstack-helm-infra/memcached \
      --namespace=openstack \
      $(helm osh get-values-overrides -p ${OVERRIDES_DIR} -c memcached ${FEATURES})

  helm osh wait-for-pods openstack
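
At this point you can check, for example, that the backend releases are deployed and
their pods are running:

.. code-block:: bash

  helm list -n openstack
  kubectl -n openstack get pods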

OpenStack
---------

Now we are ready for the deployment of OpenStack components.
Some of them are mandatory while others are optional.

Keystone
~~~~~~~~

OpenStack Keystone is the identity and authentication service
for the OpenStack cloud computing platform. It serves as the
central point of authentication and authorization, managing user
identities, roles, and access to OpenStack resources. Keystone
ensures secure and controlled access to various OpenStack services,
making it an integral component for user management and security
in OpenStack deployments.

This is a ``mandatory`` component of any OpenStack cluster.

To deploy the Keystone service run the following:

.. code-block:: bash

  helm upgrade --install keystone openstack-helm/keystone \
      --namespace=openstack \
      $(helm osh get-values-overrides -p ${OVERRIDES_DIR} -c keystone ${FEATURES})

  helm osh wait-for-pods openstack

Heat
~~~~

OpenStack Heat is an orchestration service that provides templates
and automation for deploying and managing cloud resources. It enables
users to define infrastructure as code, making it easier to create
and manage complex environments in OpenStack through templates and
automation scripts.

Here are the commands for the deployment of the Heat service:

.. code-block:: bash

  helm upgrade --install heat openstack-helm/heat \
      --namespace=openstack \
      $(helm osh get-values-overrides -p ${OVERRIDES_DIR} -c heat ${FEATURES})

  helm osh wait-for-pods openstack

Glance
~~~~~~

OpenStack Glance is the image service component of OpenStack.
It manages and catalogs virtual machine images, such as operating
system images and snapshots, making them available for use in
OpenStack compute instances.

This is a ``mandatory`` component.

The Glance deployment commands are as follows:

.. code-block:: bash

  tee ${OVERRIDES_DIR}/glance/values_overrides/glance_pvc_storage.yaml <<EOF
  storage: pvc
  volume:
    class_name: general
    size: 10Gi
  EOF

  helm upgrade --install glance openstack-helm/glance \
      --namespace=openstack \
      $(helm osh get-values-overrides -p ${OVERRIDES_DIR} -c glance glance_pvc_storage ${FEATURES})

  helm osh wait-for-pods openstack

.. note::

  In the above we prepare a values override file for the ``glance`` chart which
  makes it use a Persistent Volume Claim (PVC) for storing images. We put
  the values in the ``${OVERRIDES_DIR}/glance/values_overrides/glance_pvc_storage.yaml``
  file so the OpenStack-Helm plugin can pick it up if we pass the feature
  ``glance_pvc_storage`` to it.

Cinder
~~~~~~

OpenStack Cinder is the block storage service component of the
OpenStack cloud computing platform. It manages and provides persistent
block storage to virtual machines, enabling users to attach and detach
persistent storage volumes to their VMs as needed.

To deploy OpenStack Cinder use the following:

.. code-block:: bash

  helm upgrade --install cinder openstack-helm/cinder \
      --namespace=openstack \
      --timeout=600s \
      $(helm osh get-values-overrides -p ${OVERRIDES_DIR} -c cinder ${FEATURES})

  helm osh wait-for-pods openstack

Compute kit backend: Openvswitch and Libvirt
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

OpenStack-Helm recommends using OpenvSwitch as the networking backend
for the OpenStack cloud. OpenvSwitch is a software-based, open-source
networking solution that provides virtual switching capabilities.

To deploy the OpenvSwitch service use the following:

.. code-block:: bash

  helm upgrade --install openvswitch openstack-helm-infra/openvswitch \
      --namespace=openstack \
      $(helm osh get-values-overrides -p ${OVERRIDES_DIR} -c openvswitch ${FEATURES})

  helm osh wait-for-pods openstack

Libvirt is a toolkit that provides a common API for managing virtual
machines. It is used in OpenStack to interact with hypervisors like
KVM, QEMU, and Xen.

Let's deploy the Libvirt service using the following command:

.. code-block:: bash

  helm upgrade --install libvirt openstack-helm-infra/libvirt \
      --namespace=openstack \
      --set conf.ceph.enabled=true \
      $(helm osh get-values-overrides -p ${OVERRIDES_DIR} -c libvirt ${FEATURES})

.. note::
  Here we don't need to run ``helm osh wait-for-pods`` because the Libvirt pods
  depend on Neutron OpenvSwitch agent pods which are not yet deployed.

Compute kit: Placement, Nova, Neutron
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

OpenStack Placement is a service that helps manage and allocate
resources in an OpenStack cloud environment. It helps Nova (compute)
find and allocate the right resources (CPU, memory, etc.)
for virtual machine instances.

.. code-block:: bash

  helm upgrade --install placement openstack-helm/placement \
      --namespace=openstack \
      $(helm osh get-values-overrides -p ${OVERRIDES_DIR} -c placement ${FEATURES})

OpenStack Nova is the compute service responsible for managing
and orchestrating virtual machines in an OpenStack cloud.
It provisions and schedules instances, handles their lifecycle,
and interacts with underlying hypervisors.

.. code-block:: bash

  helm upgrade --install nova openstack-helm/nova \
      --namespace=openstack \
      --set bootstrap.wait_for_computes.enabled=true \
      --set conf.ceph.enabled=true \
      $(helm osh get-values-overrides -p ${OVERRIDES_DIR} -c nova ${FEATURES})

OpenStack Neutron is the networking service that provides network
connectivity and enables users to create and manage network resources
for their virtual machines and other services.

.. code-block:: bash

  PROVIDER_INTERFACE=<provider_interface_name>
  tee ${OVERRIDES_DIR}/neutron/values_overrides/neutron_simple.yaml << EOF
  conf:
    neutron:
      DEFAULT:
        l3_ha: False
        max_l3_agents_per_router: 1
    # <provider_interface_name> will be attached to the br-ex bridge.
    # The IP assigned to the interface will be moved to the bridge.
    auto_bridge_add:
      br-ex: ${PROVIDER_INTERFACE}
    plugins:
      ml2_conf:
        ml2_type_flat:
          flat_networks: public
      openvswitch_agent:
        ovs:
          bridge_mappings: public:br-ex
  EOF

  helm upgrade --install neutron openstack-helm/neutron \
      --namespace=openstack \
      $(helm osh get-values-overrides -p ${OVERRIDES_DIR} -c neutron neutron_simple ${FEATURES})

  helm osh wait-for-pods openstack

Horizon
~~~~~~~

OpenStack Horizon is the web application that is intended to provide a graphical
user interface to OpenStack services.

Let's deploy it:

.. code-block:: bash

  helm upgrade --install horizon openstack-helm/horizon \
      --namespace=openstack \
      $(helm osh get-values-overrides -p ${OVERRIDES_DIR} -c horizon ${FEATURES})

  helm osh wait-for-pods openstack

OpenStack client
----------------

Installing the OpenStack client on the developer's machine is a vital step.
The easiest way to install the OpenStack client is to create a Python
virtual environment and install the client using ``pip``.

.. code-block:: bash

  python3 -m venv ~/openstack-client
  source ~/openstack-client/bin/activate
  pip install python-openstackclient

Now let's prepare the OpenStack client configuration file:

.. code-block:: bash

  mkdir -p ~/.config/openstack
  tee ~/.config/openstack/clouds.yaml << EOF
  clouds:
    openstack_helm:
      region_name: RegionOne
      identity_api_version: 3
      auth:
        username: 'admin'
        password: 'password'
        project_name: 'admin'
        project_domain_name: 'default'
        user_domain_name: 'default'
        auth_url: 'http://keystone.openstack.svc.cluster.local/v3'
  EOF

That is it! Now you can use the OpenStack client. Try to run this:

.. code-block:: bash

  openstack --os-cloud openstack_helm endpoint list

.. note::

  In some cases it is more convenient to use the OpenStack client
  inside a Docker container. OpenStack-Helm provides the
  `openstackhelm/openstack-client`_ image. The below is an example
  of how to use it.

  .. code-block:: bash

    docker run -it --rm --network host \
        -v ~/.config/openstack/clouds.yaml:/etc/openstack/clouds.yaml \
        -e OS_CLOUD=openstack_helm \
        docker.io/openstackhelm/openstack-client:${OPENSTACK_RELEASE} \
        openstack endpoint list

  Remember that the container file system is ephemeral and is destroyed
  when you stop the container. So if you would like to use the
  OpenStack client capabilities interfacing with the file system, then you have to mount
  a directory from the host file system where the necessary files are located.
  For example, this is useful when you create a key pair and save the private key in a file
  which is then used for SSH access to VMs. Or it could be Heat templates
  which you prepare in advance and then use with the OpenStack client.

  For convenience, you can create an executable entry point that runs the
  OpenStack client in a Docker container. See for example `setup-client.sh`_.
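
For illustration only, such a wrapper could look like the sketch below. The
``~/bin/osh-client`` path, the mounted work directory and the hardcoded image tag are
arbitrary choices for this example, not something provided by OpenStack-Helm:

.. code-block:: bash

  mkdir -p ~/bin
  tee ~/bin/osh-client <<'EOF'
  #!/bin/bash
  # Run the containerized OpenStack client with the current directory mounted at /work
  exec docker run -it --rm --network host \
      -v ~/.config/openstack/clouds.yaml:/etc/openstack/clouds.yaml:ro \
      -v "$(pwd)":/work -w /work \
      -e OS_CLOUD=openstack_helm \
      docker.io/openstackhelm/openstack-client:2024.1 \
      openstack "$@"
  EOF
  chmod +x ~/bin/osh-client

  # Example usage (assuming ~/bin is in PATH)
  osh-client server list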

.. _setup-client.sh: https://opendev.org/openstack/openstack-helm/src/branch/master/tools/deployment/common/setup-client.sh
.. _openstackhelm/openstack-client: https://hub.docker.com/r/openstackhelm/openstack-client/tags?page=&page_size=&ordering=&name=

@@ -1,28 +0,0 @@ (file removed)

Prepare Kubernetes
==================

In this section we assume you have a working Kubernetes cluster and
Kubectl and Helm properly configured to interact with the cluster.

Before deploying OpenStack components using OpenStack-Helm you have to set
labels on the Kubernetes worker nodes which are used as node selectors.

Also, the necessary namespaces must be created.

You can use the `prepare-k8s.sh`_ script as an example of how to prepare
the Kubernetes cluster for OpenStack deployment. The script is assumed to be run
from the openstack-helm repository

.. code-block:: bash

  cd ~/osh/openstack-helm
  ./tools/deployment/common/prepare-k8s.sh


.. note::
  Pay attention that in the above script we set labels on all Kubernetes nodes, including
  the Kubernetes control plane nodes, which are usually not aimed to run workload pods
  (OpenStack in our case). So you have to either untaint the control plane nodes or modify the
  `prepare-k8s.sh`_ script so that it sets labels only on the worker nodes.
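
For reference, untainting the control plane nodes (so that workload pods can be scheduled
on them) can be done as follows; note that older clusters may use the
``node-role.kubernetes.io/master`` taint key instead:

.. code-block:: bash

  kubectl taint nodes --all node-role.kubernetes.io/control-plane:NoSchedule-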

.. _prepare-k8s.sh: https://opendev.org/openstack/openstack-helm/src/branch/master/tools/deployment/common/prepare-k8s.sh

doc/source/install/prerequisites.rst (new file, 328 lines)
@@ -0,0 +1,328 @@

Kubernetes prerequisites
========================

Ingress controller
------------------

An ingress controller is essential when deploying OpenStack on Kubernetes
to ensure proper external access for the OpenStack services.

We recommend using `ingress-nginx`_ because it is simple and provides
all the necessary features. It utilizes Nginx as a reverse proxy backend.
Here is how to deploy it.

First, let's create a namespace for the OpenStack workloads. The ingress
controller must be deployed in the same namespace because OpenStack-Helm charts
create service resources pointing to the ingress controller pods which
in turn redirect traffic to particular OpenStack API pods.

.. code-block:: bash

  tee > /tmp/openstack_namespace.yaml <<EOF
  apiVersion: v1
  kind: Namespace
  metadata:
    name: openstack
  EOF
  kubectl apply -f /tmp/openstack_namespace.yaml

Next, deploy the ingress controller in the ``openstack`` namespace:

.. code-block:: bash

  helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
  helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
      --version="4.8.3" \
      --namespace=openstack \
      --set controller.kind=Deployment \
      --set controller.admissionWebhooks.enabled="false" \
      --set controller.scope.enabled="true" \
      --set controller.service.enabled="false" \
      --set controller.ingressClassResource.name=nginx \
      --set controller.ingressClassResource.controllerValue="k8s.io/ingress-nginx" \
      --set controller.ingressClassResource.default="false" \
      --set controller.ingressClass=nginx \
      --set controller.labels.app=ingress-api
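
To verify the result, you can check, for example, that the controller pods carry the
``app: ingress-api`` label and that the ``nginx`` IngressClass exists:

.. code-block:: bash

  kubectl -n openstack get pods -l app=ingress-api
  kubectl get ingressclass nginx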

You can deploy any other ingress controller that suits your needs best.
See for example the list of available `ingress controllers`_.
Ensure that the ingress controller pods are deployed with the ``app: ingress-api``
label which is used by OpenStack-Helm as a selector for the Kubernetes
service resources.

For example, the OpenStack-Helm ``keystone`` chart by default creates a service
that redirects traffic to the ingress controller pods selected using the
``app: ingress-api`` label. Then it also creates an ``Ingress`` resource which
the ingress controller then uses to configure its reverse proxy
backend (Nginx) which eventually routes the traffic to the Keystone API
service which works as an endpoint for Keystone API pods.

.. image:: ingress.jpg
    :width: 100%
    :align: center
    :alt: ingress scheme

.. note::
    For exposing the OpenStack services to the external world, we can create a
    service of type ``LoadBalancer`` or ``NodePort`` with the selector pointing to
    the ingress controller pods.

.. _ingress-nginx: https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/README.md
.. _ingress controllers: https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/

MetalLB
-------

MetalLB is a load-balancer for bare metal Kubernetes clusters leveraging
L2/L3 protocols. This is a popular way of exposing web
applications running in Kubernetes to the external world.

The following commands can be used to deploy MetalLB:

.. code-block:: bash

  tee > /tmp/metallb_system_namespace.yaml <<EOF
  apiVersion: v1
  kind: Namespace
  metadata:
    name: metallb-system
  EOF
  kubectl apply -f /tmp/metallb_system_namespace.yaml

  helm repo add metallb https://metallb.github.io/metallb
  helm install metallb metallb/metallb -n metallb-system

Now it is necessary to configure the MetalLB IP address pool and the IP address
advertisement. The MetalLB custom resources are used for this:

.. code-block:: bash

  tee > /tmp/metallb_ipaddresspool.yaml <<EOF
  ---
  apiVersion: metallb.io/v1beta1
  kind: IPAddressPool
  metadata:
    name: public
    namespace: metallb-system
  spec:
    addresses:
      - "172.24.128.0/24"
  EOF

  kubectl apply -f /tmp/metallb_ipaddresspool.yaml

  tee > /tmp/metallb_l2advertisement.yaml <<EOF
  ---
  apiVersion: metallb.io/v1beta1
  kind: L2Advertisement
  metadata:
    name: public
    namespace: metallb-system
  spec:
    ipAddressPools:
      - public
  EOF

  kubectl apply -f /tmp/metallb_l2advertisement.yaml

Next, let's create a service of type ``LoadBalancer`` which will be the
public endpoint for all OpenStack services that we will later deploy.
MetalLB will assign an IP address to it (we can assign a dedicated
IP using annotations):

.. code-block:: bash

  tee > /tmp/openstack_endpoint_service.yaml <<EOF
  ---
  kind: Service
  apiVersion: v1
  metadata:
    name: public-openstack
    namespace: openstack
    annotations:
      metallb.universe.tf/loadBalancerIPs: "172.24.128.100"
  spec:
    externalTrafficPolicy: Cluster
    type: LoadBalancer
    selector:
      app: ingress-api
    ports:
      - name: http
        port: 80
      - name: https
        port: 443
  EOF

  kubectl apply -f /tmp/openstack_endpoint_service.yaml

This service will redirect the traffic to the ingress controller pods
(see the ``app: ingress-api`` selector). OpenStack-Helm charts create
``Ingress`` resources which are used by the ingress controller to configure the
reverse proxy backend so that the traffic eventually goes to particular
OpenStack API pods.

By default, the ``Ingress`` objects will only contain rules for the
``openstack.svc.cluster.local`` DNS domain. This is the internal Kubernetes domain
and it is not supposed to be used outside the cluster. However, we can use
Dnsmasq to resolve the ``*.openstack.svc.cluster.local`` names to the
``LoadBalancer`` service IP address.

The following command will start the Dnsmasq container with the necessary configuration:

.. code-block:: bash

  docker run -d --name dnsmasq --restart always \
      --cap-add=NET_ADMIN \
      --network=host \
      --entrypoint dnsmasq \
      docker.io/openstackhelm/neutron:2024.1-ubuntu_jammy \
      --keep-in-foreground \
      --no-hosts \
      --bind-interfaces \
      --address="/openstack.svc.cluster.local/172.24.128.100" \
      --listen-address="172.17.0.1" \
      --no-resolv \
      --server=8.8.8.8

The ``--network=host`` option is used to start the Dnsmasq container in the
host network namespace and the ``--listen-address`` option is used to bind
Dnsmasq to a specific IP. Please use the configuration that suits your environment.

Now we can add the Dnsmasq IP to the ``/etc/resolv.conf`` file

.. code-block:: bash

  echo "nameserver 172.17.0.1" > /etc/resolv.conf

or alternatively the ``resolvectl`` command can be used to configure systemd-resolved.
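
For example, with systemd-resolved you could point an interface at the Dnsmasq instance
(``eth0`` below is a placeholder for the interface name in your environment):

.. code-block:: bash

  sudo resolvectl dns eth0 172.17.0.1
  sudo resolvectl domain eth0 '~openstack.svc.cluster.local'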
|
||||
|
||||

.. note::

  In production environments you will probably choose to use a different DNS
  domain for public OpenStack endpoints. This is easy to achieve by setting
  the necessary chart values. All OpenStack-Helm chart values have the
  ``endpoints`` section where you can specify the ``host_fqdn_override``.
  In this case a chart will create additional ``Ingress`` resources to
  handle the external domain name and also the Keystone endpoint catalog
  will be updated.

Here is an example of how to set the ``host_fqdn_override`` for the Keystone chart:

.. code-block:: yaml

  endpoints:
    identity:
      host_fqdn_override:
        public: "keystone.example.com"
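
Such overrides are passed to the chart at deployment time like any other
values, for instance (the release name and values file path here are only an
illustration):

.. code-block:: bash

  helm upgrade --install keystone openstack-helm/keystone \
      --namespace=openstack \
      --values=/tmp/keystone_overrides.yaml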

Ceph
----

Ceph is a highly scalable and fault-tolerant distributed storage
system. It offers object storage, block storage, and
file storage capabilities, making it a versatile solution for
various storage needs.

Kubernetes CSI (Container Storage Interface) allows storage providers
like Ceph to implement their drivers, so that Kubernetes can
use the CSI driver to provision and manage volumes which can be
used by stateful applications deployed on top of Kubernetes
to store their data. In the context of OpenStack running in Kubernetes,
Ceph is used as a storage backend for services like MariaDB, RabbitMQ and
other services that require persistent storage. By default, OpenStack-Helm
stateful sets expect to find a storage class named **general**.
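
Once a storage backend is deployed (see below), you can confirm that the
storage class expected by the charts exists (a simple check; ``general`` is
the default name mentioned above):

.. code-block:: bash

  kubectl get storageclass general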

At the same time, Ceph provides the RBD API, which applications
can utilize directly to create and mount block devices distributed across
the Ceph cluster. For example, OpenStack Cinder utilizes this Ceph
capability to offer persistent block devices to virtual machines
managed by OpenStack Nova.

The recommended way to manage Ceph on top of Kubernetes is by means
of the `Rook`_ operator. The Rook project provides a Helm chart
to deploy the Rook operator which extends the Kubernetes API,
adding CRDs that enable managing Ceph clusters via Kubernetes custom objects.
There is also another Helm chart that facilitates deploying Ceph clusters
using Rook custom resources.

For details please refer to the `Rook`_ documentation and the `charts`_.
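
As a rough illustration based on the upstream Rook `charts`_ (chart values are
omitted here; refer to the Rook documentation for the options used in a real
deployment), the operator and a cluster could be installed like this:

.. code-block:: bash

  helm repo add rook-release https://charts.rook.io/release
  helm install --create-namespace --namespace rook-ceph rook-ceph rook-release/rook-ceph
  helm install --namespace rook-ceph rook-ceph-cluster rook-release/rook-ceph-cluster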

.. note::

  The following script `ceph-rook.sh`_ (recommended for testing only) can be used as
  an example of how to deploy the Rook Ceph operator and a Ceph cluster using the
  Rook `charts`_. Please note that the script places Ceph OSDs on loopback devices
  which is **not recommended** for production. The loopback devices must exist before
  using this script.
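
If you want to experiment with that script, a loopback device can be prepared
roughly as follows (an illustration only; the backing file path and size are
arbitrary and the script may have its own expectations about device naming):

.. code-block:: bash

  # Create a sparse 10G backing file and attach it to the first free loop device
  truncate -s 10G /var/lib/ceph-osd.img
  sudo losetup -f /var/lib/ceph-osd.img
  sudo losetup -a   # shows which /dev/loopN device was allocated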

Once the Ceph cluster is deployed, the next step is to enable using it
for services deployed by OpenStack-Helm charts. The ``ceph-adapter-rook`` chart
provides the necessary functionality to do this. The chart will
prepare Kubernetes secret resources containing Ceph client keys/configs
that are later used to interface with the Ceph cluster.

Here we assume the Ceph cluster is deployed in the ``ceph`` namespace.

The procedure consists of two steps: 1) gather the necessary entities from the
Ceph cluster and 2) copy them to the ``openstack`` namespace:

.. code-block:: bash

  tee > /tmp/ceph-adapter-rook-ceph.yaml <<EOF
  manifests:
    configmap_bin: true
    configmap_templates: true
    configmap_etc: false
    job_storage_admin_keys: true
    job_namespace_client_key: false
    job_namespace_client_ceph_config: false
    service_mon_discovery: true
  EOF

  helm upgrade --install ceph-adapter-rook openstack-helm-infra/ceph-adapter-rook \
      --namespace=ceph \
      --values=/tmp/ceph-adapter-rook-ceph.yaml

  helm osh wait-for-pods ceph

  tee > /tmp/ceph-adapter-rook-openstack.yaml <<EOF
  manifests:
    configmap_bin: true
    configmap_templates: false
    configmap_etc: true
    job_storage_admin_keys: false
    job_namespace_client_key: true
    job_namespace_client_ceph_config: true
    service_mon_discovery: false
  EOF

  helm upgrade --install ceph-adapter-rook openstack-helm-infra/ceph-adapter-rook \
      --namespace=openstack \
      --values=/tmp/ceph-adapter-rook-openstack.yaml

  helm osh wait-for-pods openstack
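
As a quick sanity check you can list the Ceph-related objects created in the
``openstack`` namespace (illustrative only; the exact resource names depend on
the chart version):

.. code-block:: bash

  kubectl -n openstack get secrets,configmaps | grep -i ceph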

.. _Rook: https://rook.io/
.. _charts: https://rook.io/docs/rook/latest-release/Helm-Charts/helm-charts/
.. _ceph-rook.sh: https://opendev.org/openstack/openstack-helm-infra/src/branch/master/tools/deployment/ceph/ceph-rook.sh

Node labels
-----------

OpenStack-Helm charts rely on Kubernetes node labels to determine which nodes
are suitable for running specific OpenStack components.

The following commands set labels on all the Kubernetes nodes in the cluster,
including control plane nodes, but you can choose to label only a subset of nodes
where you want to run OpenStack:

.. code-block:: bash

  kubectl label --overwrite nodes --all openstack-control-plane=enabled
  kubectl label --overwrite nodes --all openstack-compute-node=enabled
  kubectl label --overwrite nodes --all openvswitch=enabled
  kubectl label --overwrite nodes --all linuxbridge=enabled
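
You can then verify which nodes carry the labels, for example:

.. code-block:: bash

  kubectl get nodes -l openstack-control-plane=enabled
  kubectl get nodes -l openstack-compute-node=enabled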

.. note::

  The control plane nodes are tainted by default to prevent scheduling
  of pods on them. You can untaint the control plane nodes using the following command:

  .. code-block:: bash

    kubectl taint nodes -l 'node-role.kubernetes.io/control-plane' node-role.kubernetes.io/control-plane-

Setup OpenStack client
======================

The OpenStack client software is a crucial tool for interacting
with OpenStack services. In certain OpenStack-Helm deployment
scripts, the OpenStack client software is utilized to conduct
essential checks during deployment. Therefore, installing the
OpenStack client on the developer's machine is a vital step.

The script `setup-client.sh`_ can be used to set up the OpenStack
client.

.. code-block:: bash

  cd ~/osh/openstack-helm
  ./tools/deployment/common/setup-client.sh

Please keep in mind that the above script configures the OpenStack
client so it uses internal Kubernetes FQDNs like
`keystone.openstack.svc.cluster.local`. In order to be able to resolve these
internal names you have to configure the Kubernetes authoritative DNS server
(CoreDNS) to work as a recursive resolver and then add its IP (`10.96.0.10` by default)
to `/etc/resolv.conf`. This is only going to work when you access
OpenStack services from one of the Kubernetes nodes, because IPs from the
Kubernetes service network are routed only between Kubernetes nodes.
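
For example, on one of the Kubernetes nodes (a sketch that assumes the default
CoreDNS service IP mentioned above; adjust it to your cluster):

.. code-block:: bash

  echo "nameserver 10.96.0.10" > /etc/resolv.conf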

If you wish to access OpenStack services from outside the Kubernetes cluster,
you need to expose the OpenStack Ingress controller using an IP address accessible
from outside the Kubernetes cluster, typically achieved through solutions like
`MetalLB`_ or similar tools. In this scenario, you should also ensure that you
have set up proper FQDN resolution to map to the external IP address and
create the necessary Ingress objects for the associated FQDN.

It is also important to note that the above script does not actually install
the OpenStack client package on the host but instead it creates a bash
script `/usr/local/bin/openstack` that runs the OpenStack client in a
Docker container. If you need to pass extra command line parameters to the
`docker run` command, use the environment variable
`OPENSTACK_CLIENT_CONTAINER_EXTRA_ARGS`. For example, if you need to mount a
directory from the host file system, you can do the following:

.. code-block:: bash

  export OPENSTACK_CLIENT_CONTAINER_EXTRA_ARGS="-v /data:/data"
  /usr/local/bin/openstack <subcommand> <options>

Remember that the container file system is ephemeral and is destroyed
when you stop the container. So if you would like to use the
OpenStack client capabilities that interface with the file system, then you have to mount
a directory from the host file system where you will read/write the necessary files.
For example, this is useful when you create a key pair and save the private key in a file
which is then used for ssh access to VMs. Or it could be Heat templates
which you prepare in advance and then use with the OpenStack client.
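
For instance, a key pair could be created and its private key kept on the host
via the mounted directory (the key name and paths here are only an example):

.. code-block:: bash

  export OPENSTACK_CLIENT_CONTAINER_EXTRA_ARGS="-v /data:/data"
  openstack keypair create mykey > /data/mykey.pem
  chmod 600 /data/mykey.pem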

.. _setup-client.sh: https://opendev.org/openstack/openstack-helm/src/branch/master/tools/deployment/common/setup-client.sh
.. _MetalLB: https://metallb.universe.tf