
Multinode

Overview

In order to drive towards a production-ready OpenStack solution, our goal is to provide containerized, yet stable persistent volumes that Kubernetes can use to schedule applications that require state, such as MariaDB (Galera). While the project aims to provide a "batteries included" approach to persistent storage, we also want to allow operators to define their own solutions. Examples of this work will be documented in other sections, and evidence of it can be found throughout the project. If you find any issues or gaps, please create a story to track what can be done to improve our documentation.

Note

Please see the supported application versions outlined in the source variable file.

Other versions and considerations (such as other CNI SDN providers), config map data, and value overrides will be included in other documentation as we explore these options further.

The installation procedures below will take an administrator from a new kubeadm installation to an OpenStack-Helm deployment.

Note

Many of the default container images referenced across OpenStack-Helm charts are not intended for production use; for example, while LOCI and Kolla can be used to produce production-grade images, their public reference images are not. In addition, some of the default images use latest or master tags, which are moving targets and can lead to unpredictable behavior. For production-like deployments, we recommend building custom images, or at minimum caching a set of known images, and incorporating them into OpenStack-Helm via values overrides.
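For illustration, a values override file can pin chart images to controlled references. The snippet below is a hypothetical sketch: the registry, tags, and exact image keys should be taken from the images.tags section of the relevant chart's values.yaml rather than from this example.

tee /tmp/image-overrides.yaml << EOF
# Hypothetical image pins; verify the real keys in the target
# chart's values.yaml before use.
images:
  tags:
    db_init: registry.example.com/openstack/heat:pinned-tag
    db_sync: registry.example.com/openstack/glance:pinned-tag
EOF

Such a file can then be passed to the relevant chart as an extra Helm argument (for example, --values=/tmp/image-overrides.yaml) via whatever mechanism your copy of the deployment scripts supports.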

Warning

Until the Ubuntu kernel shipped with 16.04 supports CephFS subvolume mounts by default, the HWE kernel is required to use CephFS.

Kubernetes Preparation

You can use any Kubernetes deployment tool to bring up a working Kubernetes cluster for use with OpenStack-Helm. For production deployments, please choose (and tune appropriately) a highly resilient Kubernetes distribution, e.g.:

  • Airship, a declarative open cloud infrastructure platform
  • KubeADM, the foundation of a number of Kubernetes installation solutions

For a lab or proof-of-concept environment, the OpenStack-Helm gate scripts can be used to quickly deploy a multinode Kubernetes cluster using KubeADM and Ansible. Please refer to the deployment guide here.

Managing and configuring a Kubernetes cluster is beyond the scope of OpenStack-Helm and this guide.

Deploy OpenStack-Helm

Note

The following commands all assume that they are run from the /opt/openstack-helm directory.

Setup Clients on the host and assemble the charts

Setting up the OpenStack clients and Kubernetes RBAC rules, along with assembling the charts, can be performed by running the following command:

../../../tools/deployment/multinode/010-setup-client.sh

Alternatively, this step can be performed by running the script directly:

./tools/deployment/multinode/010-setup-client.sh
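Once the script completes, a quick sanity check is to confirm that the OpenStack client responds; this assumes the script installed the client into the system path.

openstack --version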

Deploy the ingress controller

export OSH_DEPLOY_MULTINODE=True

../../../tools/deployment/component/common/ingress.sh

Alternatively, this step can be performed by running the script directly:

OSH_DEPLOY_MULTINODE=True ./tools/deployment/component/common/ingress.sh
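Before moving on, it is worth confirming that the ingress controller pods reach the Running state. The check below avoids assumptions about namespaces or labels beyond the pod names containing "ingress":

kubectl get pods --all-namespaces -o wide | grep ingress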

Create loopback devices for CEPH

Create two loopback devices for Ceph: one to serve as the disk for OSD data, and the other as the disk for the block DB and block WAL.

ansible all -i /opt/openstack-helm-infra/tools/gate/devel/multinode-inventory.yaml -m shell -s -a "/opt/openstack-helm/tools/deployment/common/setup-ceph-loopback-device.sh"
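For reference, the loopback setup performed by this script is conceptually similar to the sketch below; the file paths and sizes here are illustrative, not the script's actual values.

# Back each loop device with a sparse file (sizes are illustrative).
sudo mkdir -p /var/lib/openstack-helm
sudo truncate -s 10G /var/lib/openstack-helm/ceph-osd-data.img
sudo truncate -s 8G /var/lib/openstack-helm/ceph-osd-db-wal.img
# Attach the files to the first available loop devices.
sudo losetup -f /var/lib/openstack-helm/ceph-osd-data.img
sudo losetup -f /var/lib/openstack-helm/ceph-osd-db-wal.img
# Show which /dev/loopN devices were assigned.
losetup -a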

Deploy Ceph

The script below configures Ceph to use the loopback devices created in the previous step as the backend for the Ceph OSDs. To configure a custom block device-based backend, please refer to the ceph-osd values.yaml.
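As an illustration only, a block-device backend override for the ceph-osd chart takes roughly the following shape. The exact schema and full set of options are defined in the chart's values.yaml, and the device paths and sizes below are hypothetical.

tee /tmp/ceph-osd-overrides.yaml << EOF
conf:
  storage:
    osd:
      - data:
          type: block-logical
          location: /dev/loop0   # hypothetical OSD data device
        block_db:
          location: /dev/loop1   # hypothetical block DB device
          size: "5GB"
        block_wal:
          location: /dev/loop1   # hypothetical block WAL device
          size: "2GB"
EOF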

Additional information on Kubernetes Ceph-based integration can be found in the documentation for the CephFS and RBD storage provisioners, as well as for the alternative NFS provisioner.

Warning

The upstream Ceph image repository does not currently pin tags to specific Ceph point releases. This can lead to unpredictable results in long-lived deployments. In production scenarios, we strongly recommend overriding the Ceph images to use either custom-built images or controlled, cached images.

Note

The ./tools/deployment/multinode/kube-node-subnet.sh script requires docker to run.

../../../tools/deployment/multinode/030-ceph.sh

Alternatively, this step can be performed by running the script directly:

./tools/deployment/multinode/030-ceph.sh
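After the chart deploys, overall cluster health can be checked from a Ceph monitor pod. The namespace and label selector below assume the chart defaults used by these scripts; adjust them to match your overrides.

MON_POD=$(kubectl -n ceph get pods -l application=ceph,component=mon -o jsonpath='{.items[0].metadata.name}')
kubectl -n ceph exec -t ${MON_POD} -- ceph -s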

Activate the openstack namespace to be able to use Ceph

../../../tools/deployment/multinode/040-ceph-ns-activate.sh

Alternatively, this step can be performed by running the script directly:

./tools/deployment/multinode/040-ceph-ns-activate.sh
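This step provisions the Ceph client key and storage class configuration that the openstack namespace consumes. A quick, non-authoritative check is to list the storage classes and look for the Ceph client secret (exact names depend on your overrides):

kubectl get storageclass
kubectl -n openstack get secrets | grep -i ceph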

Deploy MariaDB

../../../tools/deployment/multinode/050-mariadb.sh

Alternatively, this step can be performed by running the script directly:

./tools/deployment/multinode/050-mariadb.sh

Deploy RabbitMQ

../../../tools/deployment/multinode/060-rabbitmq.sh

Alternatively, this step can be performed by running the script directly:

./tools/deployment/multinode/060-rabbitmq.sh

Deploy Memcached

../../../tools/deployment/multinode/070-memcached.sh

Alternatively, this step can be performed by running the script directly:

./tools/deployment/multinode/070-memcached.sh

Deploy Keystone

../../../tools/deployment/multinode/080-keystone.sh

Alternatively, this step can be performed by running the script directly:

./tools/deployment/multinode/080-keystone.sh
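With Keystone up, the clients installed earlier can be used for an end-to-end check. This sketch assumes the client setup script configured a clouds.yaml entry named openstack_helm; adjust OS_CLOUD to whatever your environment actually defines.

export OS_CLOUD=openstack_helm
openstack endpoint list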

Deploy Rados Gateway for object store

../../../tools/deployment/multinode/090-ceph-radosgateway.sh

Alternatively, this step can be performed by running the script directly:

./tools/deployment/multinode/090-ceph-radosgateway.sh

Deploy Glance

../../../tools/deployment/multinode/100-glance.sh

Alternatively, this step can be performed by running the script directly:

./tools/deployment/multinode/100-glance.sh

Deploy Cinder

../../../tools/deployment/multinode/110-cinder.sh

Alternatively, this step can be performed by running the script directly:

./tools/deployment/multinode/110-cinder.sh

Deploy OpenvSwitch

../../../tools/deployment/multinode/120-openvswitch.sh

Alternatively, this step can be performed by running the script directly:

./tools/deployment/multinode/120-openvswitch.sh

Deploy Libvirt

../../../tools/deployment/multinode/130-libvirt.sh

Alternatively, this step can be performed by running the script directly:

./tools/deployment/multinode/130-libvirt.sh

Deploy Compute Kit (Nova and Neutron)

../../../tools/deployment/multinode/140-compute-kit.sh

Alternatively, this step can be performed by running the script directly:

./tools/deployment/multinode/140-compute-kit.sh
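Once the compute kit settles, a simple smoke test is to confirm that the compute services and network agents have registered, reusing the client credentials assumed in the Keystone check above.

openstack compute service list
openstack network agent list
openstack hypervisor list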

Deploy Heat

../../../tools/deployment/multinode/150-heat.sh

Alternatively, this step can be performed by running the script directly:

./tools/deployment/multinode/150-heat.sh

Deploy Barbican

../../../tools/deployment/multinode/160-barbican.sh

Alternatively, this step can be performed by running the script directly:

./tools/deployment/multinode/160-barbican.sh

Configure OpenStack

Configuring OpenStack for a particular production use-case is beyond the scope of this guide. Please refer to the OpenStack Configuration documentation for your selected version of OpenStack to determine what additional values overrides should be provided to the OpenStack-Helm charts to ensure appropriate networking, security, etc. is in place.