=========
Multinode
=========

Overview
========

In order to drive towards a production-ready OpenStack solution, our goal is
to provide containerized, yet stable `persistent volumes `_ that Kubernetes
can use to schedule applications that require state, such as MariaDB
(Galera). Although we assume that the project should provide a "batteries
included" approach towards persistent storage, we want to allow operators to
define their own solution as well. Examples of this work will be documented
in another section; however, evidence of this is found throughout the
project. If you find any issues or gaps, please create a `story `_ to track
what can be done to improve our documentation.

.. note::
  Please see the supported application versions outlined in the
  `source variable file `_.

Other versions and considerations (such as other CNI SDN providers), config
map data, and value overrides will be included in other documentation as we
explore these options further.

The installation procedures below will take an administrator from a new
``kubeadm`` installation to an OpenStack-Helm deployment.

.. warning::
  Until the Ubuntu kernel shipped with 16.04 supports CephFS subvolume
  mounts by default, the `HWE Kernel <../troubleshooting/ubuntu-hwe-kernel.rst>`__
  is required to use CephFS.

Kubernetes Preparation
======================

You can use any Kubernetes deployment tool to bring up a working Kubernetes
cluster for use with OpenStack-Helm. For simplicity, however, we will
describe deployment using the OpenStack-Helm gate scripts to bring up a
reference cluster using KubeADM and Ansible.

OpenStack-Helm Infra KubeADM deployment
---------------------------------------

.. note::
  Throughout this guide the assumption is that the user is: ``ubuntu``.
  Because this user has to execute root level commands remotely on other
  nodes, it is advised to add the following lines to ``/etc/sudoers`` on
  each node:

  ``root ALL=(ALL) NOPASSWD: ALL``

  ``ubuntu ALL=(ALL) NOPASSWD: ALL``

On the master node install the latest versions of Git, CA Certs & Make if
necessary:

.. literalinclude:: ../../../tools/deployment/developer/common/000-install-packages.sh
  :language: shell
  :lines: 1,17-

On the worker nodes:

.. code-block:: shell

  #!/bin/bash
  set -xe
  sudo apt-get update
  sudo apt-get install --no-install-recommends -y git

SSH-Key preparation
-------------------

Create an ssh-key on the master node, and add the public key to each node
that you intend to join to the cluster.

.. note::
  1. To generate the key you can use ``ssh-keygen -t rsa``.
  2. To copy the ssh key to each node, use the ``ssh-copy-id`` command, for
     example: ``ssh-copy-id ubuntu@192.168.122.178``
  3. Copy the key: ``sudo cp ~/.ssh/id_rsa /etc/openstack-helm/deploy-key.pem``
  4. Set correct ownership: ``sudo chown ubuntu /etc/openstack-helm/deploy-key.pem``

  Test this by ssh'ing to a node and then executing a command with
  ``sudo``. Neither operation should require a password.

Clone the OpenStack-Helm Repos
------------------------------

Once the host has been configured, the repos containing the OpenStack-Helm
charts should be cloned onto each node in the cluster:

.. code-block:: shell

  #!/bin/bash
  set -xe
  sudo chown -R ubuntu: /opt
  git clone https://git.openstack.org/openstack/openstack-helm-infra.git /opt/openstack-helm-infra
  git clone https://git.openstack.org/openstack/openstack-helm.git /opt/openstack-helm

Create an inventory file
------------------------

On the master node create an inventory file for the cluster:

.. note::
  node_one, node_two and node_three below are all worker nodes, children of
  the master node that the commands below are executed on.
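The numbered SSH-key preparation steps above can be sketched as a single
script. This is a sketch only: it works against a scratch directory so it is
safe to run anywhere, and the ``ssh-copy-id`` step is left commented because
it needs a reachable worker node. On a real master node you would use
``~/.ssh`` and ``/etc/openstack-helm`` instead of the temporary paths.

```shell
#!/bin/bash
set -xe

# Scratch directory standing in for ~/.ssh and /etc/openstack-helm.
KEY_DIR="$(mktemp -d)"

# 1. Generate an RSA key pair; no passphrase, so Ansible can use it
#    unattended. Skip this if a key already exists.
ssh-keygen -t rsa -N '' -f "${KEY_DIR}/id_rsa"

# 2. Copy the public key to each node joining the cluster, e.g.:
#      ssh-copy-id ubuntu@192.168.122.178
#    (left commented here; the address is a placeholder)

# 3. Install the private key where the gate playbooks expect the deploy key
#    (really /etc/openstack-helm/deploy-key.pem, copied with sudo).
cp "${KEY_DIR}/id_rsa" "${KEY_DIR}/deploy-key.pem"

# 4. Restrict permissions; on a real node also:
#      sudo chown ubuntu /etc/openstack-helm/deploy-key.pem
chmod 600 "${KEY_DIR}/deploy-key.pem"
```

After running the real equivalents, ssh to a node and run a command with
``sudo``; neither should prompt for a password.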
.. code-block:: shell

  #!/bin/bash
  set -xe
  cat > /opt/openstack-helm-infra/tools/gate/devel/multinode-inventory.yaml <
  cat > /opt/openstack-helm-infra/tools/gate/devel/multinode-vars.yaml <
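The heredoc bodies for the inventory and vars files are truncated in this
guide. Purely as an illustration, an Ansible inventory for such a cluster
might take a shape like the following; the hostnames, IP addresses, and
scratch output path are placeholders and not the gate scripts' canonical
content, so consult the files shipped in ``openstack-helm-infra`` for the
authoritative format.

```shell
#!/bin/bash
set -xe

# Illustrative only: write a sample inventory to a scratch location. On the
# master node the real target is
# /opt/openstack-helm-infra/tools/gate/devel/multinode-inventory.yaml.
INVENTORY="$(mktemp -d)/multinode-inventory.yaml"

cat > "${INVENTORY}" <<'EOF'
all:
  children:
    primary:
      hosts:
        master:
          ansible_port: 22
          ansible_host: 192.168.122.2
          ansible_user: ubuntu
          ansible_ssh_private_key_file: /etc/openstack-helm/deploy-key.pem
          ansible_ssh_extra_args: -o StrictHostKeyChecking=no
    nodes:
      hosts:
        node_one:
          ansible_port: 22
          ansible_host: 192.168.122.3
          ansible_user: ubuntu
          ansible_ssh_private_key_file: /etc/openstack-helm/deploy-key.pem
          ansible_ssh_extra_args: -o StrictHostKeyChecking=no
EOF

cat "${INVENTORY}"
```

The ``ansible_ssh_private_key_file`` entries point at the deploy key created
during the SSH-key preparation step, which is why that key must exist on the
master node before the playbooks run.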