
Multinode

Overview

In order to drive towards a production-ready OpenStack solution, our goal is to provide containerized, yet stable persistent volumes that Kubernetes can use to schedule applications that require state, such as MariaDB (Galera). Although we assume that the project should provide a "batteries included" approach towards persistent storage, we want to allow operators to define their own solution as well. Examples of this work will be documented in another section; however, evidence of this is found throughout the project. If you find any issues or gaps, please create a story to track what can be done to improve our documentation.

Note

Please see the supported application versions outlined in the source variable file.

Other versions and considerations (such as other CNI SDN providers), config map data, and value overrides will be included in other documentation as we explore these options further.

The installation procedures below will take an administrator from a new kubeadm installation to an OpenStack-Helm deployment.

Warning

Until the Ubuntu kernel shipped with 16.04 supports CephFS subvolume mounts by default, the HWE kernel is required to use CephFS.

Kubernetes Preparation

You can use any Kubernetes deployment tool to bring up a working Kubernetes cluster for use with OpenStack-Helm. For simplicity, however, we will describe deployment using the OpenStack-Helm gate scripts to bring up a reference cluster with kubeadm and Ansible.

OpenStack-Helm Infra KubeADM deployment

Note

Throughout this guide the assumption is that the user is ubuntu. Because this user has to execute root-level commands remotely on other nodes, it is advised to add the following lines to /etc/sudoers on each node:

root ALL=(ALL) NOPASSWD: ALL

ubuntu ALL=(ALL) NOPASSWD: ALL

On the master node, install the latest versions of Git, CA certificates, and Make if necessary:

../../../tools/deployment/developer/common/000-install-packages.sh
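If you would rather install the packages by hand than run the gate script, a minimal equivalent covering the tools named above looks like the following; treat the exact package set as an assumption and consult the script for the authoritative list:

#!/bin/bash
set -xe
sudo apt-get update
# Install the tools named above; the gate script may install additional
# packages, so this is only a minimal sketch.
sudo apt-get install --no-install-recommends -y git make ca-certificates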

On the worker nodes:

#!/bin/bash
set -xe
sudo apt-get update
sudo apt-get install --no-install-recommends -y git

SSH-Key preparation

Create an ssh-key on the master node, and add the public key to each node that you intend to join the cluster.

Note

1. To generate the key, you can use ssh-keygen -t rsa
2. To copy the public key to each node, use the ssh-copy-id command, for example: ssh-copy-id ubuntu@192.168.122.178
3. Copy the private key: sudo cp ~/.ssh/id_rsa /etc/openstack-helm/deploy-key.pem
4. Set correct ownership: sudo chown ubuntu /etc/openstack-helm/deploy-key.pem

Test this by SSHing to a node and then executing a command with sudo. Neither operation should require a password. For example:
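#!/bin/bash
set -xe
# Both the SSH login and the remote sudo should complete without a password
# prompt. The IP address below is a placeholder for one of your worker nodes.
ssh -i /etc/openstack-helm/deploy-key.pem ubuntu@192.168.122.178 'sudo whoami'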

Clone the OpenStack-Helm Repos

Once the host has been configured, the repos containing the OpenStack-Helm charts should be cloned onto each node in the cluster:

#!/bin/bash
set -xe

sudo chown -R ubuntu: /opt
git clone https://git.openstack.org/openstack/openstack-helm-infra.git /opt/openstack-helm-infra
git clone https://git.openstack.org/openstack/openstack-helm.git /opt/openstack-helm

Create an inventory file

On the master node create an inventory file for the cluster:

Note

node_one, node_two, and node_three below are all worker nodes; the commands below are executed on the master node.

#!/bin/bash
set -xe
cat > /opt/openstack-helm-infra/tools/gate/devel/multinode-inventory.yaml <<EOF
all:
  children:
    primary:
      hosts:
        node_one:
          ansible_port: 22
          ansible_host: $node_one_ip
          ansible_user: ubuntu
          ansible_ssh_private_key_file: /etc/openstack-helm/deploy-key.pem
          ansible_ssh_extra_args: -o StrictHostKeyChecking=no
    nodes:
      hosts:
        node_two:
          ansible_port: 22
          ansible_host: $node_two_ip
          ansible_user: ubuntu
          ansible_ssh_private_key_file: /etc/openstack-helm/deploy-key.pem
          ansible_ssh_extra_args: -o StrictHostKeyChecking=no
        node_three:
          ansible_port: 22
          ansible_host: $node_three_ip
          ansible_user: ubuntu
          ansible_ssh_private_key_file: /etc/openstack-helm/deploy-key.pem
          ansible_ssh_extra_args: -o StrictHostKeyChecking=no
EOF
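The snippet above assumes that $node_one_ip, $node_two_ip, and $node_three_ip are already set in your shell. If they are not, export them before generating the inventory; the addresses below are placeholders for your actual worker node IPs:

#!/bin/bash
set -xe
# Replace these example addresses with the real IPs of your worker nodes
# before generating the inventory file above.
export node_one_ip=192.168.122.101
export node_two_ip=192.168.122.102
export node_three_ip=192.168.122.103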

Create an environment file

On the master node create an environment file for the cluster:

#!/bin/bash
set -xe
function net_default_iface {
 sudo ip -4 route list 0/0 | awk '{ print $5; exit }'
}
cat > /opt/openstack-helm-infra/tools/gate/devel/multinode-vars.yaml <<EOF
kubernetes_network_default_device: $(net_default_iface)
EOF

Note

By default, this installation will use the Google DNS servers 8.8.8.8 and 8.8.4.4 and will update resolv.conf accordingly. These DNS nameserver entries can be changed by updating the external_dns_nameservers section of the file /openstack-helm-infra/tools/images/kubeadm-aio/assets/opt/playbooks/vars.yaml. This change must be made on each node in your cluster.
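As a sketch, the relevant portion of that vars.yaml is a list under the external_dns_nameservers key; replace the entries with your own resolvers as needed (the exact surrounding layout of the file may differ):

external_dns_nameservers:
  - 8.8.8.8
  - 8.8.4.4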

Run the playbooks

On the master node run the playbooks:

#!/bin/bash
set -xe
cd /opt/openstack-helm-infra
make dev-deploy setup-host multinode
make dev-deploy k8s multinode
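Once the playbooks have finished, you can sanity-check the cluster from the master node:

#!/bin/bash
set -xe
# All nodes should report Ready once the playbooks have completed.
kubectl get nodes -o wide
# The core Kubernetes components should be Running.
kubectl get pods --all-namespaces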

Deploy OpenStack-Helm

Note

The following commands all assume that they are run from the /opt/openstack-helm directory.

Setup Clients on the host and assemble the charts

Installing the OpenStack clients and Kubernetes RBAC rules, along with assembling the charts, can be performed by running the following commands:

../../../tools/deployment/multinode/010-setup-client.sh

Alternatively, this step can be performed by running the script directly:

./tools/deployment/multinode/010-setup-client.sh
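Once the script completes, you can confirm that the OpenStack client was installed on the host, for example:

#!/bin/bash
set -xe
# Confirm the OpenStack client is installed and on the PATH.
openstack --version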

Deploy the ingress controller

../../../tools/deployment/multinode/020-ingress.sh

Alternatively, this step can be performed by running the script directly:

./tools/deployment/multinode/020-ingress.sh
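As a rough check, the ingress controller pods should reach the Running state; which namespaces receive an ingress controller is an assumption based on a default deployment:

#!/bin/bash
set -xe
# List the ingress pods across all namespaces.
kubectl get pods --all-namespaces -o wide | grep ingress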

Deploy Ceph

Note

The ./tools/deployment/multinode/kube-node-subnet.sh script requires docker to run.

../../../tools/deployment/multinode/030-ceph.sh

Alternatively, this step can be performed by running the script directly:

./tools/deployment/multinode/030-ceph.sh
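A minimal sketch of how to verify the Ceph deployment, assuming the charts were deployed into the ceph namespace; the pod name below is a placeholder:

#!/bin/bash
set -xe
# The namespace name is an assumption based on a default deployment.
kubectl get pods --namespace=ceph -o wide
# Once a ceph-mon pod is Running, cluster health can be checked from inside it.
# <ceph-mon-pod-name> is a placeholder for a pod name taken from the listing above.
kubectl exec --namespace=ceph <ceph-mon-pod-name> -- ceph -s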

Activate the openstack namespace to be able to use Ceph

../../../tools/deployment/multinode/040-ceph-ns-activate.sh

Alternatively, this step can be performed by running the script directly:

./tools/deployment/multinode/040-ceph-ns-activate.sh
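After activation, a Ceph RBD backed StorageClass should be available for the openstack namespace to consume; its exact name depends on the chart values:

#!/bin/bash
set -xe
# List the storage classes; a Ceph-backed class should be present.
kubectl get storageclass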

Deploy MariaDB

../../../tools/deployment/multinode/050-mariadb.sh

Alternatively, this step can be performed by running the script directly:

./tools/deployment/multinode/050-mariadb.sh

Deploy RabbitMQ

../../../tools/deployment/multinode/060-rabbitmq.sh

Alternatively, this step can be performed by running the script directly:

./tools/deployment/multinode/060-rabbitmq.sh

Deploy Memcached

../../../tools/deployment/multinode/070-memcached.sh

Alternatively, this step can be performed by running the script directly:

./tools/deployment/multinode/070-memcached.sh

Deploy Keystone

../../../tools/deployment/multinode/080-keystone.sh

Alternatively, this step can be performed by running the script directly:

./tools/deployment/multinode/080-keystone.sh
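Once Keystone is up, you can exercise it with the OpenStack client. The openstack_helm cloud name below is an assumption about the entry the client setup script writes to clouds.yaml; check /etc/openstack/clouds.yaml to confirm:

#!/bin/bash
set -xe
# Authenticate against the newly deployed Keystone and list its catalog.
export OS_CLOUD=openstack_helm
openstack service list
openstack endpoint list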

Create Ceph endpoints and a service account for use with Keystone

../../../tools/deployment/multinode/090-ceph-radosgateway.sh

Alternatively, this step can be performed by running the script directly:

./tools/deployment/multinode/090-ceph-radosgateway.sh

Deploy Glance

../../../tools/deployment/multinode/100-glance.sh

Alternatively, this step can be performed by running the script directly:

./tools/deployment/multinode/100-glance.sh
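A quick check that the image service responds, assuming the openstack_helm clouds.yaml entry described above:

#!/bin/bash
set -xe
# Uses the same clouds.yaml entry as in the Keystone check above.
export OS_CLOUD=openstack_helm
openstack image list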

Deploy Cinder

../../../tools/deployment/multinode/110-cinder.sh

Alternatively, this step can be performed by running the script directly:

./tools/deployment/multinode/110-cinder.sh
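A quick check that the block storage services registered themselves, assuming the openstack_helm clouds.yaml entry described above:

#!/bin/bash
set -xe
export OS_CLOUD=openstack_helm
# The Cinder scheduler and volume services should report as up.
openstack volume service list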

Deploy OpenvSwitch

../../../tools/deployment/multinode/120-openvswitch.sh

Alternatively, this step can be performed by running the script directly:

./tools/deployment/multinode/120-openvswitch.sh

Deploy Libvirt

../../../tools/deployment/multinode/130-libvirt.sh

Alternatively, this step can be performed by running the script directly:

./tools/deployment/multinode/130-libvirt.sh

Deploy Compute Kit (Nova and Neutron)

../../../tools/deployment/multinode/140-compute-kit.sh

Alternatively, this step can be performed by running the script directly:

./tools/deployment/multinode/140-compute-kit.sh
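A quick check of the compute and networking services, assuming the openstack_helm clouds.yaml entry described above:

#!/bin/bash
set -xe
export OS_CLOUD=openstack_helm
# Nova services and Neutron agents should report as up on the worker nodes.
openstack compute service list
openstack network agent list
openstack hypervisor list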

Deploy Heat

../../../tools/deployment/multinode/150-heat.sh

Alternatively, this step can be performed by running the script directly:

./tools/deployment/multinode/150-heat.sh
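A quick check that the orchestration service is registered, assuming the openstack_helm clouds.yaml entry described above:

#!/bin/bash
set -xe
export OS_CLOUD=openstack_helm
# The Heat engine should be registered once the chart is up.
openstack orchestration service list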

Deploy Barbican

../../../tools/deployment/multinode/160-barbican.sh

Alternatively, this step can be performed by running the script directly:

./tools/deployment/multinode/160-barbican.sh
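A quick check that the key manager API is reachable, assuming the openstack_helm clouds.yaml entry described above:

#!/bin/bash
set -xe
export OS_CLOUD=openstack_helm
# Listing secrets requires the barbican plugin for the OpenStack client
# (python-barbicanclient); an empty list still confirms the API is reachable.
openstack secret list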