
Kubernetes

OpenStack-Helm provides charts that can be deployed on any Kubernetes cluster that meets the version requirements. However, deploying the Kubernetes cluster itself is beyond the scope of OpenStack-Helm.

You can use any Kubernetes deployment tool for this purpose. In this guide, we detail how to set up a Kubernetes cluster using Kubeadm and Ansible. While not production-ready, this cluster is ideal as a starting point for lab or proof-of-concept environments.

All OpenStack projects test their code on infrastructure managed by the CI tool Zuul, which executes Ansible playbooks on one or more test nodes. OpenStack-Helm therefore uses Ansible roles and playbooks to install the required packages, deploy Kubernetes, and then run its tests on the resulting cluster.

To establish a test environment, the Ansible role deploy-env is employed. It deploys a basic single- or multi-node Kubernetes cluster, used to prove the functionality of commonly used deployment configurations. The role is compatible with the Ubuntu Focal and Ubuntu Jammy distributions.

Note

The deploy-env role is not idempotent and is assumed to be applied to a clean environment.

Clone roles git repositories

Before proceeding with the steps outlined in the following sections, clone the git repositories containing the required Ansible roles:

mkdir ~/osh
cd ~/osh
git clone https://opendev.org/openstack/openstack-helm-infra.git
git clone https://opendev.org/zuul/zuul-jobs.git

Install Ansible

pip install ansible
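
To keep the installation isolated from the system Python, you may prefer to install Ansible into a virtual environment instead. A minimal sketch, assuming Python 3 with the venv module is available (the ~/osh/.venv path is only an example):

python3 -m venv ~/osh/.venv
. ~/osh/.venv/bin/activate
pip install ansible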

Set roles lookup path

Now let's set the ANSIBLE_ROLES_PATH environment variable, which specifies where Ansible looks up roles:

export ANSIBLE_ROLES_PATH=~/osh/openstack-helm-infra/roles:~/osh/zuul-jobs/roles

To avoid setting it every time you start a new terminal session, you can define it in the Ansible configuration file instead. Please see the Ansible documentation.
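
For example, the equivalent setting can be persisted via the roles_path option in an ansible.cfg file in the directory from which you run Ansible (a minimal sketch, assuming the repositories were cloned to ~/osh as above):

cat > ~/osh/ansible.cfg <<EOF
[defaults]
roles_path = ~/osh/openstack-helm-infra/roles:~/osh/zuul-jobs/roles
EOF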

Prepare inventory

The example below assumes that there are four nodes that must be reachable via SSH using public key authentication, and that an SSH user (let's say ubuntu) has passwordless sudo on all of them.

cat > ~/osh/inventory.yaml <<EOF
---
all:
  vars:
    ansible_port: 22
    ansible_user: ubuntu
    ansible_ssh_private_key_file: /home/ubuntu/.ssh/id_rsa
    ansible_ssh_extra_args: -o StrictHostKeyChecking=no
    # The user and group that will be used to run Kubectl and Helm commands.
    kubectl:
      user: ubuntu
      group: ubuntu
    # The user and group that will be used to run Docker commands.
    docker_users:
      - ubuntu
    # By default the deploy-env role sets up ssh key to make it possible
    # to connect to the k8s master node via ssh without a password.
    client_ssh_user: ubuntu
    cluster_ssh_user: ubuntu
    # The MetalLB controller will be installed on the Kubernetes cluster.
    metallb_setup: true
    # Loopback devices will be created on all cluster nodes which then can be used
    # to deploy a Ceph cluster which requires block devices to be provided.
    # Please use loopback devices only for testing purposes. They are not suitable
    # for production due to performance reasons.
    loopback_setup: true
    loopback_device: /dev/loop100
    loopback_image: /var/lib/openstack-helm/ceph-loop.img
    loopback_image_size: 12G
  children:
    # The primary node where Kubectl and Helm will be installed. If it is
    # the only node then it must be a member of the groups k8s_cluster and
    # k8s_control_plane. If there are more nodes then the wireguard tunnel
    # will be established between the primary node and the k8s_control_plane node.
    primary:
      hosts:
        primary:
          ansible_host: 10.10.10.10
    # The nodes where the Kubernetes components will be installed.
    k8s_cluster:
      hosts:
        node-1:
          ansible_host: 10.10.10.11
        node-2:
          ansible_host: 10.10.10.12
        node-3:
          ansible_host: 10.10.10.13
    # The control plane node where the Kubernetes control plane components will be installed.
    # It must be the only node in the group k8s_control_plane.
    k8s_control_plane:
      hosts:
        node-1:
          ansible_host: 10.10.10.11
    # These are the Kubernetes worker nodes. There can be zero such nodes,
    # in which case the OpenStack workloads will be deployed on the control plane node.
    k8s_nodes:
      hosts:
        node-2:
          ansible_host: 10.10.10.12
        node-3:
          ansible_host: 10.10.10.13
EOF
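
Before running the playbook, it is worth checking that all nodes are reachable with the credentials from the inventory. A quick sanity check using the Ansible ping module:

ansible -i ~/osh/inventory.yaml all -m ping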

Note

If you would like to set up a Kubernetes cluster on the local host, configure the Ansible inventory to designate the primary node as the local host. For further guidance, please refer to the Ansible documentation.
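
For instance, a single-node local deployment could use a local connection and place the same host into all of the required groups (a sketch only, following the group layout described above; the role's defaults apply for the remaining variables):

cat > ~/osh/inventory.yaml <<EOF
---
all:
  vars:
    ansible_connection: local
  children:
    primary:
      hosts:
        primary:
    k8s_cluster:
      hosts:
        primary:
    k8s_control_plane:
      hosts:
        primary:
EOF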

Note

The full list of variables that you can define in the inventory file can be found in the file deploy-env/defaults/main.yaml.
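
You can inspect these defaults directly in the cloned repository:

less ~/osh/openstack-helm-infra/roles/deploy-env/defaults/main.yaml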

Prepare playbook

Create an Ansible playbook that will deploy the environment:

cat > ~/osh/deploy-env.yaml <<EOF
---
- hosts: all
  become: true
  gather_facts: true
  roles:
    - ensure-python
    - ensure-pip
    - clear-firewall
    - deploy-env
EOF
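
Optionally, verify the playbook and confirm which hosts it will affect before applying it (both are standard ansible-playbook flags):

cd ~/osh
ansible-playbook -i inventory.yaml --syntax-check deploy-env.yaml
ansible-playbook -i inventory.yaml --list-hosts deploy-env.yaml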

Run the playbook

cd ~/osh
ansible-playbook -i inventory.yaml deploy-env.yaml

The playbook only changes the state of the nodes listed in the inventory file.

It installs the necessary packages and deploys and configures Containerd and Kubernetes. For details, please refer to the deploy-env role and the other roles (ensure-python, ensure-pip, clear-firewall) used in the playbook.
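
Once the playbook finishes, a quick smoke test can be run from the primary node, where Kubectl is installed. A sketch, assuming the ubuntu user and the addresses from the example inventory:

ssh ubuntu@10.10.10.10
kubectl get nodes
kubectl get pods --all-namespaces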

Note

The deploy-env role configures the cluster nodes to use the Google DNS servers (8.8.8.8).

By default, it also configures the internal Kubernetes DNS server (CoreDNS) to work as a recursive DNS server and adds its IP address (10.96.0.10 by default) to the /etc/resolv.conf file.

Processes running on the cluster nodes are thus able to resolve internal Kubernetes domain names such as *.svc.cluster.local.
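
This can be verified from any cluster node once the deployment completes, for example (assuming the default CoreDNS setup is in place):

nslookup kubernetes.default.svc.cluster.local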