Juju Charm - Magnum

Overview

This charm provides the Magnum service for an OpenStack Cloud.

OpenStack Ussuri or later is required.

Usage

Magnum and the Magnum charm rely on services from a fully functional OpenStack Cloud: they expect to consume images from Glance, consume certificate secrets from Barbican (preferably backed by Vault), and spin up Kubernetes clusters with Heat. Magnum also requires the other core OpenStack services to be deployed via Juju charms, specifically: mysql, rabbitmq-server, keystone and nova-cloud-controller. The following assumes these services have already been deployed.
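
As a rough illustration only, a minimal deployment might look like the following. The application names other than magnum (for example magnum-mysql-router) are placeholders, and the exact relation endpoints, channels and any additional applications (barbican, heat, vault, a dedicated mysql-router) depend on your cloud, so treat this as a sketch rather than a complete recipe.

juju deploy magnum
juju add-relation magnum keystone
juju add-relation magnum rabbitmq-server
juju add-relation magnum magnum-mysql-router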

Required configuration

After deployment of the cloud, the domain-setup action must be run to configure required domains, roles and users in the cloud for Magnum clusters.

juju run-action magnum/0 domain-setup
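
On Juju 3.x, where run-action was replaced by run, the equivalent invocation is:

juju run magnum/0 domain-setup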

Magnum generates and maintains a certificate for each cluster so that it can communicate with the cluster securely. These certificates must therefore be stored in a secure manner. Magnum supports several certificate stores; the choice is configured in /etc/magnum/magnum.conf, in the [certificates] section, via the cert_manager_type parameter. Valid values are: barbican, x509keypair, local.
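
For example, with Barbican as the certificate store the relevant portion of magnum.conf looks like the excerpt below. This is purely illustrative of the rendered configuration; the charm renders this file, so you would not normally edit it by hand.

[certificates]
cert_manager_type = barbican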

trustee-domain - Domain used for COE (Container Orchestration Engine) operations

trustee-admin - Domain admin for the trustee-domain
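
After running the action you can confirm that the trust domain and its admin user exist. The domain name below assumes the charm default of magnum; adjust it if you changed the trustee-domain option.

openstack domain list
openstack user list --domain magnum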

Deploy a Kubernetes cluster

When Magnum deploys a Kubernetes cluster, it uses parameters defined in the ClusterTemplate and passed to the cluster create command, for example:

openstack coe cluster template create k8s-cluster-template \
                           --image fedora-coreos-latest \
                           --keypair testkey \
                           --external-network public \
                           --dns-nameserver 8.8.8.8 \
                           --flavor m1.small \
                           --docker-volume-size 5 \
                           --network-driver flannel \
                           --coe kubernetes
openstack coe cluster create k8s-cluster \
                      --cluster-template k8s-cluster-template \
                      --master-count 3 \
                      --node-count 8

Refer to the ClusterTemplate and Cluster sections for the full list of parameters.
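
Once the cluster reaches CREATE_COMPLETE you can inspect it and retrieve a kubeconfig for it with the standard Magnum client commands, for example as below (the --dir path is arbitrary, and kubectl is assumed to be installed on the client):

openstack coe cluster list
openstack coe cluster show k8s-cluster
openstack coe cluster config k8s-cluster --dir ~/k8s-cluster
export KUBECONFIG=~/k8s-cluster/config
kubectl get nodes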

Options

This section covers common and/or important configuration options. See file config.yaml for the full list of options, along with their descriptions and default values. See the Juju documentation for details on configuring applications.
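
Options are inspected and changed with juju config. For example, to list the current values and then set the OpenStack release the charm installs from (openstack-origin is the usual option for OpenStack API charms; the value shown is only an example):

juju config magnum
juju config magnum openstack-origin=cloud:jammy-bobcat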

High availability

When more than one unit is deployed with the hacluster application, the charm will bring up an HA active/active cluster.

There are two mutually exclusive high availability options: using virtual IP(s) or DNS. In both cases the hacluster subordinate charm is used to provide the Corosync and Pacemaker backend HA functionality.
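
A minimal sketch of the virtual IP approach, assuming the charm exposes the standard vip option used by OpenStack API charms and that 10.0.0.100 is a free address on the appropriate network:

juju config magnum vip=10.0.0.100
juju deploy hacluster magnum-hacluster
juju add-relation magnum magnum-hacluster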

See Infrastructure high availability in the OpenStack Charms Deployment Guide for details.

Bugs

Please report bugs on Launchpad.

For general charm questions refer to the OpenStack Charm Guide.