Juju Charm - Magnum

Overview

This charm provides the Magnum service for an OpenStack Cloud.

OpenStack Ussuri or later is required.

Usage

Magnum and the Magnum charm rely on services from a fully functional OpenStack Cloud: they expect to consume images from Glance, consume certificate secrets from Barbican (preferably backed by Vault), and spin up Kubernetes clusters with Heat. Magnum also requires the other core OpenStack services to be deployed via Juju charms, specifically mysql, rabbitmq-server, keystone and nova-cloud-controller. The following assumes these services have already been deployed.
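
As a minimal sketch (assuming the core applications above already exist in the model under those names, and letting Juju infer the relation endpoints; adjust application names and channels to your deployment), the charm can be deployed and related to them with:

juju deploy magnum
juju add-relation magnum mysql             # shared database
juju add-relation magnum rabbitmq-server   # messaging
juju add-relation magnum keystone          # identity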

Required configuration

After deployment of the cloud, the domain-setup action must be run to configure required domains, roles and users in the cloud for Magnum clusters.

juju run-action magnum/0 domain-setup
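
With a Juju 2.x client (which the run-action syntax above implies), adding the --wait flag blocks until the action completes and prints its output inline:

juju run-action --wait magnum/0 domain-setup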

Magnum generates and maintains a certificate for each cluster so that it can communicate securely with that cluster. As a result, the certificates must be stored in a secure manner. Magnum provides several methods for storing them; the method is configured in /etc/magnum/magnum.conf, in the [certificates] section, via the cert_manager_type parameter. Valid values are: barbican, x509keypair, local.
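
For example, to have Magnum store cluster certificates in Barbican (a plain configuration sketch; the other listed values are substituted the same way):

[certificates]
# One of: barbican, x509keypair, local
cert_manager_type = barbican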

trustee-domain - Domain used for COE (Container Orchestration Engine) clusters

trustee-admin - Domain admin for the trustee-domain
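
Assuming these are exposed as options in the charm's config.yaml (see that file for the authoritative names and defaults), they can be inspected or set with standard Juju configuration commands; the value below is illustrative only:

juju config magnum trustee-domain                 # show the current value
juju config magnum trustee-admin=trustee_admin    # example value, adjust for your cloud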

Deploy a Kubernetes cluster

When Magnum deploys a Kubernetes cluster, it uses parameters defined in the ClusterTemplate and specified on the cluster-create command, for example:

openstack coe cluster template create k8s-cluster-template \
                           --image fedora-coreos-latest \
                           --keypair testkey \
                           --external-network public \
                           --dns-nameserver 8.8.8.8 \
                           --flavor m1.small \
                           --docker-volume-size 5 \
                           --network-driver flannel \
                           --coe kubernetes
openstack coe cluster create k8s-cluster \
                      --cluster-template k8s-cluster-template \
                      --master-count 3 \
                      --node-count 8
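
Once created, cluster progress and access credentials can be checked with the standard Magnum client commands (the output directory below is an arbitrary example):

openstack coe cluster list
openstack coe cluster show k8s-cluster
openstack coe cluster config k8s-cluster --dir ~/k8s-cluster   # writes a kubeconfig for kubectl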

Refer to the ClusterTemplate and Cluster sections of the Magnum documentation for the full list of parameters.

Configuration

This section covers common and/or important configuration options. See file config.yaml for the full list of options, along with their descriptions and default values. See the Juju documentation for details on configuring applications.
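
All exposed options and their current values can be listed with the standard Juju command:

juju config magnum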

High availability

When more than one unit is deployed with the hacluster application, the charm will bring up an HA active/active cluster.

There are two mutually exclusive high availability options: using virtual IP(s) or DNS. In both cases the hacluster subordinate charm is used to provide the Corosync and Pacemaker backend HA functionality.

See Infrastructure high availability in the OpenStack Charms Deployment Guide for details.
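
As a sketch of the virtual IP approach (the address and subordinate application name are placeholders, and the vip option is assumed to be exposed by the charm as it is for other OpenStack API charms):

juju config magnum vip=10.0.0.100      # placeholder address reachable on the API network
juju deploy hacluster magnum-hacluster
juju add-relation magnum magnum-hacluster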

Bugs

Please report bugs on Launchpad.

For general charm questions refer to the OpenStack Charm Guide.