
MaaS Helm Artifacts

This repository holds artifacts supporting the deployment of Canonical MaaS in a Kubernetes cluster.

Images

The MaaS install is made up of two required images and one optional image. The Dockerfiles in this repository can be used to build all three. These images are intended to be deployed via a Kubernetes Helm chart.

MaaS Region Controller

The regiond Dockerfile builds a systemd-based Docker image that runs the MaaS region API server and metadata server.

MaaS Rack Controller

The rackd Dockerfile builds a systemd-based Docker image that runs the MaaS rack controller and its dependent services (DHCPd, TFTPd, and so on). This image must run with host networking in privileged mode to function, as sketched below.
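
For illustration, the rack controller's pod spec needs settings along these lines (a minimal sketch; the container name and image reference are placeholders, and the field layout in the chart's actual templates may differ):

    # Minimal sketch: host networking plus a privileged container,
    # as required for the rack controller's DHCP/TFTP services.
    spec:
      hostNetwork: true
      containers:
        - name: maas-rack              # hypothetical container name
          image: maas-rack:latest      # hypothetical image reference
          securityContext:
            privileged: true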

MaaS Image Cache

The cache image Dockerfile provides a point-in-time mirror of the maas.io image repository, so that a local copy of Ubuntu is available when MaaS is deployed somewhere without network connectivity. Currently it mirrors only Ubuntu 16.04 (Xenial), and the mirror is not updated after the image is built.

Charts

Also provided is a Kubernetes Helm chart that deploys the MaaS pieces and integrates them. This chart depends on an existing Postgres deployment. The recommended route is the OpenStack-Helm Postgres chart, but any Postgres instance should work.

Overrides

Chart overrides are likely required to deploy MaaS into your environment; the following keys can be set in a values override file (see the sketch after this list).

  • values.labels.rack.node_selector_key - The Kubernetes label key used to select nodes for the rack controller
  • values.labels.rack.node_selector_value - The Kubernetes label value used to select nodes for the rack controller
  • values.labels.region.node_selector_key - The Kubernetes label key used to select nodes for the region controller
  • values.labels.region.node_selector_value - The Kubernetes label value used to select nodes for the region controller
  • values.conf.cache.enabled - Boolean controlling whether the image cache sidecar is included in the deployment
  • values.conf.maas.url.maas_url - The URL rack controllers and nodes should use to reach the region API (e.g. http://10.10.10.10:8080/MAAS)
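
As an illustration, the overrides above might be collected into a values file like the following (the label keys and values shown are placeholders; use labels that actually exist on your nodes):

    # values-overrides.yaml -- illustrative values only
    labels:
      rack:
        node_selector_key: maas-rack     # placeholder label key
        node_selector_value: enabled     # placeholder label value
      region:
        node_selector_key: maas-region
        node_selector_value: enabled
    conf:
      cache:
        enabled: true                    # include the image cache sidecar
      maas:
        url:
          maas_url: http://10.10.10.10:8080/MAAS

A file like this would then be passed to Helm at install time (for example via the --values flag).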

Deployment Flow

During deployment, the chart executes the following steps:

  1. Initializes the Postgres database for MaaS
  2. Starts a Pod with the region controller and, optionally, the image cache sidecar container
  3. Once the region controller is running, deploys a Pod with the rack controller and joins it to the region controller
  4. Initializes the MaaS configuration and starts the image sync
  5. Exports an API key into a Kubernetes secret so other Pods can access the API if needed (see the sketch below)
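
For example, another Pod could consume the exported API key as an environment variable along these lines (the secret name and key shown here are hypothetical; check the chart's templates for the actual names):

    # Illustrative only: secret and key names are assumptions.
    env:
      - name: MAAS_API_KEY
        valueFrom:
          secretKeyRef:
            name: maas-api-key      # hypothetical secret name
            key: token              # hypothetical key within the secret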