
drydock_provisioner
A Python REST orchestrator to translate a YAML host topology into a provisioned set of hosts and provide a set of post-provisioning instructions.
See full documentation in docs/source/index.rst.
Required
- Python 3.5+
- A running instance of Postgres v9.5+
- A running instance of OpenStack Keystone with the v3 API enabled
- A running instance of Canonical MaaS v2.2+
Recommended
- A running Kubernetes cluster with Helm initialized
- Familiarity with the AT&T Community Undercloud Platform (UCP) suite of services
Building
This service is intended to be built as a Docker container, not as a standalone Python package. That being said, instructions are included below for building as a package and as an image.
Virtualenv
To build and install Drydock locally in a virtualenv, first generate the configuration and policy file templates to be customized:
$ tox -e genconfig
$ tox -e genpolicy
$ virtualenv -p python3.5 /var/tmp/drydock
$ . /var/tmp/drydock/bin/activate
$ pip install -r requirements-lock.txt
$ pip install .
$ cp -r etc/drydock /etc/drydock
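With the package installed and configuration in place, the API server can be started under uWSGI. The sketch below is illustrative only: the port, the WSGI module path, and the worker/thread counts are assumptions, so check the entrypoint script used by the Docker image for the authoritative command line.
$ pip install uwsgi
# The module path below is an assumption for illustration only
$ uwsgi --http :9000 --enable-threads --workers 1 --threads 4 --module drydock_provisioner.drydock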
Docker image
$ docker build . -t drydock
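For a quick local smoke test, the built image can be run directly with Docker. A minimal sketch, assuming the API listens on port 9000 and reads its configuration from /etc/drydock; adjust both to match your build.
$ docker run -d --name drydock -p 9000:9000 -v /etc/drydock:/etc/drydock drydock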
Running
The preferred deployment pattern for Drydock is via a Helm chart that deploys Drydock into a Kubernetes cluster. Using the rest of the UCP services adds functionality for deploying (Armada) and operating (Promenade, Deckhand) Drydock.
You can see an example of a full UCP deployment in the UCP Integration repository.
Stand up Kubernetes
Use the UCP Promenade tool for starting a self-hosted Kubernetes cluster with Kubernetes Helm deployed.
Deploy Drydock Dependencies
There are Helm charts for deploying all of Drydock's dependencies. Use them to prepare your Kubernetes cluster to host Drydock.
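For example, a dependency such as PostgreSQL can be installed from its chart ahead of Drydock itself. The chart path, release name, and namespace below are illustrative assumptions; do the same for the Keystone and MAAS charts you use.
$ helm install ./charts/postgresql --name drydock-postgres --namespace ucp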
Deploy Drydock
Ideally you will use the UCP Armada tool to deploy the Drydock chart with the proper overrides, but you can also use the helm CLI tool directly. The following overrides are needed during deployment (see the example after this list):
- values.labels.node_selector_key: The Kubernetes label assigned to the node you expect to host Drydock
- values.conf.drydock.maasdriver: The URL Drydock will use to access the MAAS API (including the URL path)
- values.images.drydock: The Drydock Docker image to use
- values.images.drydock_db_sync: The Drydock Docker image to use for the database sync job
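Putting the overrides together, a helm CLI deployment might look like the sketch below. The release name, namespace, label value, and MAAS URL are placeholders, and the exact --set paths (shown here without the values. prefix, which denotes the root of values.yaml) should be verified against the chart.
$ helm install ./drydock --name drydock --namespace ucp \
    --set labels.node_selector_key=ucp-control-plane \
    --set conf.drydock.maasdriver=http://maas.example.com/MAAS/api/2.0/ \
    --set images.drydock=drydock:latest \
    --set images.drydock_db_sync=drydock:latest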