Operation

Nodepool has two components which run as daemons. The nodepool-builder daemon is responsible for building diskimages and uploading them to providers, and the nodepoold daemon is responsible for launching and deleting nodes.

Both daemons periodically re-read their configuration file while running, so images and providers may be added or removed, and the configuration otherwise altered, without a restart.

Nodepool-builder

The nodepool-builder daemon builds and uploads images to providers. It may run on the same host as the main nodepool daemon, or on a separate one. Multiple instances of nodepool-builder may be run on the same or separate hosts in order to spread image builds across many machines, or to provide high availability or redundancy. However, since nodepool-builder allows the number of both build and upload threads to be specified, it is usually not advantageous to run more than a single instance on one machine. Note that while diskimage-builder (which is responsible for building the underlying images) generally supports executing multiple builds on a single machine simultaneously, some of the elements it uses may not. To be safe, it is recommended to run a single instance of nodepool-builder on a machine, and to configure that instance to run only a single build thread (the default).
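
For example, a conservative invocation pins the thread counts explicitly. This is a sketch: the --build-workers and --upload-workers flag names are assumptions, to be confirmed with nodepool-builder --help:

# a single build thread (the default) with a few upload threads
nodepool-builder --build-workers 1 --upload-workers 4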

Nodepoold

The main nodepool daemon is named nodepoold and is responsible for launching instances from the images created and uploaded by nodepool-builder.

When a new image is created and uploaded, nodepoold will immediately start using it when launching nodes (Nodepool always uses the most recent image in the ready state for a given provider). Nodepool deletes any image that is neither the most recent nor the second most recent ready image; in other words, in addition to the current image, Nodepool always keeps the previous image around. This way, if you find that a newly created image is problematic, you may simply delete it and Nodepool will revert to using the previous image.
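
For example, reverting to the previous image might look like the following sketch, where <id> is a placeholder taken from the image-list output (confirm the exact arguments with nodepool image-delete --help):

# find the id of the problematic image
nodepool image-list
# delete it; Nodepool reverts to the previous ready image
nodepool image-delete <id>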

Daemon usage

To start the main Nodepool daemon, run nodepoold. Its command line options are described by:

nodepoold --help

To start the nodepool-builder daemon, run nodepool-builder. Its command line options are described by:

nodepool-builder --help

To stop a daemon, send SIGINT to the process.
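
For example, assuming the process can be found by name with pgrep -f:

# gracefully stop the main daemon
kill -INT $(pgrep -f nodepoold)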

When nodepoold receives SIGUSR2, it will dump a stack trace for each running thread into its debug log; this is useful for tracking down deadlocked or otherwise slow threads. When yappi (Yet Another Python Profiler) is available, additional function and thread statistics are emitted as well. The first SIGUSR2 enables yappi; the second dumps the information collected, resets all yappi state, and stops profiling. This is done to minimize the impact of yappi on a running system.
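
A profiling session might therefore look like the following sketch (again assuming the process is found with pgrep -f):

# first signal: enable yappi profiling
kill -USR2 $(pgrep -f nodepoold)
# ...let the daemon run long enough to collect useful data...
# second signal: dump the collected stats to the debug log,
# reset yappi state and stop profiling
kill -USR2 $(pgrep -f nodepoold)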

Metadata

When Nodepool creates instances, it will assign the following nova metadata (an illustrative example follows this list):

groups

A comma-separated list containing the name of the image and the name of the provider. This may be used by the Ansible OpenStack inventory plugin.

nodepool_image_name

The name of the image as a string.

nodepool_provider_name

The name of the provider as a string.

nodepool_node_id

The nodepool id of the node as an integer.
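
As an illustration, the metadata on a server record might look like the following (the image, provider and id values are invented):

groups: ubuntu-trusty,rax-ord
nodepool_image_name: ubuntu-trusty
nodepool_provider_name: rax-ord
nodepool_node_id: 123456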

Command Line Tools

Usage

The general options that apply to all subcommands are:

nodepool --help

The following subcommands deal with nodepool images:

dib-image-list

nodepool dib-image-list --help

image-list

nodepool image-list --help

image-build

nodepool image-build --help

dib-image-delete

nodepool dib-image-delete --help

image-delete

nodepool image-delete --help

The following subcommands deal with nodepool nodes:

list

nodepool list --help

hold

nodepool hold --help

delete

nodepool delete --help

If Nodepool's database gets out of sync with reality, the following commands can help identify compute instances or images that are unknown to Nodepool:

alien-list

nodepool alien-list --help

alien-image-list

nodepool alien-image-list --help

In the case that a job is failing randomly for an unknown reason, it may be necessary to instruct Nodepool to automatically hold a node on which that job has failed. To do so, use the job-create command to specify the job name and how many failed nodes should be held (see the example after this list). When debugging is complete, use job-delete to disable the feature.

job-create

nodepool job-create --help

job-list

nodepool job-list --help

job-delete

nodepool job-delete --help
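
For example, to hold up to two nodes on which a hypothetical job named gate-example-job has failed (the --hold-on-failure flag name is an assumption; confirm it with nodepool job-create --help):

nodepool job-create gate-example-job --hold-on-failure 2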

Removing a Provider

To remove a provider, remove all of the images from that provider's configuration (and remove all instances of that provider from any labels) and set that provider's max-servers to -1. This will instruct Nodepool to delete any images uploaded to that provider, refrain from uploading new ones, and stop booting new nodes on the provider. You can then let the nodes go through their normal lifecycle. Once all nodes have been deleted, you can remove the provider's configuration from nodepool entirely (though leaving it in this state is effectively the same, and makes it easy to turn the provider back on).

If urgency is required, you can delete the nodes directly instead of waiting for them to go through their normal lifecycle, but the effect is the same.
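
As a sketch, a drained provider stanza might look like the following (the provider name is invented and other required provider settings are elided):

providers:
  - name: example-provider
    # other provider settings (cloud credentials, networks, etc.) elided
    max-servers: -1
    # all images removed from the provider
    images: []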