Use bifrost for bare metal portion of infra-cloud

In spinning things up initially, it has become apparent that there isn't
actually a ton of value in running a full nova-ironic cloud for the bare
metal, and we can just use the bifrost work to do the simple job of
installing the base OS.

Change-Id: Ie206df90e8dc8ac3f18ffd0656fe3e4a70f398ce
Monty Taylor 2015-07-14 13:01:36 -04:00 committed by Julia Kreger
parent 5f4fba8f45
commit c4130f4eb8


@@ -92,22 +92,22 @@ Management
 * A "Ironic Controller" machine is installed by hand into each site. That
   machine is enrolled into the puppet/ansible infrastructure.
-* An all-in-one one node OpenStack cloud with Ironic as the Nova driver
-  is installed on each Ironic Controller node. The OpenStack Cloud
-  produced by this installation will be referred to as "Ironic Cloud
-  $site". In order to keep things simpler, these do not share anything
-  with the cloud that nodepool will make use of.
+* The "Ironic Controller" will have bifrost installed on it. All of the
+  other machines in that site will be enrolled in the Ironic that bifrost
+  manages. bifrost will be responsible for booting base OS with IP address
+  and ssh key for each machine.
-* Each additional machine in a site will be enrolled into the Ironic Cloud
-  as a bare metal resource.
+* The machines will all be added to a manual ansible inventory file adjacent
+  to the dynamic inventory that ansible currently uses to run puppet. Any
+  metadata that the ansible infrastructure for running puppet needs that
+  would have come from OpenStack infrastructure will simply be put into
+  static ansible group_vars.
-* Each Ironic Cloud $site will be added to the list of available clouds that
-  launch_node.py or the ansible replacement for it can use to spin up long
-  lived servers.
+* The static inventory should be put into puppet so that it is public, with
+  the IPMI passwords in hiera.
 * An OpenStack Cloud with KVM as the hypervisor will be installed using
-  launch_node and the OpenStack puppet modules as per normal infra
-  installation of services.
+  OpenStack puppet modules as per normal infra installation of services.
 * As with all OpenStack services, metrics will be collected in public
   cacti and graphite services. The particular metrics are TBD.
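
The static-inventory-plus-group_vars arrangement this change describes could look roughly like the following sketch. Every file name, group name, hostname, and variable here is an illustrative assumption, not taken from the actual infra repos:

```
# Hypothetical static inventory file (e.g. "inventory/static"), kept
# adjacent to the dynamic inventory that ansible uses for puppet runs.
# Group name and hostnames are made up for illustration.
[baremetal-eastsite]
compute000.eastsite.example.org
compute001.eastsite.example.org

# A matching hypothetical group_vars/baremetal-eastsite.yaml would carry
# the metadata that would otherwise have come from OpenStack, e.g.:
#
#   site: eastsite
#   puppet_environment: production
#
# while the secret half lives in hiera rather than the public repo, e.g.
# a hypothetical hieradata entry:
#
#   ipmi_passwords:
#     compute000.eastsite.example.org: 'not-a-real-password'
```

Keeping the inventory and group_vars public while routing only the IPMI passwords through hiera matches the split the spec calls for: anyone can see what hardware exists and how it is grouped, but the credentials stay private.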