From c4130f4eb8964eb2b2ee621a9935fefe2f3a1187 Mon Sep 17 00:00:00 2001
From: Monty Taylor
Date: Tue, 14 Jul 2015 13:01:36 -0400
Subject: [PATCH] Use bifrost for bare metal portion of infra-cloud

In spinning things up initially, it has become apparent that there
isn't actually a ton of value in running a full nova-ironic cloud
for the bare metal, and we can just use the bifrost work to do the
simple job of installing the base OS.

Change-Id: Ie206df90e8dc8ac3f18ffd0656fe3e4a70f398ce
---
 doc/source/infra-cloud.rst | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/doc/source/infra-cloud.rst b/doc/source/infra-cloud.rst
index 14d5e9b17b..625e445bc4 100644
--- a/doc/source/infra-cloud.rst
+++ b/doc/source/infra-cloud.rst
@@ -92,22 +92,22 @@ Management
 * A "Ironic Controller" machine is installed by hand into each site. That
   machine is enrolled into the puppet/ansible infrastructure.
 
-  * An all-in-one one node OpenStack cloud with Ironic as the Nova driver
-    is installed on each Ironic Controller node. The OpenStack Cloud
-    produced by this installation will be referred to as "Ironic Cloud
-    $site". In order to keep things simpler, these do not share anything
-    with the cloud that nodepool will make use of.
+  * The "Ironic Controller" will have bifrost installed on it. All of the
+    other machines in that site will be enrolled in the Ironic that bifrost
+    manages. bifrost will be responsible for booting base OS with IP address
+    and ssh key for each machine.
 
-  * Each additional machine in a site will be enrolled into the Ironic Cloud
-    as a bare metal resource.
+  * The machines will all be added to a manual ansible inventory file adjacent
+    to the dynamic inventory that ansible currently uses to run puppet. Any
+    metadata that the ansible infrastructure for running puppet needs that
+    would have come from OpenStack infrastructure will simply be put into
+    static ansible group_vars.
 
-  * Each Ironic Cloud $site will be added to the list of available clouds that
-    launch_node.py or the ansible replacement for it can use to spin up long
-    lived servers.
+  * The static inventory should be put into puppet so that it is public, with
+    the IPMI passwords in hiera.
 
 * An OpenStack Cloud with KVM as the hypervisor will be installed using
-  launch_node and the OpenStack puppet modules as per normal infra
-  installation of services.
+  OpenStack puppet modules as per normal infra installation of services.
 
 * As with all OpenStack services, metrics will be collected in public cacti
   and graphite services. The particular metrics are TBD.
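
The static inventory plus group_vars arrangement this patch describes could be sketched roughly as below. This is an illustrative assumption, not taken from the actual infra repos: the group name, hostnames, and variable are all hypothetical, and the real layout would be whatever the puppet/ansible infrastructure already uses.

```ini
# Hypothetical static inventory for one site, kept adjacent to the
# dynamic inventory that ansible uses to run puppet.
# Hostnames and the group name are made up for illustration.
[baremetal-site1]
compute001.site1.example.com
compute002.site1.example.com

[baremetal-site1:vars]
# Metadata that would otherwise have come from OpenStack goes into
# static group_vars like this; secrets such as the IPMI passwords
# would live in hiera rather than in this public file.
ansible_ssh_user=root
```

Because the inventory itself carries no secrets, it can be published via puppet as the patch suggests, with only the IPMI credentials resolved out of hiera at run time.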