README cleanups
Miscellaneous cleanups, including:

- Fixed typos
- Consistent markup of literals
- Fixed markup of internal & external hyperlinks
- Reflowed text where appropriate to be more considerate to readers
  viewing the plaintext version
- Consistent capitalization for ironic
- Update supported versions to reflect recent work

Change-Id: I4ef35599d552a19d98c4a51893f443a54307c160
parent 5f41b9a11a
commit 5cff30dc26
README.rst (205 lines changed)
@@ -1,28 +1,29 @@
 Bifrost
 =======

-Bifrost is a set of Ansible playbooks that automates the task of deploying a
-base image onto a set of known hardware using Ironic. It provides modular
-utility for one-off operating system deployment with as few operational requirements
-as reasonably possible.
+Bifrost is a set of Ansible playbooks that automates the task of
+deploying a base image onto a set of known hardware using ironic. It
+provides modular utility for one-off operating system deployment with
+as few operational requirements as reasonably possible.

 This is split into roughly three steps:

-- install:
+- **install**:
   prepare the local environment by downloading and/or building machine images,
   and installing and configuring the necessary services.
-- enroll-dynamic:
+- **enroll-dynamic**:
   take as input a customizable hardware inventory file and enroll the
-  listed hardware with Ironic, configuring each appropriately for deployment
+  listed hardware with ironic, configuring each appropriately for deployment
   with the previously-downloaded images.
-- deploy-dynamic:
-  instruct Ironic to deploy the operating system onto each machine.
+- **deploy-dynamic**:
+  instruct ironic to deploy the operating system onto each machine.

-Supported Operating Systems:
+Supported operating systems:

-* Ubuntu 14.04, 14.10
+* Ubuntu 14.04, 14.10, 15.04
 * Red Hat Enterprise Linux (RHEL) 7
 * CentOS 7
+* Fedora 22

 Pre-install steps
 =================
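
The three steps named above map onto playbook invocations that the rest of
this README documents in detail. As a rough end-to-end sketch only (the
env-setup script path and the inventory location are assumptions, and the
exact flags are covered in the sections below)::

  bash scripts/env-setup.sh
  source /opt/stack/ansible/hacking/env-setup
  cd playbooks
  ansible-playbook -vvvv -i inventory/localhost install.yaml
  export BIFROST_INVENTORY_SOURCE=/tmp/baremetal.json
  ansible-playbook -vvvv -i inventory/bifrost_inventory.py enroll-dynamic.yaml
  ansible-playbook -vvvv -i inventory/bifrost_inventory.py deploy-dynamic.yaml
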
@@ -32,7 +33,8 @@ Installing bifrost on RHEL or CentOS requires a few extra pre-install steps.
 Enable additional repositories (RHEL only)
 ------------------------------------------

-The extras and optional yum repositories must be enabled to satisfy bifrost's dependencies. To check::
+The extras and optional yum repositories must be enabled to satisfy
+bifrost's dependencies. To check::

   sudo yum repolist | grep 'optional\|extras'

@@ -57,7 +59,9 @@ Use the names of the repositories (minus the version and architecture) to enable
 Enable the EPEL repository (RHEL)
 ---------------------------------

-The Extra Packages for Enterprise Linux (EPEL) repository contains some of bifrost's dependencies. To enable it, install the `epel-release` package as follows::
+The Extra Packages for Enterprise Linux (EPEL) repository contains
+some of bifrost's dependencies. To enable it, install the
+``epel-release`` package as follows::

   sudo yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

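
On RHEL, enabling the extras and optional repositories is typically done
through subscription-manager. A sketch, with repository ids that are
illustrative only and should be replaced by the names reported on your
system::

  sudo yum repolist all | grep 'optional\|extras'
  sudo subscription-manager repos --enable rhel-7-server-extras-rpms
  sudo subscription-manager repos --enable rhel-7-server-optional-rpms
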
@@ -82,17 +86,24 @@ installation, or image generation. The baremetal file is geared for steps
 performed on baremetal nodes, such as enrollment, deployment, or any other
 custom playbooks that a user may bolt on to this toolkit.

-- If MySQL is already installed, update mysql_password to match your local installation.
-- Change network_interface to match the interface that will need to service DHCP requests.
-- Change the ironic_db_password which is set by Ansible in MySQL and in Ironic's configuration file.
+- If MySQL is already installed, update ``mysql_password`` to match
+  your local installation.
+- Change ``network_interface`` to match the interface that will need
+  to service DHCP requests.
+- Change the ``ironic_db_password`` which is set by Ansible in MySQL
+  and in ironic's configuration file.

-The install process builds or modifies a disk image to deploy. The following two settings (which are mutually exclusive) allow you to choose if a partition image is used or an image is created with diskimage-builder::
+The install process builds or modifies a disk image to deploy. The
+following two settings (which are mutually exclusive) allow you to
+choose if a partition image is used or an image is created with
+diskimage-builder::

   create_image_via_dib: true
   transform_boot_image: false

-If running behind the proxy. export environment variables http_proxy and https_proxy
-So that ansible lookup plugin checks for proxy set and uses as environment variables.
+If you are running the installation behind a proxy, export the
+environment variables ``http_proxy`` and ``https_proxy`` so that
+Ansible will use these proxy settings.

 Then run::

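
A minimal sketch of the proxy exports described above, with a placeholder
proxy URL::

  export http_proxy=http://proxy.example.com:3128
  export https_proxy=http://proxy.example.com:3128
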
@@ -100,11 +111,11 @@ Then run::
   source /opt/stack/ansible/hacking/env-setup
   cd playbooks

-The second part is an Ansible playbook that installs and configures Ironic
+The second part is an Ansible playbook that installs and configures ironic
 in a stand-alone fashion.

-* Keystone is NOT installed, and Ironic's API is accessible without
-  authentication. It is possible to put basic password auth on Ironic's API by
+* Keystone is NOT installed, and ironic's API is accessible without
+  authentication. It is possible to put basic password auth on ironic's API by
   changing the nginx configuration accordingly.
 * Neutron is NOT installed. Ironic performs static IP injection via
   config-drive.
@@ -113,30 +124,30 @@ in a stand-alone fashion.
   from Nginx.
 * Deployments are performed by the Ironic Python Agent, which as configured
   supports IPMI, iLO, and UCS drivers. AMT driver support is also enabled,
-  however it should only be use for testing as due to a known bug which
+  however it should only be used for testing as due to a known bug which
   can be read about at https://bugs.launchpad.net/ironic/+bug/1454492.
-* By default, installation will build an Ubuntu based image for deployment
+* By default, installation will build an Ubuntu-based image for deployment
   to nodes. This image can be easily customized if so desired.

 The re-execution of the playbook will cause states to be re-asserted. If not
 already present, a number of software packages including MySQL and RabbitMQ
-will be installed on the host. Python code will be re-installed regardless if
+will be installed on the host. Python code will be reinstalled regardless if
 it has changed, RabbitMQ user passwords will be reset, and services will be
 restarted.

 Run::

-  If you have password-less sudo enabled, run:
+  If you have passwordless sudo enabled, run:
     ansible-playbook -vvvv -i inventory/localhost install.yaml
   Otherwise, add -K option to let Ansible prompting for the sudo password:
     ansible-playbook -K -vvvv -i inventory/localhost install.yaml

-With regards to testing, ironic's node cleaning capability is disabled by
+With regard to testing, ironic's node cleaning capability is disabled by
 default as it can be an unexpected surprise for a new user that their test
 node is unusable for however long it takes for the disks to be wiped.

 If you wish to enable cleaning, you can achieve this by passing the option
-"-e cleaning=true" to the command line or executing the command below.::
+``-e cleaning=true`` to the command line or executing the command below.::

   ansible-playbook -K -vvvv -i inventory/localhost install.yaml -e cleaning=true

@@ -144,34 +155,34 @@ After you have performed an installation, you can edit /etc/ironic/ironic.conf
 to enable or disable cleaning as desired, however it is highly encouraged to
 utilize cleaning in any production environment.

-Manual CLI Use
+Manual CLI use
 --------------

-If you wish to utilize Ironic's CLI in no-auth mode, you must set two
+If you wish to utilize ironic's CLI in no-auth mode, you must set two
 environment variables:

-- IRONIC_URL - A URL to the Ironic API, such as http://localhost:6385/
-- OS_AUTH_TOKEN - Any value, such as an empty space, is required to cause the client library to send requests directly to the API.
+- ``IRONIC_URL`` - A URL to the ironic API, such as http://localhost:6385/
+- ``OS_AUTH_TOKEN`` - Any value, such as an empty space, is required to
+  cause the client library to send requests directly to the API.

-For your ease of use, `env-vars` can be sourced to allow the CLI to connect
-to a local Ironic installation operating in noauth mode.
+For your ease of use, ``env-vars`` can be sourced to allow the CLI to connect
+to a local ironic installation operating in noauth mode.

-Hardware Enrollment
+Hardware enrollment
 ===================

-The following requirements are installed during the Install step above:
+The following requirements are installed during the `Installation`_ step above:

 - openstack-infra/shade library
 - openstack-infra/os-client-config

-In order to enroll hardware, you will naturally need an inventory of your
-hardware. When utilizing the dynamic inventory module and accompanying roles
-this can be supplied in one of three ways, all of which ultimately translate
-to JSON data that Ansible parses.
+In order to enroll hardware, you will naturally need an inventory of
+your hardware. When utilizing the dynamic inventory module and
+accompanying roles the inventory can be supplied in one of three ways,
+all of which ultimately translate to JSON data that Ansible parses.

 The original method is to utilize a CSV file. This format is covered below in
-the `Legacy CSV File Format` section. This has a number of limitations, but
+the `Legacy CSV File Format`_ section. This has a number of limitations, but
 does allow a user to bulk load hardware from an inventory list with minimal
 data transformations.

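
A sketch of the no-auth CLI environment described above; the ``ironic
node-list`` call assumes the python-ironicclient CLI of this era::

  export IRONIC_URL=http://localhost:6385/
  export OS_AUTH_TOKEN=" "
  ironic node-list
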
@@ -179,15 +190,15 @@ The newer method is to utilize a JSON or YAML document which the inventory
 parser will convert and provide to Ansible.

 In order to use, you will need to define the environment variable
-`BIFROST_INVENTORY_SOURCE` to equal a file, which then allows you to
-execute Ansible utilizing the bifrost_inventory.py file as the data source.
+``BIFROST_INVENTORY_SOURCE`` to equal a file, which then allows you to
+execute Ansible utilizing the ``bifrost_inventory.py`` file as the data source.

 Conversion from CSV to JSON formats
 -----------------------------------

-The inventory/bifrost_inventory.py program additionally features a mode that
-allows a user to convert a CSV file to the JSON data format utilizing a
-`--convertcsv` command line setting when directly invoked.
+The ``inventory/bifrost_inventory.py`` program additionally features a
+mode that allows a user to convert a CSV file to the JSON data format
+utilizing a ``--convertcsv`` command line setting when directly invoked.

 Example::

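
A sketch of the conversion mode; reading the source file from
``BIFROST_INVENTORY_SOURCE`` and the output path are assumptions based on
the surrounding text::

  export BIFROST_INVENTORY_SOURCE=/tmp/baremetal.csv
  inventory/bifrost_inventory.py --convertcsv >/tmp/baremetal.json
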
@@ -197,11 +208,12 @@ Example::
 JSON file format
 ----------------

-The JSON format closely resembles the data structure that Ironic utilizes
-internally. The name, driver_info, nics, driver, and properties fields are
-directly mapped through to Ironic. This means that the data contained within
-can vary from host to host, such as drivers and their parameters thus allowing
-a mixed hardware environment to be defined in a single file.
+The JSON format closely resembles the data structure that ironic
+utilizes internally. The ``name``, ``driver_info``, ``nics``,
+``driver``, and ``properties`` fields are directly mapped through to
+ironic. This means that the data contained within can vary from host
+to host, such as drivers and their parameters thus allowing a mixed
+hardware environment to be defined in a single file.

 Example::

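
A minimal illustrative inventory entry using the fields named above plus the
UUID and IP address fields from the CSV column list; the ``driver_info``
keys and all values are placeholders, and the authoritative examples are the
files shipped in ``playbooks/inventory/``::

  {
    "testhost": {
      "name": "testhost",
      "uuid": "00000000-0000-0000-0000-000000000001",
      "driver": "agent_ipmitool",
      "driver_info": {
        "power": {
          "ipmi_address": "192.168.1.30",
          "ipmi_username": "admin",
          "ipmi_password": "undefined"
        }
      },
      "nics": [
        {"mac": "00:01:02:03:04:05"}
      ],
      "properties": {
        "cpu_arch": "x86_64",
        "cpus": "2",
        "ram": "4096",
        "disk_size": "20"
      },
      "ipv4_address": "192.168.1.30"
    }
  }
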
@@ -238,12 +250,12 @@ Example::
 The additional power of this format is easy configuration parameter injection,
 which could potentially allow a user to provision different operating system
 images onto different hardware chassis by defining the appropriate settings
-in an "instance_info" variable.
+in an ``instance_info`` variable.

 Examples utilizing JSON and YAML formatting, along host specific variable
-injection can be found in the playbooks/inventory/ folder.
+injection can be found in the ``playbooks/inventory/`` folder.

-Legacy CSV File Format
+Legacy CSV file format
 ----------------------

 The CSV file has the following columns:
@@ -260,10 +272,10 @@ The CSV file has the following columns:
 9. Host UUID
 10. Host or Node name
 11. Host IP Address to be set
-12. ipmi_target_channel - Requires: ipmi_bridging set to single
-13. ipmi_target_address - Requires: ipmi_bridging set to single
-14. ipmi_transit_channel - Requires: ipmi_bridging set to dual
-15. ipmi_transit_address - Requires: ipmi_bridging set to dual
+12. ``ipmi_target_channel`` - Requires: ``ipmi_bridging`` set to single
+13. ``ipmi_target_address`` - Requires: ``ipmi_bridging`` set to single
+14. ``ipmi_transit_channel`` - Requires: ``ipmi_bridging`` set to dual
+15. ``ipmi_transit_address`` - Requires: ``ipmi_bridging`` set to dual
 16. ironic driver

 Example definition::
@@ -275,26 +287,26 @@ although the enrollment and deployment playbooks utilize the model
 of a host per line model in order to process through the entire
 list, as well as reference the specific field items.

-An example file can be found at: playbooks/inventory/baremetal.csv.example
+An example file can be found at: ``playbooks/inventory/baremetal.csv.example``

 How this works?
 ---------------

 Utilizing the dynamic inventory module, enrollment is as simple as setting
-the BIFROST_INVENTORY_SOURCE environment variable to your inventory data
+the ``BIFROST_INVENTORY_SOURCE`` environment variable to your inventory data
 source, and then executing the enrollment playbook.::

   export BIFROST_INVENTORY_SOURCE=/tmp/baremetal.json
   ansible-playbook -vvvv -i inventory/bifrost_inventory.py enroll-dynamic.yaml

 Note that enrollment is a one-time operation. The Ansible module *does not*
-synchronize data for existing nodes. You should use the Ironic CLI to do this
+synchronize data for existing nodes. You should use the ironic CLI to do this
 manually at the moment.

 Additionally, it is important to note that the playbooks for enrollment are
-split into three separate playbooks based up the setting of ipmi_bridging.
+split into three separate playbooks based on the ``ipmi_bridging`` setting.

-Hardware Deployment
+Hardware deployment
 ===================

 How this works?
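
Deployment with the dynamic inventory mirrors the enrollment invocation
above; a sketch, assuming the same inventory source::

  export BIFROST_INVENTORY_SOURCE=/tmp/baremetal.json
  ansible-playbook -vvvv -i inventory/bifrost_inventory.py deploy-dynamic.yaml
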
@@ -313,9 +325,20 @@ To utilize the newer dynamic inventory based deployment::
 Testing with a single command
 =============================

-A simple ``scripts/test-bifrost.sh`` script can be utilized to install pre-requisite software packages, Ansible, and then execute the test-bifrost.yaml playbook in order to provide a single step testing mechanism.
+A simple ``scripts/test-bifrost.sh`` script can be utilized to install
+pre-requisite software packages, Ansible, and then execute the
+test-bifrost.yaml playbook in order to provide a single step testing
+mechanism.

-The playbook utilized by the script, ``playbooks/test-bifrost-dynamic.yaml``, is a single playbook that will create a local virtual machine, save a baremetal.csv file out, and then utilize it to execute the remaining roles. Two additional roles are invoked by this playbook which enables Ansible to connect to the new nodes by adding them to the inventory, and then logging into the remote machine via the user's ssh host key. Once that has successfully occurred, additional roles will unprovision the host(s) and delete them from Ironic.
+The playbook utilized by the script,
+``playbooks/test-bifrost-dynamic.yaml``, is a single playbook that
+will create a local virtual machine, save a baremetal.csv file out,
+and then utilize it to execute the remaining roles. Two additional
+roles are invoked by this playbook which enables Ansible to connect to
+the new nodes by adding them to the inventory, and then logging into
+the remote machine via the user's ssh host key. Once that has
+successfully occurred, additional roles will unprovision the host(s)
+and delete them from ironic.

 Command::

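
The single-command test boils down to invoking the script named above from
the repository root, roughly::

  bash scripts/test-bifrost.sh
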
@@ -323,37 +346,47 @@ Command::

 Note:

-- Cleaning mode is explicitly disabled in the test-bifrost.yaml playbook due to the fact that is an IO intensive operation that can take a great deal of time.
+- Cleaning mode is explicitly disabled in the ``test-bifrost.yaml``
+  playbook due to the fact that is an IO-intensive operation that can
+  take a great deal of time.

-Legacy - Testing with Virtual Machines
+Legacy - testing with virtual machines
 ======================================

 Bifrost supports using virtual machines to emulate the hardware. All of the
 steps mentioned above are mostly the same.

-It is assumed you have an SSH server running on the host machine. The ``agent_ssh``
-driver, used by Ironic with VM testing, will need to use SSH to control the
-virtual machines.
+It is assumed you have an SSH server running on the host machine. The
+``agent_ssh`` driver, used by ironic with VM testing, will need to use
+SSH to control the virtual machines.

-An SSH key is generated for the ``ironic`` user when testing. The ironic conductor
-will use this key to connect to the host machine and run virsh commands.
+An SSH key is generated for the ``ironic`` user when testing. The
+ironic conductor will use this key to connect to the host machine and
+run virsh commands.

-#. Set ``testing`` to *true* in the ``playbooks/inventory/group_vars/localhost`` file.
+#. Set ``testing`` to *true* in the
+   ``playbooks/inventory/group_vars/localhost`` file.
 #. You may need to adjust the value for ``ssh_public_key_path``.
-#. Run the install step, as documented above, however adding "-e testing=true" to the Ansible command line.
-#. Execute the ``ansible-playbook -vvvv -i inventory/localhost test-bifrost-create-vm.yaml`` command to create a test virtual machine.
-#. Set the environment variable of BIFROST_INVENTORY_SOURCE to the path to the csv file, which by default has been written to /tmp/baremetal.csv.
-#. Run the enrollment step, as documented above, using the CSV file you created in the previous step.
+#. Run the install step, as documented above, however adding ``-e
+   testing=true`` to the Ansible command line.
+#. Execute the ``ansible-playbook -vvvv -i inventory/localhost
+   test-bifrost-create-vm.yaml`` command to create a test virtual
+   machine.
+#. Set the environment variable of ``BIFROST_INVENTORY_SOURCE`` to the
+   path to the csv file, which by default has been written to
+   /tmp/baremetal.csv.
+#. Run the enrollment step, as documented above, using the CSV file
+   you created in the previous step.
 #. Run the deployment step, as documented above.

-Deployment and configuration of Operating Systems
+Deployment and configuration of operating systems
 =================================================

 By default, Bifrost deploys a configuration drive which includes the user SSH
 public key, hostname, and the network configuration in the form of
 network_info.json that can be read/parsed by the
-`glean <https://github.com/openstack-infra/glean>` utility. This allows for
-the deployment of Ubuntu, CentOS, Fedora "tenants" on baremetal. This file
+`glean <https://github.com/openstack-infra/glean>`_ utility. This allows for
+the deployment of Ubuntu, CentOS, or Fedora "tenants" on baremetal. This file
 format is not yet supported by Cloud-Init, however it is on track for
 inclusion in cloud-init 2.0.

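
Taken together, the virtual machine testing steps above amount to roughly
the following sequence; flags and paths are as documented earlier, and the
csv path is the stated default::

  ansible-playbook -K -vvvv -i inventory/localhost install.yaml -e testing=true
  ansible-playbook -vvvv -i inventory/localhost test-bifrost-create-vm.yaml
  export BIFROST_INVENTORY_SOURCE=/tmp/baremetal.csv
  ansible-playbook -vvvv -i inventory/bifrost_inventory.py enroll-dynamic.yaml
  ansible-playbook -vvvv -i inventory/bifrost_inventory.py deploy-dynamic.yaml
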
@@ -363,7 +396,7 @@ means that by default, root file systems may not be automatically expanded
 to consume the entire disk, which may, or may not be desirable depending
 upon operational needs. This is dependent upon what base OS image you
 utilize, and if the support is included in that image or not. At present,
-the standard ubuntu cloud image includes cloud-init which will grow the
+the standard Ubuntu cloud image includes cloud-init which will grow the
 root partition, however the ubuntu-minimal image does not include cloud-init
 and thus will not automatically grow the root partition.

@@ -371,15 +404,15 @@ Due to the nature of the design, it would be relatively easy for a user to
 import automatic growth or reconfiguration steps either in the image to be
 deployed, or in post-deployment steps via custom Ansible playbooks.

-Custom IPA Images
+Custom IPA images
 =================

 Bifrost supports the ability for a user to build a custom IPA ramdisk
 utilizing the diskimage-builder element "ironic-agent". In order to utilize
-this feature, the download_ipa setting must be set to "false" and the
+this feature, the ``download_ipa`` setting must be set to ``false`` and the
 create_ipa_image must be set to "true". By default, the playbook will build
 a Debian based IPA image, if a pre-existing IPA image is not present on disk.

 If you wish to include an extra element into the IPA disk image, such as a
-custom hardware manager, you can pass the variable "ipa_extra_dib_elements"
-as a space separated list of elements. This defaults to an emtpy string.
+custom hardware manager, you can pass the variable ``ipa_extra_dib_elements``
+as a space-separated list of elements. This defaults to an empty string.
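
A sketch of an install run that builds a custom IPA ramdisk with an extra
element; the element name is a placeholder for your own::

  ansible-playbook -K -vvvv -i inventory/localhost install.yaml \
      -e download_ipa=false \
      -e create_ipa_image=true \
      -e ipa_extra_dib_elements="my-hardware-manager"
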