Merge "Add some documentation about VirtualBMC and pxe_ipmitool"

This commit is contained in:
Jenkins 2017-04-06 11:05:32 +00:00 committed by Gerrit Code Review
commit 888c4db52d
5 changed files with 179 additions and 25 deletions


@ -40,12 +40,13 @@ three nodes in the ``instackenv.json``, you can split them::
The format of the remaining nodes is TripleO-specific, so we need
to convert it to something Ironic can understand without using
TripleO workflows, e.g. for a node using IPMI::

    jq '.nodes[2:3] | {nodes: map({driver: .pm_type, name: .name,
        driver_info: {ipmi_username: .pm_user, ipmi_address: .pm_addr,
                      ipmi_password: .pm_password},
        properties: {cpus: .cpu, cpu_arch: .arch,
                     local_gb: .disk, memory_mb: .memory},
        ports: .mac | map({address: .})})}' instackenv.json > overcloud-nodes.yaml
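For illustration, here is what the converted file could contain for a single
IPMI-managed node; every value below is made up:

```json
{
  "nodes": [
    {
      "driver": "pxe_ipmitool",
      "name": "compute-1",
      "driver_info": {
        "ipmi_username": "admin",
        "ipmi_address": "172.16.0.1",
        "ipmi_password": "password"
      },
      "properties": {
        "cpus": 4,
        "cpu_arch": "x86_64",
        "local_gb": 40,
        "memory_mb": 8192
      },
      "ports": [
        {"address": "00:0a:f2:88:12:aa"}
      ]
    }
  ]
}
```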
.. note::
@ -53,6 +54,12 @@ TripleO workflows::
TripleO-specific, e.g. they force local boot instead of network boot used
by default in Ironic.
.. note::
If you use :doc:`../environments/virtualbmc`, make sure to follow
:ref:`create-vbmc` for the overcloud nodes as well, and correctly populate
``ipmi_port``. If needed, change ``ipmi_address`` to the address of the
virtual host, which is accessible from controllers.
Then enroll only ``undercloud.json`` in your undercloud::
source stackrc
@ -92,21 +99,27 @@ environment file (``ironic-config.yaml`` in this guide):
.. admonition:: Virtual
:class: virtual
   Starting with the Ocata release, testing on a virtual environment
   requires using :doc:`../environments/virtualbmc`.

.. admonition:: Stable Branches
   :class: stable

   Before the Ocata release, a separate ``pxe_ssh`` driver has to be
   enabled for virtual testing, for example::

       parameter_defaults:
         IronicEnabledDrivers:
           - pxe_ssh

   If you used **tripleo-quickstart** to build your environment, the
   resulting configuration is a bit different::

       parameter_defaults:
         IronicEnabledDrivers:
           - pxe_ssh
         ControllerExtraConfig:
           ironic::drivers::ssh::libvirt_uri: 'qemu:///session'
* ``NovaSchedulerDefaultFilters`` configures available
scheduler filters. For a hybrid deployment it's important to prepend


@ -247,17 +247,29 @@ The most up-to-date information about Ironic drivers is at
http://docs.openstack.org/developer/ironic/deploy/drivers.html, but note that
this page always targets Ironic git master, not the release we use.
The most generic driver is ``pxe_ipmitool``. It uses the `ipmitool`_ utility
to manage a bare metal node, and supports a vast variety of hardware.
.. admonition:: Virtual
:class: virtual
   When combined with :doc:`virtualbmc`, this driver can be used for developing
   and testing TripleO in a virtual environment as well.

.. admonition:: Stable Branch
   :class: stable

   Prior to the Ocata release, a special ``pxe_ssh`` driver was used for
   testing Ironic in a virtual environment. This driver connects to the
   hypervisor to conduct management operations on virtual nodes. For this
   driver, ``pm_addr`` is the hypervisor address, ``pm_user`` is an SSH
   user name for accessing the hypervisor, and ``pm_password`` is a private
   SSH key for accessing the hypervisor. Note that the private key must not
   be encrypted.
.. warning::
   The ``pxe_ssh`` driver is deprecated and ``pxe_ipmitool`` +
   :doc:`virtualbmc` should be used instead.
Ironic also provides specific drivers for some types of hardware:


@ -9,3 +9,4 @@ section contains instructions on how to setup your environments properly.
virtual
baremetal
virtualbmc


@ -0,0 +1,113 @@
VirtualBMC
==========
VirtualBMC_ is a small CLI that allows users to create a virtual BMC_
to manage virtual machines using the IPMI_ protocol, similar to how real
bare metal machines are managed. It can be used to enable testing of
bare metal deployments in completely virtual environments.
.. admonition:: Stable Branch
:class: stable
Ironic also ships a ``pxe_ssh`` driver that can be used for that purpose,
but it has been deprecated and its use is discouraged.
.. warning::
VirtualBMC is not meant for production environments.
Installation
------------
VirtualBMC is available from RDO repositories starting with the Ocata release::
sudo yum install -y python-virtualbmc
It is usually installed and used on the hypervisor where the virtual machines
reside.
.. _create-vbmc:
Creating virtual BMC
--------------------
Every virtual machine needs its own virtual BMC. Create it with::
vbmc add <domain> --port 6230 --username admin --password password
.. note::
   You need to use a different port for each domain. Port numbers
   lower than 1025 require the user to have root privileges on the system.
.. note::
For **tripleo-quickstart** you may have to specify
``--libvirt-uri=qemu:///session``.
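With many overcloud VMs it may be convenient to script the port assignment.
The loop below is only a sketch: it prints the ``vbmc add`` commands to run,
assigning one port per domain; the domain names and the starting port are
assumptions, not fixed values:

```shell
# Sketch only: print a vbmc add command per domain, incrementing the port.
# Domain names and the starting port (6230) are assumptions for this example.
port=6230
for domain in overcloud-node-0 overcloud-node-1 overcloud-node-2; do
    echo "vbmc add $domain --port $port --username admin --password password"
    port=$((port + 1))
done
```

Review the printed commands before running them on the virtual host, and
remember that every domain needs its own unique port.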
Start the virtual BMCs::
vbmc start <domain>
.. warning::
   This step has to be repeated after the virtual host is rebooted.
Test the virtual BMC to see if it's working. For example, to power on
the virtual machine, run::
ipmitool -I lanplus -U admin -P password -H 127.0.0.1 -p 6230 power on
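Other ``ipmitool`` subcommands can be used the same way against the virtual
BMC, e.g. to query the current power state:

```shell
ipmitool -I lanplus -U admin -P password -H 127.0.0.1 -p 6230 power status
```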
Enrolling virtual machines
--------------------------
In the undercloud, populate the ``instackenv.json`` with new "bare metals"
in a similar way to real bare metal machines (see :ref:`instackenv`) with
two exceptions:
* set ``pm_port`` to the port you specified in `Creating virtual BMC`_
* populate ``mac`` field even if you plan to use introspection
For example:
.. code-block:: json
{
"nodes": [
{
"pm_type": "pxe_ipmitool",
"mac": [
"00:0a:f2:88:12:aa"
],
"pm_user": "admin",
"pm_password": "password",
"pm_addr": "172.16.0.1",
"pm_port": "6230",
"name": "compute-0"
}
]
}
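With the file populated, enrollment proceeds the same way as for real bare
metal. On Ocata and newer this can be done with, for example (the exact
commands depend on your release):

```shell
openstack overcloud node import instackenv.json
```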
Migrating from pxe_ssh to VirtualBMC
------------------------------------
If you already have a virtual cloud deployed and want to migrate from the
deprecated ``pxe_ssh`` driver to ``pxe_ipmitool`` using VirtualBMC,
follow `Creating virtual BMC`_, then update the existing nodes to change
their drivers and certain driver properties:
.. code-block:: bash
openstack baremetal node set $NODE_UUID_OR_NAME \
--driver pxe_ipmitool \
--driver-info ipmi_address=<IP address of the virthost> \
--driver-info ipmi_port=<Virtual BMC port> \
--driver-info ipmi_username="admin" \
--driver-info ipmi_password="password"
Then check that everything is populated properly:
.. code-block:: bash
openstack baremetal node validate $NODE_UUID_OR_NAME | grep power
.. _VirtualBMC: https://git.openstack.org/cgit/openstack/virtualbmc
.. _IPMI: https://en.wikipedia.org/wiki/Intelligent_Platform_Management_Interface
.. _BMC: https://en.wikipedia.org/wiki/Baseboard_management_controller


@ -231,6 +231,21 @@ for details.
* Bugs: https://bugs.launchpad.net/ironic-inspector
* Blueprints: https://blueprints.launchpad.net/ironic-inspector
VirtualBMC
^^^^^^^^^^
A helper command to translate IPMI calls into libvirt calls. Used for testing
bare metal provisioning in virtual environments.
**How to contribute**
VirtualBMC uses `tox <https://tox.readthedocs.org/en/latest/>`_ to manage the
development environment in a similar way to Ironic.
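Assuming a local clone of the virtualbmc repository, the usual OpenStack tox
targets apply, for example:

```shell
tox -e py27    # unit tests
tox -e pep8    # style checks
```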
**Useful links**
* Source: https://git.openstack.org/cgit/openstack/virtualbmc
* Bugs: https://bugs.launchpad.net/virtualbmc
Deployment & Orchestration