Clean up RAID documentation

* Use more copy-paste friendly indentation in the examples
* Use subheadings for properties
* Render JSON examples as JSON
* Remove explicit API version from CLI, we've been defaulting
  to latest for several releases.
* Small fixes

Change-Id: I1cae6e9b4ff124e3404bd55638bc77bdf3465fe0
Dmitry Tantsur 2019-08-07 18:26:22 +02:00
parent 71b7441b78
commit 7a3d9a664e


@ -60,13 +60,11 @@ as the key. The value for the ``logical_disks`` is a list of JSON
dictionaries. It looks like::

    {
        "logical_disks": [
            {<desired properties of logical disk 1>},
            {<desired properties of logical disk 2>},
            ...
        ]
    }
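For illustration only (this snippet is not part of ironic), the structure above can be built and serialized in Python before being passed to the API or CLI:

```python
import json

# A minimal sketch: build the ``target_raid_config`` structure shown
# above programmatically. The property values here are examples only.
target_raid_config = {
    "logical_disks": [
        # Each entry holds the desired properties of one logical disk.
        {"size_gb": 100, "raid_level": "5", "is_root_volume": True},
        {"size_gb": "MAX", "raid_level": "0"},
    ]
}

# Serialize to the JSON form that the ironic API and CLI accept.
print(json.dumps(target_raid_config, indent=4))
```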
If the ``target_raid_config`` is an empty dictionary, it unsets the value of
@ -76,76 +74,74 @@ done on the node.
Each logical disk dictionary contains the desired properties of a logical
disk supported by the hardware type. These properties are discoverable by::

    openstack baremetal driver raid property list <driver name>

Mandatory properties
^^^^^^^^^^^^^^^^^^^^

These properties must be specified for each logical
disk and have no default values:

- ``size_gb`` - Size (Integer) of the logical disk to be created in GiB.
  ``MAX`` may be specified if the logical disk should use all of the
  remaining space available. This can be used only when backing physical
  disks are specified (see below).

- ``raid_level`` - RAID level for the logical disk. Ironic supports the
  following RAID levels: 0, 1, 2, 5, 6, 1+0, 5+0, 6+0.

Optional properties
^^^^^^^^^^^^^^^^^^^

These properties have default values and they may be overridden in the
specification of any logical disk.

- ``volume_name`` - Name of the volume. Should be unique within the Node.
  If not specified, the volume name will be auto-generated.

- ``is_root_volume`` - Set to ``true`` if this is the root volume. At
  most one logical disk can have this set to ``true``; the other
  logical disks must have this set to ``false``. The
  ``root device hint`` will be saved, if the RAID interface is capable of
  retrieving it. This is ``false`` by default.

Backing physical disk hints
^^^^^^^^^^^^^^^^^^^^^^^^^^^

These hints are specified for each logical disk to let Ironic find the desired
disks for RAID configuration. This is machine-independent information. This
serves the use-case where the operator doesn't want to provide individual
details for each bare metal node.

- ``share_physical_disks`` - Set to ``true`` if this logical disk can
  share physical disks with other logical disks. The default value is
  ``false``.

- ``disk_type`` - ``hdd`` or ``ssd``. If this is not specified, disk type
  will not be a criterion to find backing physical disks.

- ``interface_type`` - ``sata``, ``scsi`` or ``sas``. If this is not
  specified, interface type will not be a criterion to
  find backing physical disks.

- ``number_of_physical_disks`` - Integer, number of disks to use for the
  logical disk. Defaults to the minimum number of disks required for the
  particular RAID level.

Backing physical disks
^^^^^^^^^^^^^^^^^^^^^^

This is the actual machine-dependent information. This is suitable for
environments where the operator wants to automate the selection of physical
disks with a 3rd-party tool based on a wider range of attributes
(e.g. S.M.A.R.T. status, physical location). The values for these properties
are hardware dependent.

- ``controller`` - The name of the controller as read by the RAID interface.
  In order to trigger the setup of a Software RAID via the Ironic Python
  Agent, the value of this property needs to be set to ``software``.

- ``physical_disks`` - A list of physical disks to use as read by the
  RAID interface.

.. note::
    If properties from both "Backing physical disk hints" or
@ -160,97 +156,106 @@ Examples for ``target_raid_config``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
*Example 1*. Single RAID disk of RAID level 5 with all of the space
available. Make this the root volume to which Ironic deploys the image:

.. code-block:: json

    {
        "logical_disks": [
            {
                "size_gb": "MAX",
                "raid_level": "5",
                "is_root_volume": true
            }
        ]
    }
*Example 2*. Two RAID disks. One with RAID level 5 of 100 GiB, made the
root volume and backed by SSDs. Another with RAID level 1 of 500 GiB,
backed by HDDs:

.. code-block:: json

    {
        "logical_disks": [
            {
                "size_gb": 100,
                "raid_level": "5",
                "is_root_volume": true,
                "disk_type": "ssd"
            },
            {
                "size_gb": 500,
                "raid_level": "1",
                "disk_type": "hdd"
            }
        ]
    }
*Example 3*. Single RAID disk. I know which disks and controller to use:

.. code-block:: json

    {
        "logical_disks": [
            {
                "size_gb": 100,
                "raid_level": "5",
                "controller": "Smart Array P822 in Slot 3",
                "physical_disks": ["6I:1:5", "6I:1:6", "6I:1:7"],
                "is_root_volume": true
            }
        ]
    }
*Example 4*. Using backing physical disks:

.. code-block:: json

    {
        "logical_disks": [
            {
                "size_gb": 50,
                "raid_level": "1+0",
                "controller": "RAID.Integrated.1-1",
                "volume_name": "root_volume",
                "is_root_volume": true,
                "physical_disks": [
                    "Disk.Bay.0:Encl.Int.0-1:RAID.Integrated.1-1",
                    "Disk.Bay.1:Encl.Int.0-1:RAID.Integrated.1-1"
                ]
            },
            {
                "size_gb": 100,
                "raid_level": "5",
                "controller": "RAID.Integrated.1-1",
                "volume_name": "data_volume",
                "physical_disks": [
                    "Disk.Bay.2:Encl.Int.0-1:RAID.Integrated.1-1",
                    "Disk.Bay.3:Encl.Int.0-1:RAID.Integrated.1-1",
                    "Disk.Bay.4:Encl.Int.0-1:RAID.Integrated.1-1"
                ]
            }
        ]
    }
*Example 5*. Software RAID with two RAID devices:

.. code-block:: json

    {
        "logical_disks": [
            {
                "size_gb": 100,
                "raid_level": "1",
                "controller": "software"
            },
            {
                "size_gb": "MAX",
                "raid_level": "0",
                "controller": "software"
            }
        ]
    }
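The rules above (mandatory ``size_gb`` and ``raid_level``, at most one root volume) can be checked with a short validator sketch. This helper is hypothetical and not part of ironic; the API performs its own validation:

```python
# A hypothetical validator sketch (not part of ironic): check a
# ``target_raid_config`` dictionary against the rules described above.
SUPPORTED_RAID_LEVELS = {"0", "1", "2", "5", "6", "1+0", "5+0", "6+0"}

def validate_target_raid_config(config):
    """Return a list of problems found in ``config`` (empty if OK)."""
    problems = []
    disks = config.get("logical_disks", [])
    if not disks:
        problems.append("logical_disks must be a non-empty list")
    root_volumes = 0
    for i, disk in enumerate(disks):
        # Mandatory properties have no defaults.
        if "size_gb" not in disk:
            problems.append("disk %d: size_gb is mandatory" % i)
        elif disk["size_gb"] != "MAX" and not isinstance(disk["size_gb"], int):
            problems.append("disk %d: size_gb must be an integer or 'MAX'" % i)
        if disk.get("raid_level") not in SUPPORTED_RAID_LEVELS:
            problems.append("disk %d: unsupported raid_level" % i)
        if disk.get("is_root_volume"):
            root_volumes += 1
    # At most one logical disk may be the root volume.
    if root_volumes > 1:
        problems.append("only one logical disk may set is_root_volume")
    return problems

# Example 1 from above passes the checks:
example_1 = {"logical_disks": [
    {"size_gb": "MAX", "raid_level": "5", "is_root_volume": True}]}
print(validate_target_raid_config(example_1))  # []
```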
Current RAID configuration
@ -265,7 +270,7 @@ physical disk found on the bare metal node.
To get the current RAID configuration::

    openstack baremetal node show <node-uuid-or-name>
Workflow
========
@ -286,14 +291,14 @@ Workflow
    openstack baremetal node set <node-uuid-or-name> \
        --target-raid-config <JSON file containing target RAID configuration>

  The CLI command can accept the input from standard input also::

    openstack baremetal node set <node-uuid-or-name> \
        --target-raid-config -

* Create a JSON file with the RAID clean steps for manual cleaning. Add other
  clean steps as desired::

    [{
        "interface": "raid",
        "step": "delete_configuration"
@ -347,8 +352,20 @@ There are certain limitations to be aware of:
in case of a disk failure.
* Building RAID will fail if the target disks are already partitioned. Wipe the
  disks using e.g. the ``erase_devices_metadata`` clean step before building
  RAID::

    [{
        "interface": "raid",
        "step": "delete_configuration"
    },
    {
        "interface": "deploy",
        "step": "erase_devices_metadata"
    },
    {
        "interface": "raid",
        "step": "create_configuration"
    }]
* If local boot is going to be used, the final instance image must have the
  ``mdadm`` utility installed and needs to be able to detect software RAID
@ -367,7 +384,7 @@ Using RAID in nova flavor for scheduling
The operator can specify the ``raid_level`` capability in a nova flavor for a
node to be selected for scheduling::

    openstack flavor set my-baremetal-flavor --property capabilities:raid_level="1+0"
Developer documentation
=======================