Merge "doc/source/admin fixes part-3"

Zuul 2025-01-31 12:25:13 +00:00 committed by Gerrit Code Review
commit 51d1acebb5
6 changed files with 28 additions and 28 deletions

View File

@@ -4,7 +4,7 @@ Layer 3 or DHCP-less ramdisk booting
Booting nodes via PXE, while universally supported, suffers from one
disadvantage: it requires a direct L2 connectivity between the node and the
control plane for DHCP. Using virtual media it is possible to avoid not only
-the unreliable TFTP protocol, but DHCP altogether.
+the unreliable TFTP protocol but DHCP altogether.
When network data is provided for a node as explained below, the generated
virtual media ISO will also serve as a configdrive_, and the network data will
@@ -42,8 +42,8 @@ When the Bare Metal service is running within OpenStack, no additional
configuration is required - the network configuration will be fetched from the
Network service.
-Alternatively, the user can build and pass network configuration in form of
-a network_data_ JSON to a node via the ``network_data`` field. Node-based
+Alternatively, the user can build and pass network configuration in the form
+of a network_data_ JSON to a node via the ``network_data`` field. Node-based
configuration takes precedence over the configuration generated by the
Network service and also works in standalone mode.
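
The CLI invocation for passing node-level network data is outside this
diff's context; as a sketch (the ``--network-data`` option of the
``baremetal`` client and the file path are assumptions):

.. code-block:: console

   # attach a prepared network_data JSON file to the node
   baremetal node set node-0 --network-data ~/network_data.json
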
@@ -79,7 +79,7 @@ An example network data:
.. note::
Some fields are redundant with the port information. We're looking into
-simplifying the format, but currently all these fields are mandatory.
+simplifying the format, but currently, all these fields are mandatory.
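
The example network data referenced by this hunk is outside the diff
context. As a minimal sketch of the format (field names follow the
network_data_ schema; the MAC and IP values are placeholders):

.. code-block:: json

   {
       "links": [
           {"id": "port-0", "type": "phy",
            "ethernet_mac_address": "52:54:00:aa:bb:cc"}
       ],
       "networks": [
           {"id": "net-0", "link": "port-0", "type": "ipv4",
            "ip_address": "192.0.2.10", "netmask": "255.255.255.0",
            "routes": []}
       ],
       "services": [
           {"type": "dns", "address": "192.0.2.1"}
       ]
   }
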
You'll need the deployed image to support network data, e.g. by pre-installing
cloud-init_ or Glean_ on it (most cloud images have the former). Then you can

View File

@@ -1,6 +1,6 @@
-==========================================================
-Drivers, Hardware Types and Hardware Interfaces for Ironic
-==========================================================
+===========================================================
+Drivers, Hardware Types, and Hardware Interfaces for Ironic
+===========================================================
Generic Interfaces
------------------
@@ -40,7 +40,7 @@ Any hardware interfaces can be specified on enrollment as well::
baremetal node create --driver <hardware type> \
--deploy-interface direct --<other>-interface <other implementation>
-For the remaining interfaces the default value is assigned as described in
+For the remaining interfaces, the default value is assigned as described in
:ref:`hardware_interfaces_defaults`. Both the hardware type and the hardware
interfaces can be changed later via the node update API.
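
As an illustration of the node update API mentioned above (a sketch; the
node name and interface values are placeholders):

.. code-block:: console

   baremetal node set node-0 \
       --deploy-interface direct \
       --boot-interface pxe
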
@@ -69,7 +69,7 @@ not work::
This is because the ``fake-hardware`` hardware type defaults to ``fake``
implementations for some or all interfaces, but the ``ipmi`` hardware type is
-not compatible with them. There are three ways to deal with this situation:
+incompatible with them. There are three ways to deal with this situation:
#. Provide new values for all incompatible interfaces, for example::
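
   The body of this example is outside the diff context; a plausible
   shape, with the interface implementations assumed:

   .. code-block:: console

      baremetal node set node-0 --driver ipmi \
          --boot-interface pxe \
          --deploy-interface direct \
          --management-interface ipmitool \
          --power-interface ipmitool
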
@@ -114,7 +114,7 @@ implementation with the ``ipmi`` and ``redfish`` hardware types. In this case
the Bare Metal service will not change the boot device for you, leaving
the pre-configured boot order.
-For example, in case of the :ref:`pxe-boot`:
+For example, in the case of the :ref:`pxe-boot`:
#. Via any available means configure the boot order on the node as follows:
@@ -124,7 +124,7 @@ For example, in case of the :ref:`pxe-boot`:
If it is not possible to limit network boot to only provisioning NIC,
make sure that no other DHCP/PXE servers are accessible by the node.
-#. Boot from hard drive.
+#. Boot from the hard drive.
#. Make sure the ``noop`` management interface is enabled, for example:
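
   The example body is elided here; a sketch, assuming the standard
   ``enabled_management_interfaces`` option and the corresponding CLI flag:

   .. code-block:: ini

      [DEFAULT]
      enabled_management_interfaces = ipmitool,noop

   .. code-block:: console

      baremetal node set node-0 --management-interface noop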

View File

@@ -8,11 +8,11 @@ It is first booted during in-band inspection or cleaning (whatever happens
first) and is only shut down before rebooting into the final instance.
Depending on the configuration, this mode can save several reboots and is
particularly useful for scenarios where nodes are enrolled, prepared and
-provisioned within a short period of time.
+provisioned within a short time.
.. warning::
Fast track deployment targets standalone use cases and is only tested with
-the ``noop`` networking. The case where inspection, cleaning and
+the ``noop`` networking. The case where inspection, cleaning, and
provisioning networks are different is not supported.
.. note::
@@ -20,7 +20,7 @@ provisioned within a short period of time.
side that may prevent agent heartbeats from being registered.
For example, converting a large image to the raw format may take long enough
-to reach the fast track timeout. In this case, you can either :ref:`use raw
+to reach the fast-track timeout. In this case, you can either :ref:`use raw
images <stream_raw_images>` or move the conversion to the agent side with:
.. code-block:: ini
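
   # (body elided from the diff context; a sketch, assuming the standard
   # option that moves raw-image conversion off the conductor)
   [DEFAULT]
   force_raw_images = False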

View File

@@ -61,7 +61,7 @@ The different attributes of the ``update_firmware`` cleaning step are as follows
"``args``", "Keyword-argument entry (<name>: <value>) being passed to the step"
"``args.firmware_images``", "Ordered list of dictionaries of firmware images to be applied"
-Each firmware image dictionary, is of the form::
+Each firmware image dictionary is of the form::
{
"url": "<URL of firmware image file>",
@@ -101,8 +101,8 @@ Next, construct the JSON for the firmware update cleaning step to be executed.
When launching the firmware update, the JSON may be specified on the command
line directly or in a file. The following example shows one cleaning step that
installs four firmware updates. All except 3rd entry that has explicit
-``source`` added, uses setting from :oslo.config:option:`redfish.firmware_source` to determine
-if and where to stage the files:
+``source`` added, uses the setting from :oslo.config:option:`redfish.firmware_source`
+to determine if and where to stage the files:
.. code-block:: json
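
The JSON body is elided from this diff; a sketch matching the four updates
described above (URLs, checksums, and wait times are placeholders; the
third entry carries the explicit ``source``):

.. code-block:: json

   [{
       "interface": "firmware",
       "step": "update_firmware",
       "args": {
           "firmware_images": [
               {"url": "http://192.0.2.5/bmc.bin", "checksum": "<sha256>", "wait": 300},
               {"url": "http://192.0.2.5/bios.bin", "checksum": "<sha256>"},
               {"url": "http://192.0.2.5/nic.bin", "checksum": "<sha256>", "source": "http"},
               {"url": "http://192.0.2.5/raid.bin", "checksum": "<sha256>"}
           ]
       }
   }]
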
@@ -195,7 +195,7 @@ The different attributes of the ``update`` step are as follows:
"``args``", "Keyword-argument entry (<name>: <value>) being passed to the step"
"``args.settings``", "Ordered list of dictionaries of firmware updates to be applied"
-Each firmware image dictionary, is of the form::
+Each firmware image dictionary is of the form::
{
"component": "The desired component to have the firmware updated, only bios and bmc are currently supported",
@@ -280,7 +280,7 @@ In the following example, the JSON is specified directly on the command line:
'[{"interface": "firmware", "step": "update", "args": {"settings":[{"component": "bios", "url":"http://192.168.0.8:8080/bios.bin"}]}}]'
.. note::
-   For Dell machines you must extract the firmimgFIT.d9 from the iDRAC.exe
+   For Dell machines, you must extract the firmimgFIT.d9 from the iDRAC.exe
This can be done using the command ``7za e iDRAC_<VERSION>.exe``.
.. note::

View File

@@ -6,10 +6,10 @@ administrators can generate a report about the state of running Bare Metal
executables (ironic-api and ironic-conductor). This report is called a Guru
Meditation Report (GMR for short).
GMR provides useful debugging information that can be used to obtain
-an accurate view on the current live state of the system. For example,
+an accurate view of the current live state of the system. For example,
what threads are running, what configuration parameters are in effect,
and more. The eventlet backdoor facility provides an interactive shell
-interface for any eventlet based process, allowing an administrator to
+interface for any eventlet-based process, allowing an administrator to
telnet to a pre-defined port and execute a variety of commands.
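
Not part of this diff, but as a usage sketch: a GMR can typically be
triggered by sending ``SIGUSR2`` to a running process:

.. code-block:: console

   # ask a running conductor for a Guru Meditation Report
   kill -USR2 $(pgrep -f ironic-conductor)
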
Configuration

View File

@@ -39,7 +39,7 @@ stress-ng option schema, are:
to limit the overall runtime and to pick the number of CPUs to stress.
-For instance, in order to limit the time of the CPU burn-in to 10 minutes
+For instance, to limit the time of the CPU burn-in to 10 minutes
do:
.. code-block:: console
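
   # (body elided from the diff; the option name below is an assumption
   # following the agent_burnin_<test>_<stress-ng option> schema)
   baremetal node set --driver-info agent_burnin_cpu_timeout=600 \
       $NODE_NAME_OR_UUID
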
@@ -65,7 +65,7 @@ stress-ng option schema, are:
to limit the overall runtime and to set the fraction of RAM to stress.
-For instance, in order to limit the time of the memory burn-in to 1 hour
+For instance, to limit the time of the memory burn-in to 1 hour
and the amount of RAM to be used to 75% run:
.. code-block:: console
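
   # (body elided; option names assumed per the same schema, mapping to
   # stress-ng's --timeout and --vm-bytes for the vm stressor)
   baremetal node set --driver-info agent_burnin_vm_timeout=3600 \
       --driver-info agent_burnin_vm_vm-bytes=75% \
       $NODE_NAME_OR_UUID
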
@@ -94,7 +94,7 @@ fio option schema, are:
to set the time limit and the number of iterations when going
over the disks.
-For instance, in order to limit the number of loops to 2 set:
+For instance, to limit the number of loops to 2 set:
.. code-block:: console
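
   # (body elided; option name assumed per the fio option schema)
   baremetal node set --driver-info agent_burnin_fio_disk_loops=2 \
       $NODE_NAME_OR_UUID
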
@@ -108,7 +108,7 @@ Then launch the test with:
baremetal node clean --clean-steps '[{"step": "burnin_disk", \
"interface": "deploy"}]' $NODE_NAME_OR_UUID
-In order to launch a parallel SMART self test on all devices after the
+To launch a parallel SMART self-test on all devices after the
disk burn-in (which will fail the step if any of the tests fail), set:
.. code-block:: console
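
   # (body elided; the flag name below is an assumption)
   baremetal node set --driver-info agent_burnin_fio_disk_smart_test=True \
       $NODE_NAME_OR_UUID
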
@@ -119,7 +119,7 @@ disk burn-in (which will fail the step if any of the tests fail), set:
Network burn-in
===============
-Burning in the network needs a little more config, since we need a pair
+Burning in the network needs a little more config since we need a pair
of nodes to perform the test. This pairing can be done either in a static
way, i.e. pairs are defined upfront, or dynamically via a distributed
coordination backend which orchestrates the pair matching. While the
@@ -145,7 +145,7 @@ hostname of the other node to test), like:
Dynamic network burn-in configuration
-------------------------------------
-In order to use dynamic pair matching, a coordination backend is used
+To use dynamic pair matching, a coordination backend is used
via `tooz <https://docs.openstack.org/tooz/latest/>`_. The corresponding
backend URL then needs to be added to the node, e.g. for a Zookeeper
backend it would look similar to:
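
The command body is elided from the diff; a sketch (the driver-info key
name is an assumption, and the Zookeeper address is a placeholder):

.. code-block:: console

   baremetal node set --driver-info \
       agent_burnin_fio_network_pairing_backend_url='zookeeper://192.0.2.7:2181' \
       $NODE_NAME_OR_UUID
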
@@ -209,7 +209,7 @@ performance of the stressed components, keeping this information for
verification or acceptance purposes may be desirable. By default, the
output of the burn-in tools goes to the journal of the Ironic Python
Agent and is therefore sent back as an archive to the conductor. In order
-to consume the output of the burn-in steps more easily, or even in real-time,
+to consume the output of the burn-in steps more easily, or even in real time,
the nodes can be configured to store the output of the individual steps to
files in the ramdisk (from where they can be picked up by a logging pipeline).
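
As a closing sketch of that configuration (the key name is an assumption
following the ``agent_burnin_*`` naming used above):

.. code-block:: console

   baremetal node set --driver-info \
       agent_burnin_cpu_outputfile='/var/log/burnin.cpu' \
       $NODE_NAME_OR_UUID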