Minor docs changes for better readability and consistency

Change-Id: I43a281231da6f254528c52961805ceef23358aa1
Mahnoor Asghar 2024-10-16 05:52:40 -04:00 committed by Riccardo Pittau
parent a94f595c94
commit bcd694e90c
10 changed files with 221 additions and 236 deletions

View File

@ -2,25 +2,24 @@
Redfish development tools
=========================
This is a set of simple simulation tools aiming at supporting the
development and testing of the Redfish protocol implementations and,
in particular, Sushy library (https://docs.openstack.org/sushy/). It
is not designed for use outside of development and testing environments.
Please do not run sushy-tools in a production environment of any kind.
This is a set of simple simulation tools aimed at supporting the development and
testing of the Redfish protocol implementations and, in particular, the Sushy
library (https://docs.openstack.org/sushy/). It is not designed for use outside
of development and testing environments. Please do not run sushy-tools in a
production environment of any kind.
The package ships two simulators - static Redfish responder and
virtual Redfish BMC that is backed by libvirt or OpenStack cloud.
The package ships two simulators - the static Redfish responder and the virtual
Redfish BMC (which is backed by libvirt or OpenStack cloud).
The static Redfish responder is a simple REST API server which
responds the same things to client queries. It is effectively
read-only.
The static Redfish responder is a simple REST API server which responds with the
same things to client queries. It is effectively read-only.
The virtual Redfish BMC resembles the real Redfish-controlled bare-metal
machine to some extent. Some client queries are translated to commands that
actually control VM instances simulating bare metal hardware. However some
of the Redfish commands just return static content never touching the
virtualization backend and, for that matter, virtual Redfish BMC is similar
to the static Redfish responder.
The virtual Redfish BMC resembles the real Redfish-controlled bare metal machine
to some extent. Some client queries are translated to commands that actually
control VM instances simulating bare metal hardware. However, some of the
Redfish commands just return static content, never touching the virtualization
backend and in this regard, the virtual Redfish BMC is similar to the static
Redfish responder.
* Free software: Apache license
* Documentation: https://docs.openstack.org/sushy-tools

View File

@ -1,4 +1,4 @@
# sushy emulator configuration file build on top of Flask application
# sushy emulator configuration file built on top of Flask application
# configuration infrastructure: http://flask.pocoo.org/docs/config/
# Listen on all local IP interfaces
@ -25,18 +25,16 @@ SUSHY_EMULATOR_IRONIC_CLOUD = None
# The libvirt URI to use. This option enables libvirt driver.
SUSHY_EMULATOR_LIBVIRT_URI = u'qemu:///system'
# Instruct the libvirt driver to ignore any instructions to
# set the boot device. Allowing the UEFI firmware to instead
# rely on the EFI Boot Manager
# Note: This sets the legacy boot element to dev="fd"
# and relies on the floppy not existing, it likely won't work
# your VM has a floppy drive.
# Instruct the libvirt driver to ignore any instructions to set the boot device,
# allowing the UEFI firmware to instead rely on the EFI Boot Manager.
# Note: This sets the legacy boot element to dev="fd" and relies on the floppy
# not existing. It likely won't work if your VM has a floppy drive.
SUSHY_EMULATOR_IGNORE_BOOT_DEVICE = False
# The map of firmware loaders dependent on the boot mode and
# system architecture. Ideally the x86_64 loader will be capable
# of secure boot or not based on the chosen nvram.
# The map of firmware loaders dependent on the boot mode and system
# architecture. Ideally the x86_64 loader will be capable of secure boot or not
# based on the chosen nvram.
SUSHY_EMULATOR_BOOT_LOADER_MAP = {
'UEFI': {
'x86_64': u'/usr/share/OVMF/OVMF_CODE.secboot.fd',
@ -52,14 +50,13 @@ SUSHY_EMULATOR_BOOT_LOADER_MAP = {
SUSHY_EMULATOR_SECURE_BOOT_ENABLED_NVRAM = '/usr/share/OVMF/OVMF_VARS.secboot.fd'
SUSHY_EMULATOR_SECURE_BOOT_DISABLED_NVRAM = '/usr/share/OVMF/OVMF_VARS.fd'
# This map contains statically configured Redfish Chassis linked
# up with the Systems and Managers enclosed into this Chassis.
# This map contains statically configured Redfish Chassis linked up with the
# Systems and Managers enclosed into this Chassis.
#
# The first chassis in the list will contain all other resources.
#
# If this map is not present in the configuration, a single default
# Chassis is configured automatically to enclose all available Systems
# and Managers.
# If this map is not present in the configuration, a single default Chassis is
# configured automatically to enclose all available Systems and Managers.
SUSHY_EMULATOR_CHASSIS = [
{
u'Id': u'Chassis',
@ -68,23 +65,22 @@ SUSHY_EMULATOR_CHASSIS = [
}
]
# This map contains statically configured Redfish IndicatorLED
# resource state ('Lit', 'Off', 'Blinking') keyed by UUIDs of
# System and Chassis resources.
# This map contains statically configured Redfish IndicatorLED resource state
# ('Lit', 'Off', 'Blinking'), keyed by UUIDs of System and Chassis resources.
#
# If this map is not present in the configuration, each
# System and Chassis will have their IndicatorLED `Lit` by default.
# If this map is not present in the configuration, each System and Chassis will
# have their IndicatorLED `Lit` by default.
#
# Redfish client can change IndicatorLED state. The new state
# is volatile, i.e. it's maintained in process memory.
# The Redfish client can change IndicatorLED state. The new state is volatile,
# i.e. it's maintained in process memory.
SUSHY_EMULATOR_INDICATOR_LEDS = {
# u'48295861-2522-3561-6729-621118518810': u'Blinking'
}
# This map contains statically configured virtual media resources.
# These devices ('Cd', 'Floppy', 'USBStick') will be exposed by the
# Manager(s) and possibly used by the System(s) if system emulation
# backend supports boot image configuration.
# These devices ('Cd', 'Floppy', 'USBStick') will be exposed by the Manager(s)
# and possibly used by the System(s) if the system emulation backend supports
# boot image configuration.
#
# This value is ignored by the OpenStack driver, which only supports the 'Cd'
# device. If this map is not present in the configuration, the following
@ -106,12 +102,12 @@ SUSHY_EMULATOR_VMEDIA_DEVICES = {
}
}
# Instruct the virtual media insertion not to verify the SSL certificate
# when retrieving the image.
# Instruct the virtual media insertion to not verify the SSL certificate when
# retrieving the image.
SUSHY_EMULATOR_VMEDIA_VERIFY_SSL = False
# This map contains statically configured Redfish Storage resource linked
# up with the Systems resource, keyed by the UUIDs of the Systems.
# This map contains statically configured Redfish Storage resources linked up
# with the Systems resources, keyed by the UUIDs of the Systems.
SUSHY_EMULATOR_STORAGE = {
"da69abcc-dae0-4913-9a7b-d344043097c0": [
{
@ -131,10 +127,10 @@ SUSHY_EMULATOR_STORAGE = {
]
}
# This map contains statically configured Redfish Drives resource. The Drive
# This map contains statically configured Redfish Drives resources. The Drive
# objects are keyed in a composite fashion using a tuple of the form
# (System_UUID, Storage_ID) referring to the UUID of the System and Id of the
# Storage resource, respectively, to which the drive belongs.
# Storage resource, respectively, to which the Drive belongs.
SUSHY_EMULATOR_DRIVES = {
("da69abcc-dae0-4913-9a7b-d344043097c0", "1"): [
{
@ -146,20 +142,20 @@ SUSHY_EMULATOR_DRIVES = {
]
}
# This map contains dynamically configured Redfish Volume resource backed
# by the libvirt virtualization backend of the dynamic Redfish emulator.
# The Volume objects are keyed in a composite fashion using a tuple of the
# form (System_UUID, Storage_ID) referring to the UUID of the System and ID
# of the Storage resource, respectively, to which the Volume belongs.
# This map contains dynamically configured Redfish Volume resources backed by
# the libvirt virtualization backend of the dynamic Redfish emulator.
# The Volume objects are keyed in a composite fashion using a tuple of the form
# (System_UUID, Storage_ID) referring to the UUID of the System and ID of the
# Storage resource, respectively, to which the Volume belongs.
#
# Only the volumes specified in the map or created via a POST request are
# Only the Volumes specified in the map or created via a POST request are
# allowed to be emulated upon by the emulator. Volumes other than these can
# neither be listed nor deleted.
#
# The Volumes from map missing in the libvirt backend will be created
# The Volumes in the map missing from the libvirt backend will be created
# dynamically in the pool name specified (provided the pool exists in the
# backend). If the pool name is not specified, the volume will be created
# automatically in pool named 'default'.
# backend). If the pool name is not specified, the Volume will be created
# automatically in a pool named 'default'.
SUSHY_EMULATOR_VOLUMES = {
('da69abcc-dae0-4913-9a7b-d344043097c0', '1'): [
{

View File

@ -2,15 +2,14 @@
Configuring emulators
=====================
Running emulators in background
-------------------------------
Running emulators in the background
-----------------------------------
The emulators run as interactive processes attached to the
terminal by default. You can create a systemd service to run the
emulators in background.
For each emulator create a systemd unit file and
update full path to ``sushy-static`` or ``sushy-emulator`` binary and
adjust arguments as necessary, for example::
The emulators run as interactive processes attached to the terminal by default.
You can create systemd services to run the emulators in the background.
For each emulator, create a systemd unit file, update ``<full-path>`` to point to
the ``sushy-static`` or ``sushy-emulator`` binary, and adjust the arguments as
necessary, for example::
[Unit]
Description=Sushy Libvirt emulator
@ -22,9 +21,9 @@ adjust arguments as necessary, for example::
StandardOutput=syslog
StandardError=syslog
If you want to run emulators with different configuration, for example,
``sushy-static`` emulator with different mockup files, then create a new
systemd unit file.
If you want to run the emulators with different configurations, for example, the
``sushy-static`` emulator with different mockup files, then create a new systemd
unit file.
You can also use ``gunicorn`` to run ``sushy-emulator``, for example::
@ -34,15 +33,15 @@ Using configuration file
------------------------
Besides command-line options, `sushy-emulator` can be configured via a
configuration file. The tool uses Flask application
`configuration infrastructure <http://flask.pocoo.org/docs/config/>`_,
emulator-specific configuration options are prefixed with **SUSHY_EMULATOR_**
to make sure they won't collide with Flask's own configuration options.
configuration file. This tool uses the Flask application
`configuration infrastructure <http://flask.pocoo.org/docs/config/>`_.
Emulator-specific configuration options are prefixed with **SUSHY_EMULATOR_**
to make sure that they don't collide with Flask's own configuration options.
The configuration file itself can be specified through the
`SUSHY_EMULATOR_CONFIG` environment variable.
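For illustration, the standard Flask pattern for loading configuration from a
file named in an environment variable looks like the following (a minimal
sketch, not the emulator's actual startup code):

.. code-block:: python

    import os

    import flask

    app = flask.Flask(__name__)

    # Export SUSHY_EMULATOR_CONFIG=/path/to/emulator.conf before starting the
    # application; Flask then executes the file as Python configuration.
    if os.environ.get('SUSHY_EMULATOR_CONFIG'):
        app.config.from_envvar('SUSHY_EMULATOR_CONFIG')

    print(app.config.get('SUSHY_EMULATOR_LIBVIRT_URI'))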
The full list of supported options and their meanings could be found in
the sample configuration file:
The full list of supported options and their meanings can be found in the sample
configuration file:
.. literalinclude:: emulator.conf

View File

@ -9,7 +9,7 @@ Contributing
Cloning the sushy-tools repository
++++++++++++++++++++++++++++++++++
If you haven't already, sushy-tools source code should be pulled directly
If you haven't already, the sushy-tools source code should be pulled directly
from git.
.. code-block:: bash

View File

@ -16,17 +16,17 @@ Or, if you have virtualenvwrapper installed:
$ mkvirtualenv sushy-tools
$ pip install sushy-tools
The *Virtual Redfish BMC* tool relies upon one or more hypervisors to mimic
bare metal nodes. Depending on the virtualization backend you are planning
to use, certain third-party dependencies should also be installed.
The *Virtual Redfish BMC* tool relies upon one or more hypervisors to mimic bare
metal nodes. Depending on the virtualization backend you are planning to use,
certain third-party dependencies should also be installed.
The dependencies for the virtualization backends that should be installed
for the corresponding drivers to become operational are:
The dependencies for the virtualization backends that should be installed for
the corresponding drivers to become operational are:
* `libvirt-python` for the libvirt driver
* `openstacksdk` for the nova driver
.. note::
The dependencies for at least one virtualization backend should be
satisfied to have *Virtual Redfish BMC* emulator operational.
The dependencies for at least one virtualization backend should be satisfied
to have the *Virtual Redfish BMC* emulator operational.
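As a quick sanity check that the libvirt backend dependency is usable, the
following sketch connects to a local hypervisor (assuming the default
``qemu:///system`` URI):

.. code-block:: python

    import libvirt  # provided by the libvirt-python package

    # Open the same URI the emulator will use and list the defined domains.
    conn = libvirt.open('qemu:///system')
    print([domain.name() for domain in conn.listAllDomains()])
    conn.close()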

View File

@ -2,11 +2,10 @@
Virtual Redfish BMC
===================
The Virtual Redfish BMC is functionally similar to the
`Virtual BMC <https://opendev.org/openstack/virtualbmc>`_ tool
except that the frontend protocol is Redfish rather than IPMI. The Redfish
commands coming from the client are handled by one or more resource-specific
drivers.
The Virtual Redfish BMC emulator is functionally similar to the
`Virtual BMC <https://opendev.org/openstack/virtualbmc>`_ tool, except that the
frontend protocol is Redfish rather than IPMI. The Redfish commands coming from
the client are handled by one or more resource-specific drivers.
Feature sets
------------
@ -24,20 +23,19 @@ Supported feature sets are:
Systems resource
----------------
For *Systems* resource, emulator maintains two drivers relying on
a virtualization backend to emulate bare metal machines by means of
virtual machines. In addition, there is a fake driver used to mock
bare metal machines.
For the *Systems* resource, the emulator maintains two drivers relying on a
virtualization backend to emulate bare metal machines by means of virtual
machines. In addition, there is a fake driver used to mock bare metal machines.
The following sections will explain how to configure and use
each of these drivers.
The following sections will explain how to configure and use each of these
drivers.
Systems resource driver: libvirt
++++++++++++++++++++++++++++++++
First thing you need is to set up some libvirt-managed virtual machines
(AKA domains) to manipulate. The following command will create a new
virtual machine i.e. libvirt domain `vbmc-node`:
The first thing you need is to set up some libvirt-managed virtual machines (AKA
domains) to manipulate. The following command will create a new virtual machine,
i.e. the libvirt domain `vbmc-node`:
.. code-block:: bash
@ -62,8 +60,7 @@ Next you can fire up the Redfish virtual BMC which will listen at
sushy-emulator
* Running on http://localhost:8000/ (Press CTRL+C to quit)
Now you should be able to see your libvirt domain among the Redfish
*Systems*:
Now you should be able to see your libvirt domain among the Redfish *Systems*:
.. code-block:: bash
@ -158,15 +155,15 @@ Redfish Simple Storage Controllers available for this domain):
UEFI boot
~~~~~~~~~
By default, `legacy` or `BIOS` mode is used to boot the instance. However,
By default, `legacy` or `BIOS` mode is used to boot the instance. However, the
libvirt domain can be configured to boot via UEFI firmware. This process
requires additional preparation on the host side.
On the host you need to have OVMF firmware binaries installed. Fedora users
could pull them as `edk2-ovmf` RPM. On Ubuntu, `apt-get install ovmf` should
do the job.
could pull them as `edk2-ovmf` RPM. On Ubuntu, `apt-get install ovmf` should do
the job.
Then you need to create a VM by running `virt-install` with the UEFI specific
Then you need to create a VM by running `virt-install` with the UEFI-specific
`--boot` options:
Example:
@ -191,8 +188,9 @@ Example:
virsh define --file $tmpfile
rm $tmpfile
This will create a new `libvirt` domain with path to OVMF images properly
configured. Let's take a note on the path to the blob by running `virsh dumpxml vbmc-node`:
This will create a new `libvirt` domain with the path to OVMF images properly
configured. Let's take note of the path to the blob by running
`virsh dumpxml vbmc-node`:
Example:
@ -209,10 +207,10 @@ Example:
...
</domain>
Because now we need to add this path to emulator's configuration matching VM
architecture we are running. It is also possible to make Redfish calls to enable
or disable Secure Boot by specifying which nvram template to load in each case.
Make a copy of stock configuration file and edit it accordingly:
We now need to add this path to the emulator's configuration, matching the VM
architecture we are running. It is also possible to make Redfish calls to enable
or disable Secure Boot by specifying which nvram template to load in each case.
Make a copy of the stock configuration file and edit it accordingly:
.. code-block:: bash
@ -240,9 +238,8 @@ Now you can run `sushy-emulator` with the updated configuration file:
Settable boot image
~~~~~~~~~~~~~~~~~~~
The `libvirt` system emulation backend supports setting custom boot images,
so that libvirt domains (representing bare metal nodes) can boot from user
images.
The `libvirt` system emulation backend supports setting custom boot images, so
that libvirt domains (representing bare metal nodes) can boot from user images.
This feature enables system boot from virtual media device.
@ -256,21 +253,21 @@ virtual media boot.
Systems resource driver: OpenStack
++++++++++++++++++++++++++++++++++
You can use an OpenStack cloud instances to simulate Redfish-managed
baremetal machines. This setup is known under the name of
You can use OpenStack cloud instances to simulate Redfish-managed bare metal
machines. This setup is known under the name of
`OpenStack Virtual Baremetal <http://openstack-virtual-baremetal.readthedocs.io/en/latest/>`_.
We will largely reuse its OpenStack infrastructure and configuration
instructions. After all, what we are trying to do here is to set up the
Redfish emulator alongside the
instructions. After all, what we are trying to do here is to set up the Redfish
emulator alongside the
`openstackbmc <https://github.com/cybertron/openstack-virtual-baremetal/blob/master/openstack_virtual_baremetal/openstackbmc.py>`_
tool which is used for exactly the same purpose at OVB with the only
difference that it works over the *IPMI* protocol as opposed to *Redfish*.
tool which is used for exactly the same purpose at OVB with the only difference
being that it works over the *IPMI* protocol as opposed to *Redfish*.
The easiest way is probably to set up your OpenStack Virtual Baremetal cloud
by following
The easiest way is probably to set up your OpenStack Virtual Baremetal cloud by
following
`its instructions <http://openstack-virtual-baremetal.readthedocs.io/en/latest/>`_.
Once your OVB cloud operational, you log into the *BMC* instance and
Once your OVB cloud is operational, you log into the *BMC* instance and
:ref:`set up sushy-tools <installation>` there.
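A minimal configuration sketch for selecting the OpenStack backend follows; the
``SUSHY_EMULATOR_OS_CLOUD`` option name is assumed from the sample configuration
file, and the cloud name is a placeholder that must match an entry in your
``clouds.yaml``:

.. code-block:: python

    # Assumed option name; verify against the sample emulator.conf.
    # 'ovb-cloud' is a placeholder for a cloud defined in clouds.yaml.
    SUSHY_EMULATOR_OS_CLOUD = 'ovb-cloud'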
Next you can invoke the Redfish virtual BMC pointing it to your OVB cloud:
@ -302,7 +299,8 @@ Redfish *Systems*:
"@Redfish.Copyright": "Copyright 2014-2016 Distributed Management Task Force, Inc. (DMTF). For the full DMTF copyright policy, see http://www.dmtf.org/about/policies/copyright."
}
And flip its power state via the Redfish call:
And flip an instance's power state via the Redfish call:
.. code-block:: bash
@ -321,12 +319,12 @@ Systems resource driver: Ironic
++++++++++++++++++++++++++++++++++
You can use the Ironic driver to manage Ironic bare metal instances through a simulated
Redfish API. You may want to do this if you require a redfish compatible endpoint
but don't have direct access to the BMC (you only have access via Ironic) or
the BMC doesn't support redfish.
Redfish API. You may want to do this if you require a redfish-compatible
endpoint but don't have direct access to the BMC (you only have access via
Ironic) or the BMC doesn't support redfish.
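The Ironic backend can be selected in the configuration file by pointing
``SUSHY_EMULATOR_IRONIC_CLOUD`` (shown in the sample configuration) at a cloud
entry from your ``clouds.yaml``; the cloud name below is a placeholder:

.. code-block:: python

    # Placeholder cloud name; it must match an entry in clouds.yaml that has
    # access to the Ironic (bare metal) API.
    SUSHY_EMULATOR_IRONIC_CLOUD = 'baremetal-cloud'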
Assuming your bare metal cloud is set up you can invoke the Redfish emulator by
running
running:
.. code-block:: bash
@ -355,7 +353,7 @@ Redfish *Systems*:
"@Redfish.Copyright": "Copyright 2014-2016 Distributed Management Task Force, Inc. (DMTF). For the full DMTF copyright policy, see http://www.dmtf.org/about/policies/copyright."
}
And flip its power state via the Redfish call:
And flip an instance's power state via the Redfish call:
.. code-block:: bash
@ -382,12 +380,12 @@ Or update their boot device:
Systems resource driver: fake
+++++++++++++++++++++++++++++
The ``fake`` system driver is designed to conduct large-scale testing of
Ironic without having a lot of bare-metal machines or being able to create a
large number of virtual machines. When the Redfish emulator is configured with
the ``fake`` system backend, all operations just return success. Any
modifications are done purely in the local cache. This way, many Ironic
operations can be tested at scale without access to a large computing pool.
The ``fake`` system driver is designed to conduct large-scale testing of Ironic
without having a lot of bare metal machines or being able to create a large
number of virtual machines. When the Redfish emulator is configured with the
``fake`` system backend, all operations just return success. Any modifications
are done purely in the local cache. This way, many Ironic operations can be
tested at scale without access to a large computing pool.
System status notifications
~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -396,11 +394,11 @@ The ``fake`` driver may need to simulate components that run on the VMs to test
an end-to-end deployment. This requires a hook interface to integrate external
components. For instance, when testing Ironic scalability, Ironic needs to
communicate with the Ironic Python Agent (IPA). A fake IPA can be implemented
and synchronized with the VM status using this hook, which notifies the fake
IPA whenever the VM status changes.
and synchronized with the VM status using this hook, which notifies the fake IPA
whenever the VM status changes.
To enable notifications, set ``external_notifier`` to ``True`` in the fake system
object:
To enable notifications, set ``external_notifier`` to ``True`` in the fake
System object:
.. code-block:: python
@ -417,10 +415,9 @@ object:
]
}
After this, whenever the fake driver updates this system object, it will send
an HTTP ``PUT`` request with the new system object as ``JSON`` data. The
endpoint URL can be configured with the parameter
``EXTERNAL_NOTIFICATION_URL``.
After this, whenever the fake driver updates this System object, it will send an
HTTP ``PUT`` request with the new system object as ``JSON`` data. The endpoint
URL can be configured with the parameter ``EXTERNAL_NOTIFICATION_URL``.
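For illustration, a minimal receiver for these notifications could look like
the sketch below (the URL path and the inspected field name are hypothetical;
inspect a real payload for the actual structure of the system object):

.. code-block:: python

    import flask

    app = flask.Flask(__name__)

    # The path must match the configured EXTERNAL_NOTIFICATION_URL.
    @app.route('/fake-ipa/notify', methods=['PUT'])
    def notify():
        system = flask.request.get_json()
        # 'uuid' is an illustrative field name, not a guaranteed key.
        app.logger.info('Received update for system %s', system.get('uuid'))
        return '', 204

    if __name__ == '__main__':
        app.run(port=9999)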
Filtering by allowed instances
++++++++++++++++++++++++++++++
@ -447,13 +444,12 @@ UUIDs which are allowed.
Managers resource
-----------------
*Managers* are emulated based on systems: each *System* has a *Manager* with
the same UUID. The first (alphabetically) manager will pretend to manage all
*Managers* are emulated based on Systems: each *System* has a *Manager* with the
same UUID. The first manager (alphabetically) will pretend to manage all
*Chassis* and potentially other resources.
Managers will be revealed when querying the *Managers* resource
directly, as well as other resources they manage or have some
other relations.
Managers will be revealed when querying the *Managers* resource directly, as
well as through other resources they manage or are otherwise related to.
.. code-block:: bash
@ -476,9 +472,9 @@ other relations.
Chassis resource
----------------
For emulating *Chassis* resource, the user can statically configure
one or more imaginary chassis. All existing resources (e.g. *Systems*,
*Managers*, *Drives*) will pretend to reside in the first chassis.
For emulating the *Chassis* resource, the user can statically configure one or
more imaginary chassis. All existing resources (e.g. *Systems*, *Managers*,
*Drives*) will pretend to reside in the first chassis.
.. code-block:: python
@ -492,9 +488,8 @@ one or more imaginary chassis. All existing resources (e.g. *Systems*,
By default, a single chassis will be configured automatically.
Chassis will be revealed when querying the *Chassis* resource
directly, as well as other resources they manage or have some
other relations.
Chassis will be revealed when querying the *Chassis* resource directly, as well
as through other resources they manage or are otherwise related to.
.. code-block:: bash
@ -515,14 +510,14 @@ other relations.
Indicator resource
------------------
*IndicatorLED* resource is emulated as a persistent emulator database
The *IndicatorLED* resource is emulated as a persistent emulator database
record, observable and manageable by a Redfish client.
By default, *Chassis* and *Systems* resources have emulated *IndicatorLED*
sub-resource attached and *Lit*.
By default, the *Chassis* and *Systems* resources have emulated *IndicatorLED*
sub-resources attached and *Lit*.
Non-default initial indicator state can optionally be configured
on a per-resource basis:
Non-default initial indicator state can optionally be configured on a
per-resource basis:
.. code-block:: python
@ -557,22 +552,21 @@ Redfish client can turn *IndicatorLED* into a different state:
Virtual media resource
----------------------
Virtual Media resource is emulated as a persistent emulator database
record, observable and manageable by a Redfish client.
The Virtual Media resource is emulated as a persistent emulator database record,
observable and manageable by a Redfish client.
By default, *VirtualMedia* resource includes two emulated removable
devices: *Cd* and *Floppy*. Each *Manager* resource gets its own collection
of virtual media devices as a *VirtualMedia* sub-resource.
By default, a *VirtualMedia* resource includes two emulated removable devices:
*Cd* and *Floppy*. Each *Manager* resource gets its own collection of virtual
media devices as a *VirtualMedia* sub-resource.
If currently used *Systems* resource emulation driver supports setting
boot image, *VirtualMedia* resource will apply inserted image onto
all the systems being managed by this manager. Setting system boot source
to *Cd* and boot mode to *Uefi* will cause the system to boot from
virtual media image.
If the currently used *Systems* resource emulation driver supports setting the
boot image, the *VirtualMedia* resource will apply the inserted image onto all
the systems being managed by this manager. Setting the system boot source to
*Cd* and boot mode to *Uefi* will cause the system to boot from the virtual
media image.
User can change virtual media devices and their properties through
emulator configuration (except for the OpenStack driver which only
supports *Cd*):
The user can change virtual media devices and their properties through emulator
configuration (except for the OpenStack driver which only supports *Cd*):
.. code-block:: python
@ -633,7 +627,7 @@ On insert the OpenStack driver will:
* Upload the image directly to glance from the URL (long running)
* Store the URL, image ID and volume ID in server metadata properties
`sushy-tools-image-url`, `sushy-tools-import-image`, `sushy-tools-volume`
* Create and attach a new volume the same size as the root disk
* Create and attach a new volume with the same size as the root disk
* Rebuild the server with the image, replacing the contents of the root disk
* Delete the image
@ -648,19 +642,21 @@ Redfish client can eject image from virtual media device:
On eject the OpenStack driver will:
* Assume the attached volume has been rewritten with a new image (an ISO installer or IPA)
* Detach the volume
* Create an image from the volume (long running)
* Store the volume image ID in server metadata property `sushy-tools-volume-image`
* Assume the attached Volume has been rewritten with a new image (an ISO
installer or IPA)
* Detach the Volume
* Create an image from the Volume (long running)
* Store the Volume image ID in server metadata property
`sushy-tools-volume-image`
* Rebuild the server with the new image
* Delete the volume
* Delete the Volume
* Delete the image
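For reference, the eject call itself can be issued with any HTTP client; for
example, a sketch using Python ``requests`` (the manager UUID is a placeholder):

.. code-block:: python

    import requests

    # Substitute the manager UUID reported by your emulator.
    vmedia = ('http://localhost:8000/redfish/v1/Managers/'
              '<manager-uuid>/VirtualMedia/Cd')

    # Standard Redfish action to eject the currently inserted image.
    resp = requests.post(vmedia + '/Actions/VirtualMedia.EjectMedia', json={})
    resp.raise_for_status()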
Virtual media boot
++++++++++++++++++
To boot a system from a virtual media device the client first needs to figure
out which manager is responsible for the system of interest:
To boot a system from a virtual media device, the client first needs to figure
out which Manager is responsible for the system of interest:
.. code-block:: bash
@ -691,7 +687,8 @@ being offered:
},
...
Knowing virtual media device name, the client can check out its present state:
Knowing the virtual media device name, the client can check out its present
state:
.. code-block:: bash
@ -710,9 +707,9 @@ Knowing virtual media device name, the client can check out its present state:
"WriteProtected": false,
...
Assuming `http://localhost/var/tmp/mini.iso` URL points to a bootable UEFI or
hybrid ISO, the following Redfish REST API call will insert the image into the
virtual CD drive:
Assuming that the `http://localhost/var/tmp/mini.iso` URL points to a bootable
UEFI or hybrid ISO, the following Redfish REST API call will insert the image
into the virtual CD drive:
.. code-block:: bash
@ -741,8 +738,7 @@ Querying again, the emulator should have it in the drive:
"WriteProtected": true,
...
Next, the node needs to be configured to boot from its local CD drive
over UEFI:
Next, the node needs to be configured to boot from its local CD drive over UEFI:
.. code-block:: bash
@ -758,8 +754,8 @@ over UEFI:
.. note::
With the OpenStack driver the boot source is changed during insert and eject, so setting
`BootSourceOverrideTarget` to `Cd` or `Hdd` has no effect.
With the OpenStack driver the boot source is changed during insert and eject,
so setting `BootSourceOverrideTarget` to `Cd` or `Hdd` has no effect.
By this point the system will boot off the virtual CD drive when powering it on:
@ -771,22 +767,20 @@ By this point the system will boot off the virtual CD drive when powering it on:
.. note::
ISO files to boot from must be UEFI-bootable, libvirtd should be running on the same
machine with sushy-emulator.
The ISO files to boot from must be UEFI-bootable, and libvirtd should be
running on the same machine as sushy-emulator.
Storage resource
----------------
For emulating *Storage* resource for a System of choice, the
user can statically configure one or more imaginary storage
instances along with the corresponding storage controllers which
are also imaginary.
For emulating the *Storage* resource for a System of choice, the user can
statically configure one or more imaginary storage instances, along with the
corresponding storage controllers, which are also imaginary.
The IDs of the imaginary drives associated to a *Storage* resource
can be provided as a list under *Drives*.
The IDs of the imaginary drives associated with a *Storage* resource can be
provided as a list under *Drives*.
The *Storage* instances are keyed by the UUIDs of the System they
belong to.
The *Storage* instances are keyed by the UUIDs of the System they belong to.
.. code-block:: python
@ -809,8 +803,8 @@ belong to.
]
}
The Storage resources can be revealed by querying Storage resource
for the corresponding System directly.
The Storage resources can be revealed by querying the Storage resource for the
corresponding System directly.
.. code-block:: bash
@ -832,13 +826,13 @@ for the corresponding System directly.
Drive resource
++++++++++++++
For emulating the *Drive* resource, the user can statically configure
one or more drives.
For emulating the *Drive* resource, the user can statically configure one or
more Drives.
The *Drive* instances are keyed in a composite manner using
(System_UUID, Storage_ID) where System_UUID is the UUID of the System
and Storage_ID is the ID of the Storage resource to which that particular
drive belongs.
(System_UUID, Storage_ID), where System_UUID is the UUID of the System and
Storage_ID is the ID of the Storage resource to which that particular Drive
belongs.
.. code-block:: python
@ -877,18 +871,17 @@ Storage resource it belongs to.
Storage Volume resource
+++++++++++++++++++++++
The *Volume* resource is emulated as a persistent emulator database
record, backed by the libvirt virtualization backend of the dynamic
Redfish emulator.
The *Volume* resource is emulated as a persistent emulator database record,
backed by the libvirt virtualization backend of the dynamic Redfish emulator.
Only the volumes specified in the config file or created via a POST request
are allowed to be emulated upon by the emulator and appear as libvirt volumes
in the libvirt virtualization backend. Volumes other than these can neither be
listed nor deleted.
Only the volumes specified in the config file or created via a POST request are
allowed to be emulated upon by the emulator and appear as libvirt volumes in the
libvirt virtualization backend. Volumes other than these can neither be listed
nor deleted.
To allow libvirt volumes to be emulated upon, they need to be specified
in the configuration file in the following format (keyed compositely by
the System UUID and the Storage ID):
To allow libvirt volumes to be emulated upon, they need to be specified in the
configuration file in the following format (keyed compositely by the System UUID
and the Storage ID):
.. code-block:: python
@ -913,8 +906,8 @@ the System UUID and the Storage ID):
]
}
The Volume resources can be revealed by querying Volumes resource
for the corresponding System and the Storage.
The Volume resources can be revealed by querying the Volumes resource for the
corresponding System and Storage.
.. code-block:: bash
@ -935,8 +928,8 @@ for the corresponding System and the Storage.
"@odata.id": "/redfish/v1/Systems/da69abcc-dae0-4913-9a7b-d344043097c0/Storage/1/Volumes",
}
A new volume can also be created in the libvirt backend via a POST request
on a Volume Collection:
A new volume can also be created in the libvirt backend via a POST request on a
Volume Collection:
.. code-block:: bash

View File

@ -4,9 +4,9 @@ Using Redfish emulators
The sushy-tools package includes two emulators - static and dynamic.
Static emulator could be used to serve Redfish mocks in form of static
JSON documents. Dynamic emulator relies upon `libvirt`, `OpenStack` or
`Ironic` virtualization backend to mimic nodes behind a Redfish BMC.
The static emulator can be used to serve Redfish mocks in the form of static
JSON documents. The dynamic emulator relies upon the `libvirt`, `OpenStack` or
`Ironic` virtualization backends to mimic nodes behind a Redfish BMC.
.. toctree::
:maxdepth: 2

View File

@ -2,13 +2,11 @@
Static Redfish BMC
==================
The static Redfish responder is a simple REST API server which serves
static contents down to the Redfish client. The tool emulates the
simple read-only BMC.
The static Redfish responder is a simple REST API server which serves static
content to the Redfish client. The tool emulates a simple read-only BMC.
The user is expected to supply the Redfish-compliant contents perhaps
downloaded from the `DMTF <https://www.dmtf.org/>`_ web site. For
example,
The user is expected to supply the Redfish-compliant contents, perhaps
downloaded from the `DMTF <https://www.dmtf.org/>`_ web site. For example,
`this .zip archive <https://www.dmtf.org/sites/default/files/standards/documents/DSP2043_1.0.0.zip>`_
includes Redfish content mocks for Redfish 1.0.0.
@ -19,8 +17,8 @@ includes Redfish content mocks for Redfish 1.0.0.
unzip -d mockups DSP2043_1.0.0.zip
sushy-static -m mockups/public-rackmount
Once you have the static emulator running, you can use it as it was a
read-only bare-metal controller listening at *localhost:8000* (by default):
Once you have the static emulator running, you can use it as if it was a
read-only bare metal controller listening at *localhost:8000* (by default):
.. code-block:: bash
@ -39,6 +37,6 @@ read-only bare-metal controller listening at *localhost:8000* (by default):
"@Redfish.Copyright": "Copyright 2014-2016 Distributed Management Task Force, Inc. (DMTF). For the full DMTF copyright policy, see http://www.dmtf.org/about/policies/copyright."
}
You can mock different Redfish versions as well as different bare-metal
machines by providing appropriate Redfish contents.
You can mock different Redfish versions as well as different bare metal
machines by providing the appropriate Redfish contents.
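As an alternative to ``curl``, a short Python sketch can walk the mock's
*Systems* collection (assuming the default *localhost:8000* endpoint):

.. code-block:: python

    import requests

    base = 'http://localhost:8000/redfish/v1'

    # Fetch the Systems collection and print each member's identity.
    for member in requests.get(base + '/Systems').json().get('Members', []):
        system = requests.get('http://localhost:8000'
                              + member['@odata.id']).json()
        print(system.get('Id'), system.get('Name'))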

View File

@ -51,7 +51,7 @@ def update_service_simple_update():
name = target.rstrip("/").rsplit('/', 1)[-1]
uuid = flask.current_app.systems.uuid(name)
except error.AliasAccessError as exc:
api_utils.debug('Received a redirect in respose to GET System '
api_utils.debug('Received a redirect in response to GET System '
'"%s". New System ID: "%s"', name, exc)
uuid = str(exc)
except Exception as exc:

View File

@ -211,12 +211,12 @@ class FakeDriver(AbstractSystemsDriver):
# Check if the request was unsuccessful
if resp.status_code >= 400:
self._logger.error(
'External notifcation to (%s) about system %s request'
'External notification to (%s) about system %s request '
'error %d: %s',
external_notification_url, system.get('name'),
resp.status_code, resp.text)
return
# Log successful notification
self._logger.info("External notifcation to (%s) sent about %s",
self._logger.info("External notification to (%s) sent about %s",
external_notification_url, system.get('name'))