Documentation for graphical consoles

This change cleans up and elaborates on existing graphical console
documentation, and also adds an overview document describing how it all
works together.

Closes-Bug: 2086715
Change-Id: I16b7ffb993e1ca5148b5205f0a35a74db85337d5
commit 25a3dd076a (parent 644fe20576)
Author: Steve Baker, 2025-03-02 22:18:42 +00:00
6 changed files with 197 additions and 14 deletions

@ -0,0 +1,128 @@
.. _graphical-console:

Graphical console support
=========================

The Bare Metal service supports displaying graphical consoles from a number of
hardware vendors.

The following preconditions are required for a node's graphical console to be
viewable:

* Service ironic-conductor has a configured console container provider
  appropriate for the environment
* Service ironic-novncproxy is configured and running
* The node's ``console_interface`` is set to a graphical driver such as
  ``redfish-graphical``

When enabled and configured, the following sequence occurs when a graphical
console is accessed directly through the Bare Metal service:

* A REST API call is made to enable the console, for example via the CLI
  command ``baremetal node console enable``
* ironic-conductor creates and stores a time-limited token with the node
* ironic-conductor triggers starting a container which runs a virtual X11
  display, starts a web browser, and exposes a VNC server
* Once enabled, a REST API call is made to fetch the console URL, for example
  via the CLI command ``baremetal node console show``
* The user accesses the console URL with a web browser
* ironic-novncproxy serves the NoVNC web assets to the browser
* A websocket is initiated with ironic-novncproxy, which looks up the node and
  validates the token
* ironic-novncproxy makes a VNC connection with the console container and
  proxies VNC traffic between the container and the browser
* The container initiates a connection with the node's BMC Redfish endpoint
  and determines which vendor script to run
* The container makes Redfish calls and simulates a browser user to display
  an HTML5 console, which the end user can now view
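
The enable and show steps above look like this when driving the Bare Metal
service directly from the CLI (``node-1`` is a placeholder node name):

```shell
# Enable the graphical console; this stores a time-limited token with the
# node and triggers starting the console container
baremetal node console enable node-1

# Once enabled, fetch the console URL and open it in a web browser
baremetal node console show node-1
```
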
Building a console container
----------------------------

The `tools/vnc-container
<https://opendev.org/openstack/ironic/src/branch/master/tools/vnc-container>`_
directory contains the files and instructions to build a console container.
This directory is where further development will occur; currently only a
CentOS Stream based image can be built.
Container providers
-------------------

ironic-conductor must be configured with a container provider so that it can
trigger starting and stopping console containers based on each node's console
enabled state. Given the variety of deployment architectures for Ironic, an
appropriate container provider needs to be configured.

In many cases this will require writing an external custom container provider,
especially when Ironic itself is deployed in a containerized environment.
Systemd container provider
~~~~~~~~~~~~~~~~~~~~~~~~~~

The only functional container provider included is the systemd provider, which
manages containers as Systemd Quadlet containers. This provider is appropriate
to use when the Ironic services themselves are not containerized, and is also
a good match when ironic-conductor itself is managed as a Systemd unit.

To start a container, this provider writes ``.container`` files to
``/etc/containers/systemd/users/{uid}/containers/systemd``, then calls
``systemctl --user daemon-reload`` to generate a unit file, which is then
started with ``systemctl --user start {unit name}``.
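
As an illustration only, a generated Quadlet ``.container`` unit of this kind
might look roughly like the following; the unit name, description, and exact
keys are assumptions, not the provider's actual output:

```ini
# vnc-console-<node-uuid>.container -- illustrative Quadlet unit
[Unit]
Description=Graphical console container for node <node-uuid>

[Container]
# Image reference from [vnc]console_image, e.g. one built from
# tools/vnc-container
Image=localhost/vnc-container
# Publishing only the container port lets podman pick a random high
# host port for the VNC server
PublishPort=5900
```
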
Creating an external container provider
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

An external python library can contribute its own container provider by
subclassing ``ironic.console.container.base.BaseConsoleContainer`` and adding
it to the library's ``setup.cfg`` under the
``[entry_points]ironic.console.container`` namespace.

The ``start_container`` method must return the IP and port of the resulting
running VNC server, which in most scenarios means blocking until the
container is running.
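
A minimal sketch of such a provider follows. Since the real base class lives
in ironic, a stand-in is defined here so the sketch is self-contained; the
method signatures and the ``stop_container`` method are assumptions for
illustration, not ironic's actual interface:

```python
import abc


class BaseConsoleContainer(abc.ABC):
    """Stand-in for ironic.console.container.base.BaseConsoleContainer.

    Only mirrors the start/stop contract described above; the real base
    class is provided by ironic.
    """

    @abc.abstractmethod
    def start_container(self, task, app_name, app_info):
        """Start a console container; return (host, port) of its VNC server."""

    @abc.abstractmethod
    def stop_container(self, task):
        """Stop the console container for a node."""


class ExampleContainerProvider(BaseConsoleContainer):
    """Hypothetical provider that tracks containers in-process."""

    def __init__(self):
        self._running = {}

    def start_container(self, task, app_name, app_info):
        # A real provider would launch a container (for example via an
        # orchestrator API) and block until its VNC server is reachable.
        host, port = "192.0.2.10", 5900
        self._running[task] = (host, port)
        return host, port

    def stop_container(self, task):
        self._running.pop(task, None)
```

With the real base class, the provider class would then be registered in the
library's ``setup.cfg`` under ``[entry_points]ironic.console.container`` so
that ironic-conductor can load it by name.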
Networking requirements
-----------------------

ironic-novncproxy
~~~~~~~~~~~~~~~~~

Like ironic-api, ironic-novncproxy presents a public endpoint. However, unlike
ironic-api, node console URLs are coupled to the ironic-conductor managing
that node, so load balancing across all ironic-novncproxy instances is not
appropriate.

A TLS-enabled reverse proxy needs to support WebSockets; otherwise, TLS can be
enabled in the ``ironic.conf`` ``[vnc]`` section.

ironic-novncproxy needs to be able to connect to the VNC servers exposed by
the console containers.
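
One way to front a single ironic-novncproxy with a TLS-terminating reverse
proxy that handles WebSockets is an HAProxy configuration along these lines;
the addresses, port, and certificate path are illustrative assumptions:

```text
# Illustrative HAProxy fragment: TLS termination for one ironic-novncproxy.
# WebSocket upgrades are proxied transparently; "timeout tunnel" keeps
# long-lived console websockets open.
frontend novnc_tls
    bind :6090 ssl crt /etc/haproxy/novnc.pem
    mode http
    timeout client 30s
    default_backend novnc

backend novnc
    mode http
    timeout tunnel 1h
    server novncproxy 192.0.2.20:6090 check
```

Note that this fronts exactly one ironic-novncproxy instance; as described
above, spreading console traffic across multiple instances is not appropriate.
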
Console containers
~~~~~~~~~~~~~~~~~~

The VNC servers exposed by console containers are unencrypted and
unauthenticated, so public access *must* be restricted via another network
configuration mechanism. The ironic-novncproxy service needs to access the VNC
server exposed by these containers, and so does nova-novncproxy when Nova is
using the Ironic driver.

For the ``systemd`` container provider, the VNC server will be published on a
random high port number.

Console containers need access to the management network to reach the BMC web
interface. If driver_info ``redfish_verify_ca=False``, then web requests will
not be verified by the browser. Setting ``redfish_verify_ca`` to a certificate
path is not yet supported by the ``systemd`` container provider, as the
certificate is not bind-mounted into the container. This can be supported
locally by building a container which includes the expected certificate files.
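
For example, a hypothetical ``Containerfile`` extending the CentOS Stream
based image with a site CA could look like this; the image and file names
are illustrative:

```text
# Illustrative Containerfile: extend the console image with a BMC CA
# certificate so Redfish requests can be verified inside the container
FROM localhost/vnc-container
COPY bmc-ca.crt /etc/pki/ca-trust/source/anchors/bmc-ca.crt
RUN update-ca-trust
```
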

@ -210,6 +210,56 @@ Configuring ironic-conductor service
#. Configure the network for ironic-conductor service to perform node
cleaning, see :ref:`cleaning` from the admin guide.
#. If ironic-novncproxy is enabled, ironic-conductor must be configured to
   build valid console URLs, and it also needs to be configured with a console
   container provider. Each enabled console has a corresponding running
   container which runs a headless X11 session, connects to the graphical
   console of the BMC, and exposes a VNC server for ironic-novncproxy to
   connect to.

   Replace ``PUBLIC_IP`` and ``PUBLIC_PORT`` with appropriate values:

   .. code-block:: ini

      [vnc]
      # Base url used to build browser links to graphical consoles. If a
      # reverse proxy is used the protocol, IP, and port need to match how
      # users will access the service. When there is no reverse proxy,
      # ``PUBLIC_IP`` should match ``[vnc]host_ip`` and ``PUBLIC_PORT`` should
      # match ``[vnc]port``.
      public_url=http://PUBLIC_IP:PUBLIC_PORT/vnc_lite.html

      # The only functional container provider included is the systemd
      # provider, which manages containers as Systemd Quadlet containers. This
      # provider is appropriate to use when the Ironic services themselves are
      # not containerized; otherwise a custom external provider may be
      # required.
      container_provider=systemd

      # For ``container_provider=systemd``, set a valid container image
      # reference available to the podman image storage of the user running
      # ironic-conductor. See /usr/share/ironic/vnc-container for instructions
      # to build a compatible image.
      console_image=localhost/vnc-container
   When ``[vnc]container_provider=systemd``, the
   ``openstack-ironic-conductor`` service needs to be able to make
   ``systemctl --user`` calls as the user which both ironic-conductor and
   ironic-novncproxy run as.

   Assuming the services are running as user ``ironic``, discover the <UID>
   for that user by running ``id -u ironic``. Edit
   ``openstack-ironic-conductor.service`` to add the environment variable
   ``DBUS_SESSION_BUS_ADDRESS``, substituting the <UID>:

   .. code-block:: ini

      [Service]
      ...
      Environment = DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/<UID>/bus
#. Restart the ironic-conductor service:

   RHEL/CentOS/SUSE::

@ -34,17 +34,6 @@ Configuring ironic-novncproxy service
# IP address to bind to for serving NoVNC web assets and websockets
host_ip=PUBLIC_IP
# Base url used to build browser links to graphical consoles. If a load balancer or reverse
# proxy is used the protocol, IP, and port needs to match how users will access the service
public_url=http://PUBLIC_IP:6090/vnc_auto.html
# The only functional container provider included is the systemd provider which manages
# containers as Systemd Quadlet containers. This provider is appropriate to use when the
# Ironic services themselves are not containerised, otherwise a custom external provider
# may be required
container_provider=systemd
#. Restart the ironic-novncproxy service:
RHEL/CentOS/SUSE::

@ -21,6 +21,7 @@ It contains the following sections:
enrollment.rst
standalone.rst
configdrive.rst
graphical-console.rst
advanced.rst
troubleshooting.rst
next-steps.rst

@ -163,11 +163,13 @@ The following components of the Bare Metal service are installed on a
* *bare metal* for contacting deployment, cleaning or rescue ramdisks
* The ``ironic-novncproxy`` NoVNC proxy is run directly as a web server
  process and only serves consoles for the bare metal nodes of the adjacent
  ``ironic-conductor``. This means one ``ironic-novncproxy`` needs to be
  run for each ``ironic-conductor`` process.

  The NoVNC proxy has to be served on the *control plane network*.
  Additionally, it needs to make VNC connections to the console containers
  initiated by ``ironic-conductor``.
* TFTP and HTTP service for booting the nodes. Each ``ironic-conductor``
process has to have a matching TFTP and HTTP service. They should be exposed

@ -92,6 +92,19 @@ You should make the following changes to ``/etc/ironic/ironic.conf``:
username = myName
password = myPassword
#. To make graphical consoles available for local viewing, set the following,
   including an appropriate container image reference for ``console_image``:

   .. code-block:: ini

      [vnc]
      enabled=True
      port=6090
      host_ip=127.0.0.1
      public_url=http://127.0.0.1:6090/vnc_lite.html
      container_provider=systemd
      console_image=<image reference>
#. Starting with the Yoga release series, you can use a combined
API+conductor+novncproxy service and completely disable the RPC. Set