Layer 3 or DHCP-less ramdisk booting

Booting nodes via PXE, while universally supported, suffers from one disadvantage: it requires direct L2 connectivity between the node and the control plane for DHCP. Using virtual media, it is possible to avoid not only the unreliable TFTP protocol, but DHCP altogether.

When network data is provided for a node as explained below, the generated virtual media ISO will also serve as a configdrive, and the network data will be stored in the standard OpenStack location.
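
For illustration, with the standard config drive layout the network data ends up on the generated ISO (labeled config-2) at the following path:

openstack/latest/network_data.json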

The simple-init element needs to be used when creating the deployment ramdisk. The Glean tool will look for media labeled config-2. If found, the network information from it will be read, and the node's networking stack will be configured accordingly.

ironic-python-agent-builder -o /output/ramdisk \
     debian-minimal -e simple-init

Warning

The simple-init element is known to conflict with NetworkManager, which makes this feature inoperable with ramdisks based on CentOS, RHEL and Fedora. The debian-minimal and centos elements seem to work correctly; for CentOS, only CentOS 7 based ramdisks are known to work.

Note

If desired, some interfaces can still be configured to use DHCP.
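
This is done through the network data described below. As a minimal sketch, a network of type ipv4_dhcp requests DHCP for its link (the link ID must match an entry in links):

{
    "networks": [
        {
            "id": "network1",
            "type": "ipv4_dhcp",
            "link": "port-92750f6c-60a9-4897-9cd1-090c5f361e18",
            "network_id": "network1"
        }
    ]
}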

Hardware type support

This feature is known to work with the following hardware types:

  • Redfish with the redfish-virtual-media boot interface
  • iLO with the ilo-virtual-media boot interface
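
For example, to switch a node to virtual media boot with the Redfish hardware type (assuming it is enabled in your deployment):

baremetal node set --boot-interface redfish-virtual-media <node>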

Configuring network data

When the Bare Metal service is running within OpenStack, no additional configuration is required - the network configuration will be fetched from the Network service.

Alternatively, the user can build network configuration in the form of a network_data JSON document and pass it to a node via the network_data field. Node-based configuration takes precedence over the configuration generated by the Network service and also works in standalone mode.

baremetal node set --network-data ~/network_data.json <node>

An example of network data:

{
    "links": [
        {
            "id": "port-92750f6c-60a9-4897-9cd1-090c5f361e18",
            "type": "phy",
            "ethernet_mac_address": "52:54:00:d3:6a:71"
        }
    ],
    "networks": [
        {
            "id": "network0",
            "type": "ipv4",
            "link": "port-92750f6c-60a9-4897-9cd1-090c5f361e18",
            "ip_address": "192.168.122.42",
            "netmask": "255.255.255.0",
            "network_id": "network0",
            "routes": []
        }
    ],
    "services": []
}

Note

Some fields are redundant with the port information. We're looking into simplifying the format, but currently all these fields are mandatory.
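
Before setting the field, it may be useful to check that the file is well-formed JSON; afterwards, the stored value can be read back from the node:

python -m json.tool ~/network_data.json
baremetal node show <node> -f json -c network_data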

Deploying outside of the provisioning network

If you need to combine traditional deployments using a provisioning network with virtual media deployments over L3, you may need to provide an alternative URL (with a routable IP address) for the remote nodes to connect to:

[deploy]
http_url = <HTTP server URL internal to the provisioning network>
external_http_url = <HTTP server URL with a routable IP address>
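
For example, with placeholder addresses (10.0.2.1 on the provisioning network, 198.51.100.2 routable):

[deploy]
http_url = http://10.0.2.1:8080
external_http_url = http://198.51.100.2:8080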

You may also need to override the callback URL, which is normally fetched from the service catalog or configured in the [service_catalog] section:

[deploy]
external_callback_url = <Bare Metal API URL with a routable IP address>
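
For example, pointing remote nodes at a routable Bare Metal API endpoint (placeholder address; 6385 is the default API port):

[deploy]
external_callback_url = http://198.51.100.2:6385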