Layer 3 or DHCP-less ramdisk booting
Booting nodes via PXE, while universally supported, suffers from one disadvantage: it requires direct L2 connectivity between the node and the control plane for DHCP. Using virtual media, it is possible to avoid not only the unreliable TFTP protocol, but DHCP altogether.
When network data is provided for a node as explained below, the generated virtual media ISO will also serve as a configdrive, and the network data will be stored in the standard OpenStack location.
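Here, the standard location refers to the usual OpenStack config drive layout: the ISO is labeled config-2 and the network data sits next to the instance metadata, for example:
openstack/latest/network_data.json
openstack/latest/meta_data.json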
The simple-init element needs to be used when creating the deployment ramdisk. The Glean tool will look for a medium labeled config-2. If found, the network information from it will be read, and the node's networking stack will be configured accordingly.
ironic-python-agent-builder -o /output/ramdisk \
debian-minimal -e simple-init
Warning
Ramdisks based on distributions with NetworkManager require Glean 1.19.0 or newer to work.
Note
If desired, some interfaces can still be configured to use DHCP.
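For example, a network entry of type ipv4_dhcp in the network data (see the full example below) requests DHCP configuration for the corresponding link; the IDs here are illustrative:
{
    "id": "network1",
    "type": "ipv4_dhcp",
    "link": "port-92750f6c-60a9-4897-9cd1-090c5f361e18",
    "network_id": "network1"
}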
Hardware type support
This feature is known to work with the following hardware types:
- Redfish with the redfish-virtual-media boot interface
- iLO with the ilo-virtual-media boot interface
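For example, to switch a node using the Redfish hardware type to virtual media boot (assuming redfish-virtual-media is listed in the [DEFAULT] enabled_boot_interfaces option):
baremetal node set <node> --boot-interface redfish-virtual-media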
Configuring network data
When the Bare Metal service is running within OpenStack, no additional configuration is required - the network configuration will be fetched from the Network service.
Alternatively, the user can build and pass the network configuration in the form of a network_data JSON document to a node via the network_data field. Node-based configuration takes precedence over the configuration generated by the Network service and also works in standalone mode.
baremetal node set --network-data ~/network_data.json <node>
An example network data:
{
    "links": [
        {
            "id": "port-92750f6c-60a9-4897-9cd1-090c5f361e18",
            "type": "phy",
            "ethernet_mac_address": "52:54:00:d3:6a:71"
        }
    ],
    "networks": [
        {
            "id": "network0",
            "type": "ipv4",
            "link": "port-92750f6c-60a9-4897-9cd1-090c5f361e18",
            "ip_address": "192.168.122.42",
            "netmask": "255.255.255.0",
            "network_id": "network0",
            "routes": []
        }
    ],
    "services": []
}
Note
Some fields are redundant with the port information. We're looking into simplifying the format, but currently all these fields are mandatory.
You'll need the deployed image to support network data, e.g. by pre-installing cloud-init or Glean on it (most cloud images have the former). Then you can provide the network data when deploying, for example:
baremetal node deploy <node> \
--config-drive "{\"network_data\": $(cat ~/network_data.json)}"
Some first-boot services, such as Ignition, don't support network data. You can provide their configuration as part of user data instead:
baremetal node deploy <node> \
--config-drive "{\"user_data\": \"... ignition config ...\"}"
Deploying outside of the provisioning network
If you need to combine traditional deployments using a provisioning network with virtual media deployments over L3, you may need to provide an alternative IP address for the remote nodes to connect to:
[deploy]
http_url = <HTTP server URL internal to the provisioning network>
external_http_url = <HTTP server URL with a routable IP address>
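An illustrative pair of values, using placeholder addresses (10.0.2.1 standing in for the provisioning network, 192.0.2.10 for the routable address):
[deploy]
http_url = http://10.0.2.1:8080
external_http_url = http://192.0.2.10:8080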
You may also need to override the callback URL, which is normally fetched from the service catalog or configured in the [service_catalog] section:
[deploy]
external_callback_url = <Bare Metal API URL with a routable IP address>
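For example, with a placeholder address and the default Bare Metal API port 6385:
[deploy]
external_callback_url = http://192.0.2.10:6385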
If you need a specific URL for each node, you can use the driver_info[external_http_url] node property. When set, it overrides the [deploy]http_url and [deploy]external_http_url settings in the configuration file.
baremetal node set node-0 \
--driver-info external_http_url="<your_node_external_url>"
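You can then confirm the override took effect, for example:
baremetal node show node-0 --fields driver_info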