Update documentation for LXC/metal and LXB/OVS/OVN

This patch updates documentation to reflect multiple deployment
scenarios including LXC vs Metal and LXB, OVS, and OVN.

Change-Id: I066e0f7ba24c72eba9e9d7b74e3aaf6f8b7a7604
James Denton 2022-12-13 17:36:03 -06:00 committed by Dmitriy Rabotyagov
parent d9d01aa204
commit 03a8ac38f4
21 changed files with 293 additions and 50 deletions

Container networking
====================
OpenStack-Ansible deploys Linux containers (LXC) and uses Linux or Open
vSwitch-based bridging between the container and the host interfaces to ensure
that all traffic from containers flows over multiple host interfaces. All
services in this deployment model use a *unique* IP address.

This appendix describes how the interfaces are connected and how traffic flows.
For more information about how the OpenStack Networking service (Neutron) uses
the interfaces for instance traffic, please see the
`OpenStack Networking Guide`_.

Physical host interfaces
~~~~~~~~~~~~~~~~~~~~~~~~

In a typical production environment, physical network interfaces are combined
in bonded pairs for better redundancy and throughput. Avoid using two ports on
the same multiport network card for the same bonded interface, because a
network card failure affects both of the physical network interfaces used by
the bond. Single (bonded) interfaces are also a supported configuration, but
will require the use of VLAN subinterfaces.

Linux Bridges/Switches
~~~~~~~~~~~~~~~~~~~~~~

The combination of containers and flexible deployment options requires
implementation of advanced Linux networking features, such as bridges,
switches, and namespaces.

* Bridges provide layer 2 connectivity (similar to physical switches) among
  physical, logical, and virtual network interfaces within a host. After
  a bridge/switch is created, the network interfaces are virtually plugged
  into it.

  OpenStack-Ansible uses Linux bridges for control plane connections to LXC
  containers, and can use Linux bridges or Open vSwitch-based bridges for
  data plane connections that connect virtual machine instances to the
  physical network infrastructure.

* Network namespaces provide logically separate layer 3 environments (similar
  to VRFs) within a host. Namespaces use virtual interfaces to connect
  with other namespaces, including the host namespace. These interfaces,
  often called ``veth`` pairs, are virtually plugged in between
  namespaces similar to patch cables connecting physical devices such as
  switches and routers.

The following image demonstrates how the container network interfaces are
connected to the host's bridges and physical network interfaces:

.. image:: ../figures/networkcomponents.drawio.png
   :align: center

Network diagrams
~~~~~~~~~~~~~~~~

Hosts with services running in containers
------------------------------------------

The following diagram shows how all of the interfaces and bridges interconnect
to provide network connectivity to the OpenStack deployment:

.. image:: ../figures/networkarch-container-external.drawio.png
   :align: center

The bridge ``lxcbr0`` is configured automatically and provides
connectivity for the containers (via eth0) to the outside world, thanks to
dnsmasq (dhcp/dns) + NAT.

.. note::

   Please adapt your ``openstack_user_config.yml`` file.
   See :ref:`openstack-user-config-reference` for more details.
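
For reference, container connectivity to the OpenStack management network is
normally expressed as a provider network in ``openstack_user_config.yml``. The
following is only a minimal sketch; the bridge, interface, and queue names are
common defaults rather than requirements and must match your environment:

.. code-block:: yaml

   # Minimal sketch of a management provider network. Names and values are
   # illustrative; align them with your cidr_networks and host bridges.
   global_overrides:
     provider_networks:
       - network:
           container_bridge: "br-mgmt"
           container_type: "veth"
           container_interface: "eth1"
           ip_from_q: "container"
           type: "raw"
           group_binds:
             - all_containers
             - hosts
           is_container_address: true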

Neutron traffic
---------------

Common reference drivers, including ML2/LXB, ML2/OVS, and ML2/OVN, and their
respective agents, are responsible for managing the virtual networking
infrastructure on each node. OpenStack-Ansible refers to Neutron traffic
as "data plane" traffic, which can consist of flat, VLAN, or overlay
technologies such as VXLAN and GENEVE.

Neutron agents can be deployed across a variety of hosts, but are typically
limited to dedicated network hosts or infrastructure hosts (controller nodes).
Neutron agents are deployed "on metal" and not within an LXC container. Neutron
typically requires the operator to define "provider bridge mappings", which map
a provider network name to a physical interface. These provider bridge mappings
provide flexibility and abstract physical interface names when creating provider
networks.

**LinuxBridge Example**:

.. code-block:: ini

   bridge_mappings = physnet1:bond1

**Open vSwitch/OVN Example**:

.. code-block:: ini

   bridge_mappings = physnet1:br-ex

OpenStack-Ansible provides two overrides when defining provider networks that
can be used for creating the mappings and, in some cases, connecting the
physical interfaces to provider bridges:

- ``host_bind_override``
- ``network_interface``

The ``host_bind_override`` override is used for LinuxBridge-based deployments,
and requires a physical interface name which will then be used by the
LinuxBridge agent for flat and vlan-based provider and tenant network traffic.
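
As an illustration, a VLAN provider network using ``host_bind_override`` might
be defined in ``openstack_user_config.yml`` as sketched below; the interface
name, VLAN range, and physnet name are placeholders for this example:

.. code-block:: yaml

   # Sketch only: host_bind_override points the LinuxBridge agent at bond1
   # for flat/vlan traffic instead of a host bridge. Adjust names and ranges.
   provider_networks:
     - network:
         container_type: "veth"
         container_interface: "eth12"
         host_bind_override: "bond1"
         type: "vlan"
         range: "101:200"
         net_name: "physnet1"
         group_binds:
           - neutron_linuxbridge_agent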
The ``network_interface`` override is used for Open vSwitch and OVN-based deployments,
and requires a physical interface name which will be connected to the provider bridge
(e.g. ``br-ex``) for flat and vlan-based provider and tenant network traffic.
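
A comparable sketch for an Open vSwitch or OVN deployment uses
``network_interface`` to connect the physical bond to the provider bridge;
again, the names and ranges below are assumptions, and ``group_binds`` should
reference whichever agent group your deployment actually uses:

.. code-block:: yaml

   # Sketch only: network_interface attaches bond1 to the provider bridge
   # managed by Open vSwitch. Names, ranges, and the agent group are examples.
   provider_networks:
     - network:
         container_type: "veth"
         container_interface: "eth12"
         network_interface: "bond1"
         type: "vlan"
         range: "101:200"
         net_name: "physnet1"
         group_binds:
           - neutron_openvswitch_agent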

.. note::

   Previous versions of OpenStack-Ansible utilized a bridge named ``br-vlan``
   for flat and vlan-based provider and tenant network traffic. The ``br-vlan``
   bridge is a leftover of containerized Neutron agents and is no longer
   useful or recommended.

The following diagrams reflect the differences in the virtual network layout for
supported network architectures.

LinuxBridge
...........

.. note::

   The ML2/LinuxBridge (LXB) mechanism driver is marked as "experimental"
   as of the Zed release of OpenStack-Ansible.

Networking Node
***************

.. image:: ../figures/networking-linuxbridge-nn.drawio.png
   :align: center

Compute Node
************

.. image:: ../figures/networking-linuxbridge-cn.drawio.png
   :align: center

Open vSwitch (OVS)
..................

Networking Node
***************

.. image:: ../figures/networking-openvswitch-nn.drawio.png
   :align: center

Compute Node
************

.. image:: ../figures/networking-openvswitch-cn.drawio.png
   :align: center

Open Virtual Network (OVN)
..........................

.. note::

   The ML2/OVN mechanism driver is deployed by default
   as of the Zed release of OpenStack-Ansible.

Networking Node
***************

.. image:: ../figures/networking-ovn-nn.drawio.png
   :align: center

Compute Node
************

.. image:: ../figures/networking-ovn-cn.drawio.png
   :align: center

service-arch.rst
storage-arch.rst
container-networking.rst
metal-networking.rst

.. _metal-networking:
Metal networking
====================
OpenStack-Ansible supports deploying OpenStack and related services on "metal"
as well as inside LXC containers. Python virtual environments (venvs) provide
OpenStack service and Python library segregation, while other services such
as Galera and RabbitMQ are co-located on the host. All services in this
deployment model share the *same* IP address.
This appendix describes how the interfaces are connected and how traffic flows.
For more information about how the OpenStack Networking service (Neutron) uses
the interfaces for instance traffic, please see the
`OpenStack Networking Guide`_.
.. _OpenStack Networking Guide: https://docs.openstack.org/neutron/latest/admin/index.html
For details on the configuration of networking for your
environment, please have a look at :ref:`openstack-user-config-reference`.
Physical host interfaces
~~~~~~~~~~~~~~~~~~~~~~~~
In a typical production environment, physical network interfaces are combined
in bonded pairs for better redundancy and throughput. Avoid using two ports on
the same multiport network card for the same bonded interface, because a
network card failure affects both of the physical network interfaces used by
the bond. Multiple bonded interfaces (e.g. ``bond0``, ``bond1``) can be used to
segregate traffic, if desired. Single (bonded) interfaces are also a supported
configuration, but will require the use of VLAN subinterfaces.
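
As a non-authoritative sketch (assuming an Ubuntu host managed with netplan;
interface names, VLAN IDs, and addresses are placeholders), a single LACP bond
carrying VLAN subinterfaces for the host bridges could look like this:

.. code-block:: yaml

   # Hypothetical netplan layout: one bond, one VLAN subinterface per OpenStack
   # network, and host bridges on top. Adjust names, IDs, and addressing.
   network:
     version: 2
     ethernets:
       enp1s0: {}
       enp2s0: {}
     bonds:
       bond0:
         interfaces: [enp1s0, enp2s0]
         parameters:
           mode: 802.3ad
     vlans:
       bond0.10:
         id: 10
         link: bond0
       bond0.30:
         id: 30
         link: bond0
     bridges:
       br-mgmt:
         interfaces: [bond0.10]
         addresses: [172.29.236.11/22]
       br-vxlan:
         interfaces: [bond0.30]
         addresses: [172.29.240.11/22]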

Linux Bridges/Switches
~~~~~~~~~~~~~~~~~~~~~~

The combination of containers and flexible deployment options requires
implementation of advanced Linux networking features, such as bridges,
switches, and namespaces.

* Bridges provide layer 2 connectivity (similar to physical switches) among
  physical, logical, and virtual network interfaces within a host. After
  a bridge/switch is created, the network interfaces are virtually plugged
  into it.

  OpenStack-Ansible uses Linux bridges for control plane connections to LXC
  containers, and can use Linux bridges or Open vSwitch-based bridges for
  data plane connections that connect virtual machine instances to the
  physical network infrastructure.

* Network namespaces provide logically separate layer 3 environments (similar
  to VRFs) within a host. Namespaces use virtual interfaces to connect
  with other namespaces, including the host namespace. These interfaces,
  often called ``veth`` pairs, are virtually plugged in between
  namespaces similar to patch cables connecting physical devices such as
  switches and routers.

Network diagrams
~~~~~~~~~~~~~~~~
Hosts with services running on metal
------------------------------------
The following diagram shows how all of the interfaces and bridges interconnect
to provide network connectivity to the OpenStack deployment:

.. image:: ../figures/networkarch-metal-external.drawio.png

Neutron traffic
---------------
Common reference drivers, including ML2/LXB, ML2/OVS, and ML2/OVN, and their
respective agents, are responsible for managing the virtual networking
infrastructure on each node. OpenStack-Ansible refers to Neutron traffic
as "data plane" traffic, which can consist of flat, VLAN, or overlay
technologies such as VXLAN and GENEVE.
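
For example, a VXLAN overlay network is typically declared as a provider
network in ``openstack_user_config.yml``. The sketch below assumes common
bridge, queue, and group names and is not a complete configuration:

.. code-block:: yaml

   # Sketch of a VXLAN overlay definition; values are illustrative and the
   # group_binds entry must match the ML2 agent actually deployed.
   provider_networks:
     - network:
         container_bridge: "br-vxlan"
         container_type: "veth"
         container_interface: "eth10"
         ip_from_q: "tunnel"
         type: "vxlan"
         range: "1:1000"
         net_name: "vxlan"
         group_binds:
           - neutron_linuxbridge_agent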
Neutron agents can be deployed across a variety of hosts, but are typically
limited to dedicated network hosts or infrastructure hosts (controller nodes).
Neutron agents are deployed "on metal" and not within an LXC container. Neutron
typically requires the operator to define "provider bridge mappings", which map
a provider network name to a physical interface. These provider bridge mappings
provide flexibility and abstract physical interface names when creating provider
networks.

**LinuxBridge Example**:

.. code-block:: ini

   bridge_mappings = physnet1:bond1

**Open vSwitch/OVN Example**:

.. code-block:: ini

   bridge_mappings = physnet1:br-ex

OpenStack-Ansible provides two overrides when defining provider networks that
can be used for creating the mappings and, in some cases, connecting the
physical interfaces to provider bridges:

- ``host_bind_override``
- ``network_interface``

The ``host_bind_override`` override is used for LinuxBridge-based deployments,
and requires a physical interface name which will then be used by the
LinuxBridge agent for flat and vlan-based provider and tenant network traffic.
The ``network_interface`` override is used for Open vSwitch and OVN-based deployments,
and requires a physical interface name which will be connected to the provider bridge
(e.g. ``br-ex``) for flat and vlan-based provider and tenant network traffic.
The following diagrams reflect the differences in the virtual network layout for
supported network architectures.

LinuxBridge
...........

.. note::

   The ML2/LinuxBridge (LXB) mechanism driver is marked as "experimental"
   as of the Zed release of OpenStack-Ansible.

Networking Node
***************

.. image:: ../figures/networking-linuxbridge-nn.drawio.png
   :align: center

Compute Node
************

.. image:: ../figures/networking-linuxbridge-cn.drawio.png
   :align: center

Open vSwitch (OVS)
..................

Networking Node
***************

.. image:: ../figures/networking-openvswitch-nn.drawio.png
   :align: center

Compute Node
************

.. image:: ../figures/networking-openvswitch-cn.drawio.png
   :align: center

Open Virtual Network (OVN)
..........................

.. note::

   The ML2/OVN mechanism driver is deployed by default
   as of the Zed release of OpenStack-Ansible.

Networking Node
***************

.. image:: ../figures/networking-ovn-nn.drawio.png
   :align: center

Compute Node
************

.. image:: ../figures/networking-ovn-cn.drawio.png
   :align: center
