docs/doc/source/planning/openstack/ethernet-interface-configuration.rst
Ron Stone 3143d86b69 Openstack planning

Updates for patchset 2 review comments:

  • Changed link depth of main Planning index and added some narrative guidance
  • Added planning/openstack as sibling of planning/kubernetes
  • Related additions to abbrevs.txt
  • Added max-workers substitution to accommodate StarlingX/vendor variants

Signed-off-by: Ron Stone <ronald.stone@windriver.com>
Change-Id: Ibff9af74ab3f2c00958eff0e33c91465f1dab6b4
2021-01-25 08:36:47 -05:00


Ethernet Interface Configuration

You can review and modify the configuration for physical or virtual Ethernet interfaces using the OpenStack Horizon Web interface or the CLI.

Physical Ethernet Interfaces

The physical Ethernet interfaces on nodes are configured to use the following networks:

  • the internal management network
  • the internal cluster host network (by default sharing the same L2 interface as the internal management network)
  • the external network
  • one or more data networks

A single interface can optionally be configured to support more than one network using tagging (see :ref:`Shared (VLAN or Multi-Netted) Ethernet Interfaces <network-planning-shared-vlan-or-multi-netted-ethernet-interfaces>`).
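As a conceptual illustration of such tagging, a single physical interface can carry multiple networks as VLAN sub-interfaces. The Linux commands below are a sketch only; the interface name and VLAN IDs are example values, and on a deployed system interface provisioning is performed through the platform CLI rather than directly with ``ip``.

```shell
# Carry two tagged networks over one physical interface (enp0s8 is an
# example name; VLAN IDs 10 and 20 are example values).
ip link add link enp0s8 name enp0s8.10 type vlan id 10   # e.g. management
ip link add link enp0s8 name enp0s8.20 type vlan id 20   # e.g. cluster host
ip link set enp0s8.10 up
ip link set enp0s8.20 up
```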

Virtual Ethernet Interfaces

The virtual Ethernet interfaces for guests are defined when an instance is launched. They connect the instance to project networks, which are virtual networks defined over data networks, which in turn are abstractions associated with physical interfaces assigned to physical networks on the compute nodes.

The following virtual network interfaces are available:

  • ne2k_pci (NE2000 Emulation)
  • pcnet (AMD PCnet Emulation)
  • rtl8139 (Realtek 8139 Emulation)
  • virtio (VirtIO Network)
  • pci-passthrough (PCI Passthrough Device)
  • pci-sriov (SR-IOV device)

Unmodified guests can use Linux networking and virtio drivers. This provides a mechanism to bring existing applications into the production environment immediately.
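For example, with the standard OpenStack CLI the vNIC model used for instances booted from a given image can be selected through the ``hw_vif_model`` image property; the image name below is an illustrative assumption, not taken from this document.

```shell
# Request the virtio vNIC model for instances booted from this image.
# "guest-img" is an example image name.
openstack image set --property hw_vif_model=virtio guest-img
```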

The platform incorporates Accelerated Neutron Virtual Router L3 Forwarding (AVR). Accelerated forwarding is used for directly attached project networks and subnets, as well as for gateway and floating IP functionality.

The platform also supports direct guest access to physical NICs using PCI passthrough or SR-IOV, with enhanced scheduling options compared to standard OpenStack. This offers very high performance, but because access is not managed by the platform or the vSwitch process, there is no support for live migration, host interface monitoring, or ACLs. Tagging, if used on these interfaces, must be managed by the guests.
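With the standard OpenStack CLI, an SR-IOV interface is typically requested by creating a port with ``--vnic-type direct`` and attaching it at launch. The network, port, flavor, image, and server names below are illustrative assumptions:

```shell
# Create a port backed by an SR-IOV virtual function on a data network.
openstack port create --network datanet0 --vnic-type direct sriov-port0

# Launch an instance with the SR-IOV port attached.
openstack server create --flavor m1.large --image guest-img \
    --nic port-id=sriov-port0 sriov-vm0
```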

For further performance improvements, the platform supports direct access to SR-IOV-based hardware accelerators, such as the Coleto Creek encryption accelerator from Intel. The platform manages the allocation of virtual functions (VFs) to instances, and provides intelligent scheduling to optimize node affinity.
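In standard OpenStack, guest access to such a device is usually requested through a PCI alias property on the flavor. The alias name ``qat`` and the flavor name below are assumptions for illustration; the alias itself must first be defined by the operator in the Nova configuration.

```shell
# Request one device from the operator-defined "qat" PCI alias for
# every instance booted with this flavor.
openstack flavor set m1.crypto --property pci_passthrough:alias=qat:1
```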
