Merge "Cleaning up networking documentation"

This commit is contained in:
Jenkins 2016-01-20 17:16:08 +00:00 committed by Gerrit Code Review
commit 9923612370
5 changed files with 148 additions and 182 deletions

View File

@ -1,5 +1,7 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
.. _network_configuration:
Configuring target host networking
----------------------------------

View File

@ -1,123 +0,0 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Host networking
---------------
The combination of containers and flexible deployment options requires
implementation of advanced Linux networking features such as bridges and
namespaces.
*Bridges* provide layer 2 connectivity (similar to switches) among
physical, logical, and virtual network interfaces within a host. After
creating a bridge, the network interfaces are virtually "plugged in" to
it.
OSA uses bridges to connect physical and logical network interfaces
on the host to virtual network interfaces within containers.
*Namespaces* provide logically separate layer 3 environments (similar to
routers) within a host. Namespaces use virtual interfaces to connect
with other namespaces including the host namespace. These interfaces,
often called ``veth`` pairs, are virtually "plugged in" between
namespaces similar to patch cables connecting physical devices such as
switches and routers.
Each container has a namespace that connects to the host namespace with
one or more ``veth`` pairs. Unless specified, the system generates
random names for ``veth`` pairs.
The relationship between physical interfaces, logical interfaces,
bridges, and virtual interfaces within containers is shown in
`Figure 1.2, "Network
components" <overview-hostnetworking.html#fig_overview_networkcomponents>`_.
**Figure 1.2. Network components**
.. image:: figures/networkcomponents.png
Target hosts can contain the following network bridges:
- LXC internal ``lxcbr0``:
- Mandatory (automatic).
- Provides external (typically internet) connectivity to containers.
- Automatically created and managed by LXC. Does not directly attach
to any physical or logical interfaces on the host because iptables
handle connectivity. Attaches to ``eth0`` in each container.
- Container management ``br-mgmt``:
- Mandatory.
- Provides management of and communication among infrastructure and
OpenStack services.
- Manually created and attaches to a physical or logical interface,
typically a ``bond0`` VLAN subinterface. Also attaches to ``eth1``
in each container.
- Storage ``br-storage``:
- Optional.
- Provides segregated access to block storage devices between
Compute and Block Storage hosts.
- Manually created and attaches to a physical or logical interface,
typically a ``bond0`` VLAN subinterface. Also attaches to ``eth2``
in each associated container.
- OpenStack Networking tunnel/overlay ``br-vxlan``:
- Mandatory.
- Provides infrastructure for VXLAN tunnel/overlay networks.
- Manually created and attaches to a physical or logical interface,
typically a ``bond1`` VLAN subinterface. Also attaches to
``eth10`` in each associated container.
- OpenStack Networking provider ``br-vlan``:
- Mandatory.
- Provides infrastructure for VLAN networks.
- Manually created and attaches to a physical or logical interface,
typically ``bond1``. Attaches to ``eth11`` for vlan type networks
in each associated container. It does not contain an IP address because
it only handles layer 2 connectivity. This interface can support flat
networks as well, though additional bridge configuration will be needed.
For more information, see the `network configuration <configure-networking.html>`_ section.
`Figure 1.3, "Container network
architecture" <overview-hostnetworking.html#fig_overview_networkarch-container>`_
provides a visual representation of network components for services in
containers.
**Figure 1.3. Container network architecture**
.. image:: figures/networkarch-container-external.png
By default, OSA installs the Compute service in a bare metal
environment rather than within a container. `Figure 1.4, "Bare/Metal
network
architecture" <overview-hostnetworking.html#fig_overview_networkarch-bare>`_
provides a visual representation of the unique layout of network
components on a Compute host.
**Figure 1.4. Bare/Metal network architecture**
.. image:: figures/networkarch-bare-external.png
--------------
.. include:: navigation.txt

View File

@ -1,32 +0,0 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
OpenStack Networking
--------------------
OpenStack Networking (neutron) is configured to use a DHCP agent, an L3
agent, and a Linux Bridge agent within a networking agents container.
`Figure 1.5, "Networking agents
containers" <overview-neutron.html#fig_overview_neutron-agents>`_
shows the interaction of these agents, network components, and
connection to a physical network.
**Figure 1.5. Networking agents containers**
.. image:: figures/networking-neutronagents.png
The Compute service uses the KVM hypervisor. `Figure 1.6, "Compute
hosts" <overview-neutron.html#fig_overview_neutron-compute>`_ shows
the interaction of instances, Linux Bridge agent, network components,
and connection to a physical network.
**Figure 1.6. Compute hosts**
.. image:: figures/networking-compute.png
--------------
.. include:: navigation.txt

View File

@ -7,8 +7,6 @@ Chapter 1. Overview
overview-osa.rst
overview-hostlayout.rst
overview-hostnetworking.rst
overview-neutron.rst
overview-requirements.rst
overview-workflow.rst
overview-security.rst

View File

@ -3,6 +3,12 @@
Configuring the network
-----------------------
This documentation section describes a recommended reference architecture.
Some components are mandatory, such as the bridges described below. Other
components, such as the bonded network interfaces, are not required but are
strongly recommended. Deployers are urged to follow the reference design as
closely as possible for production deployments.
Although Ansible automates most deployment operations, networking on
target hosts requires manual configuration because it can vary
dramatically per environment. For demonstration purposes, these
@ -10,39 +16,154 @@ instructions use a reference architecture with example network interface
names, networks, and IP addresses. Modify these values as needed for the
particular environment.
The reference architecture for target hosts contains the following
mandatory components:

- A ``bond0`` interface using two physical interfaces. For redundancy
  purposes, avoid using more than one port on network interface cards
  containing multiple ports. The example configuration uses ``eth0``
  and ``eth2``. Actual interface names can vary depending on hardware
  and drivers. Configure the ``bond0`` interface with a static IP
  address on the host management network.
- A ``bond1`` interface using two physical interfaces. For redundancy
  purposes, avoid using more than one port on network interface cards
  containing multiple ports. The example configuration uses ``eth1``
  and ``eth3``. Actual interface names can vary depending on hardware
  and drivers. Configure the ``bond1`` interface without an IP address.
- Container management network subinterface on the ``bond0`` interface
  and ``br-mgmt`` bridge with a static IP address.
- The OpenStack Networking VXLAN subinterface on the ``bond1``
  interface and ``br-vxlan`` bridge with a static IP address.
- The OpenStack Networking VLAN ``br-vlan`` bridge on the ``bond1``
  interface without an IP address.

The reference architecture for target hosts can also contain the
following optional components:

- Storage network subinterface on the ``bond0`` interface and
  ``br-storage`` bridge with a static IP address.

Bonded network interfaces
~~~~~~~~~~~~~~~~~~~~~~~~~

The reference architecture includes bonded network interfaces, which
use multiple physical network interfaces for better redundancy and
throughput. Avoid using two ports on the same multi-port network card
for the same bonded interface, since a network card failure would affect
both physical network interfaces used by the bond.

The ``bond0`` interface carries the traffic from the containers that
run the OpenStack infrastructure. Configure a static IP address on the
``bond0`` interface from your management network.

The ``bond1`` interface carries the traffic from your virtual machines.
Do not configure a static IP address on this interface, because neutron
uses this bond to handle VLAN and VXLAN networks for virtual machines.

Additional bridge networks are required for OpenStack-Ansible, and those
bridges connect to these two bonded network interfaces. See the following
section for the bridge configuration.
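Before moving on to the bridges, the following sketch shows, for illustration
only, how a bond such as ``bond0`` could be assembled by hand with
``iproute2``. The interface names, bonding mode, and IP address are
assumptions; in practice this configuration belongs in the host's persistent
network configuration files.

.. code-block:: bash

   # Illustration only: create bond0 from two example physical ports.
   ip link add bond0 type bond mode 802.3ad
   ip link set eth0 down
   ip link set eth0 master bond0
   ip link set eth2 down
   ip link set eth2 master bond0
   ip link set bond0 up

   # Assign an example static address from the management network.
   ip addr add 172.29.236.10/22 dev bond0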
Adding bridges
~~~~~~~~~~~~~~

The combination of containers and flexible deployment options requires
implementation of advanced Linux networking features such as bridges and
namespaces.

*Bridges* provide layer 2 connectivity (similar to switches) among
physical, logical, and virtual network interfaces within a host. After
creating a bridge, the network interfaces are virtually "plugged in" to
it.
OpenStack-Ansible uses bridges to connect physical and logical network
interfaces on the host to virtual network interfaces within containers.
*Namespaces* provide logically separate layer 3 environments (similar to
routers) within a host. Namespaces use virtual interfaces to connect
with other namespaces, including the host namespace. These interfaces,
often called ``veth`` pairs, are virtually "plugged in" between
namespaces similar to patch cables connecting physical devices such as
switches and routers.
Each container has a namespace that connects to the host namespace with
one or more ``veth`` pairs. Unless specified, the system generates
random names for ``veth`` pairs.
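As a generic illustration of these concepts (not an OpenStack-Ansible command
sequence; all names here are made up), a namespace and a ``veth`` pair can be
created by hand as follows:

.. code-block:: bash

   # Create a network namespace and a veth pair; all names are examples.
   ip netns add demo
   ip link add veth-host type veth peer name veth-demo

   # Move one end of the pair into the namespace, like plugging in a
   # patch cable between the two environments.
   ip link set veth-demo netns demo

   # Bring both ends up so traffic can pass between the namespaces.
   ip link set veth-host up
   ip netns exec demo ip link set veth-demo up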
The following image demonstrates how the container network interfaces are
connected to the host's bridges and to the host's physical network interfaces:
.. image:: figures/networkcomponents.png
Target hosts can contain the following network bridges (a manual ``br-mgmt``
example follows the list):
- LXC internal ``lxcbr0``:
- This bridge is **required**, but LXC will configure it automatically.
- Provides external (typically internet) connectivity to containers.
- This bridge does not directly attach to any physical or logical
interfaces on the host because iptables handles connectivity. It
attaches to ``eth0`` in each container, but the container network
interface is configurable in ``openstack_user_config.yml`` in the
``provider_networks`` dictionary.
- Container management ``br-mgmt``:
- This bridge is **required**.
- Provides management of and communication among infrastructure and
OpenStack services.
- Manually created and attaches to a physical or logical interface,
typically a ``bond0`` VLAN subinterface. Also attaches to ``eth1``
in each container. As mentioned earlier, the container network interface
is configurable in ``openstack_user_config.yml``.
- Storage ``br-storage``:
- This bridge is *optional*, but recommended.
- Provides segregated access to block storage devices between
Compute and Block Storage hosts.
- Manually created and attaches to a physical or logical interface,
typically a ``bond0`` VLAN subinterface. Also attaches to ``eth2``
in each associated container. As mentioned earlier, the container network
interface is configurable in ``openstack_user_config.yml``.
- OpenStack Networking tunnel/overlay ``br-vxlan``:
- This bridge is **required**.
- Provides infrastructure for VXLAN tunnel/overlay networks.
- Manually created and attaches to a physical or logical interface,
typically a ``bond1`` VLAN subinterface. Also attaches to
``eth10`` in each associated container. As mentioned earlier, the
container network interface is configurable in
``openstack_user_config.yml``.
- OpenStack Networking provider ``br-vlan``:
- This bridge is **required**.
- Provides infrastructure for VLAN networks.
- Manually created and attaches to a physical or logical interface,
typically ``bond1``. Attaches to ``eth11`` for VLAN type networks
in each associated container. It does not contain an IP address because
it only handles layer 2 connectivity. As mentioned earlier, the
container network interface is configurable in
``openstack_user_config.yml``.
- This interface can support flat networks as well, though additional
bridge configuration will be needed. More details are available here:
:ref:`network_configuration`.
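As a concrete sketch of the ``br-mgmt`` entry above, the following commands
create the bridge on a ``bond0`` VLAN subinterface with ``iproute2``. The VLAN
ID is an assumption for illustration; persistent settings normally live in the
host's network configuration files, and the remaining bridges follow the same
pattern on their respective bonds.

.. code-block:: bash

   # Illustration only: create a VLAN subinterface on bond0
   # (the VLAN ID 10 is an example value).
   ip link add link bond0 name bond0.10 type vlan id 10

   # Create the management bridge and attach the subinterface to it.
   ip link add name br-mgmt type bridge
   ip link set bond0.10 master br-mgmt

   # Bring both interfaces up.
   ip link set bond0.10 up
   ip link set br-mgmt up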
Network diagrams
~~~~~~~~~~~~~~~~
The following image shows how all of the interfaces and bridges interconnect
to provide network connectivity to the OpenStack deployment:
.. image:: figures/networkarch-container-external.png
OpenStack-Ansible deploys the compute service on the physical host rather than
in a container. The following image shows how the bridges are used for
network connectivity:
.. image:: figures/networkarch-bare-external.png
The following image shows how the neutron agents work with the ``br-vlan`` and
``br-vxlan`` bridges. As a reminder, OpenStack Networking (neutron) is
configured to use a DHCP agent, an L3 agent, and a Linux Bridge agent within a
``networking-agents`` container. The image also shows how the DHCP agents
provide information (IP addresses and DNS servers) to the instances, and how
routing works:
.. image:: figures/networking-neutronagents.png
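To confirm which of these agents are running after the deployment completes, a
quick check might look like the following (assuming admin credentials are
sourced; the exact output varies by release):

.. code-block:: bash

   # List the neutron agents (DHCP, L3, Linux Bridge) and where they run.
   neutron agent-list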
The following image shows how virtual machines connect to the ``br-vlan`` and
``br-vxlan`` bridges and send traffic to the network outside the host:
.. image:: figures/networking-compute.png
For more information, see `OpenStack-Ansible
Networking <https://github.com/openstack/openstack-ansible/blob/master/etc/network/README.rst>`_.
--------------