Support cidr_networks in L3 network environments

In some environments, a single container, storage, or tunnel network may not be applicable to every host. Each configured provider network would need to be limited to a particular subset of hosts, and the host var keys within the inventory for container_address, storage_address, and tunnel_address need to be maintained because various playbooks specifically require them.

This change adds two new options for configuring provider_networks:

* 'reference_group': the name of a group that a host must be a member of, in addition to any of the groups listed in 'group_binds', for the network to be applied.
* 'address_prefix': overrides the name of the key created for each IP address allocated from a cidr_network. By default, this key is named '<cidr_network>_address', where '<cidr_network>' is the 'ip_from_q' option given for a provider network.

Closes-Bug: 1650356
Change-Id: Ia7f3119f0affc4fb6be97ca788ca3b46096b82a8
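As a minimal sketch of the two new options (condensed from the pod example file added by this commit), a provider network entry in ``openstack_user_config.yml`` would look like:

```yaml
provider_networks:
  - network:
      container_bridge: "br-mgmt"
      container_type: "veth"
      container_interface: "eth1"
      ip_from_q: "pod1_container"
      # Hosts get 'container_address' instead of 'pod1_container_address'
      address_prefix: "container"
      # Applied only to hosts that are also members of 'pod1_hosts'
      reference_group: "pod1_hosts"
      type: "raw"
      group_binds:
        - all_containers
        - hosts
```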
This commit is contained in:
parent e5ec0cb0b8
commit 9cd42929c3
@@ -1,5 +1,5 @@
 ==================================
-Appendix G: Advanced configuration
+Appendix H: Advanced configuration
 ==================================

 .. TODO: include intro on what advanced configuration is, whether it’s required
@@ -1,5 +1,5 @@
 ====================================
-Appendix H: Ceph-Ansible integration
+Appendix I: Ceph-Ansible integration
 ====================================

 OpenStack-Ansible allows `Ceph storage <https://ceph.com>`_ cluster integration
159
deploy-guide/source/app-config-pod.rst
Normal file
@@ -0,0 +1,159 @@
.. _pod-environment-config:

============================================================
Appendix C: Example layer 3 routed environment configuration
============================================================

Introduction
~~~~~~~~~~~~

This appendix describes an example production environment for a working
OpenStack-Ansible (OSA) deployment with high availability services where
provider networks and connectivity between physical machines are routed
(layer 3).

This example environment has the following characteristics:

* Three infrastructure (control plane) hosts
* Two compute hosts
* One NFS storage device
* One log aggregation host
* Multiple Network Interface Cards (NIC) configured as bonded pairs for each
  host
* Full compute kit with the Telemetry service (ceilometer) included,
  with NFS configured as a storage backend for the Image (glance) and Block
  Storage (cinder) services
* Static routes are added to allow communication between the Management,
  Tunnel, and Storage Networks of each pod. The gateway address is the first
  usable address within each network's subnet.

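The "first usable address" convention above is easy to verify with Python's ``ipaddress`` module (illustrative only; not part of the deployment tooling):

```python
import ipaddress

def pod_gateway(cidr):
    """Return the first usable host address in a subnet, which this
    environment uses as the gateway for each pod network."""
    return str(next(ipaddress.ip_network(cidr).hosts()))

print(pod_gateway("172.29.236.0/24"))  # -> 172.29.236.1
```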
.. image:: figures/arch-layout-production.png
   :width: 100%

Network configuration
~~~~~~~~~~~~~~~~~~~~~

Network CIDR/VLAN assignments
-----------------------------

The following CIDR assignments are used for this environment.

+------------------------------+-----------------+------+
| Network                      | CIDR            | VLAN |
+==============================+=================+======+
| POD 1 Management Network     | 172.29.236.0/24 | 10   |
+------------------------------+-----------------+------+
| POD 1 Tunnel (VXLAN) Network | 172.29.237.0/24 | 30   |
+------------------------------+-----------------+------+
| POD 1 Storage Network        | 172.29.238.0/24 | 20   |
+------------------------------+-----------------+------+
| POD 2 Management Network     | 172.29.239.0/24 | 10   |
+------------------------------+-----------------+------+
| POD 2 Tunnel (VXLAN) Network | 172.29.240.0/24 | 30   |
+------------------------------+-----------------+------+
| POD 2 Storage Network        | 172.29.241.0/24 | 20   |
+------------------------------+-----------------+------+
| POD 3 Management Network     | 172.29.242.0/24 | 10   |
+------------------------------+-----------------+------+
| POD 3 Tunnel (VXLAN) Network | 172.29.243.0/24 | 30   |
+------------------------------+-----------------+------+
| POD 3 Storage Network        | 172.29.244.0/24 | 20   |
+------------------------------+-----------------+------+
| POD 4 Management Network     | 172.29.245.0/24 | 10   |
+------------------------------+-----------------+------+
| POD 4 Tunnel (VXLAN) Network | 172.29.246.0/24 | 30   |
+------------------------------+-----------------+------+
| POD 4 Storage Network        | 172.29.247.0/24 | 20   |
+------------------------------+-----------------+------+

IP assignments
--------------

The following host name and IP address assignments are used for this
environment.

+------------------+----------------+-------------------+----------------+
| Host name        | Management IP  | Tunnel (VxLAN) IP | Storage IP     |
+==================+================+===================+================+
| lb_vip_address   | 172.29.236.9   |                   |                |
+------------------+----------------+-------------------+----------------+
| infra1           | 172.29.236.10  |                   |                |
+------------------+----------------+-------------------+----------------+
| infra2           | 172.29.239.10  |                   |                |
+------------------+----------------+-------------------+----------------+
| infra3           | 172.29.242.10  |                   |                |
+------------------+----------------+-------------------+----------------+
| log1             | 172.29.236.11  |                   |                |
+------------------+----------------+-------------------+----------------+
| NFS Storage      |                |                   | 172.29.244.15  |
+------------------+----------------+-------------------+----------------+
| compute1         | 172.29.245.10  | 172.29.246.10     | 172.29.247.10  |
+------------------+----------------+-------------------+----------------+
| compute2         | 172.29.245.11  | 172.29.246.11     | 172.29.247.11  |
+------------------+----------------+-------------------+----------------+

Host network configuration
--------------------------

Each host will require the correct network bridges to be implemented. The
following is the ``/etc/network/interfaces`` file for ``infra1``.

.. note::

   If your environment does not have ``eth0``, but instead has ``p1p1`` or
   some other interface name, ensure that all references to ``eth0`` in all
   configuration files are replaced with the appropriate name. The same
   applies to additional network interfaces.

.. literalinclude:: ../../etc/network/interfaces.d/openstack_interface.cfg.pod.example

Deployment configuration
~~~~~~~~~~~~~~~~~~~~~~~~

Environment layout
------------------

The ``/etc/openstack_deploy/openstack_user_config.yml`` file defines the
environment layout.

For each pod, a group must be defined containing all hosts within that
pod.

Within defined provider networks, ``address_prefix`` is used to override the
prefix of the key added to each host that contains IP address information. This
should usually be one of ``container``, ``tunnel``, or ``storage``.
``reference_group`` contains the name of a defined pod group and is used to
limit the scope of each provider network to that group.

Static routes are added to allow communication of provider networks between
pods.

The following configuration describes the layout for this environment.

.. literalinclude:: ../../etc/openstack_deploy/openstack_user_config.yml.pod.example

Environment customizations
--------------------------

The optionally deployed files in ``/etc/openstack_deploy/env.d`` allow the
customization of Ansible groups. This allows the deployer to set whether
the services will run in a container (the default), or on the host (on
metal).

For this environment, the ``cinder-volume`` service runs in a container on the
infrastructure hosts. To achieve this, implement
``/etc/openstack_deploy/env.d/cinder.yml`` with the following content:

.. literalinclude:: ../../etc/openstack_deploy/env.d/cinder-volume.yml.container.example

User variables
--------------

The ``/etc/openstack_deploy/user_variables.yml`` file defines the global
overrides for the default variables.

For this environment, implement the load balancer on the infrastructure
hosts. Ensure that keepalived is also configured with HAProxy in
``/etc/openstack_deploy/user_variables.yml`` with the following content.

.. literalinclude:: ../../etc/openstack_deploy/user_variables.yml.prod.example
@@ -1,5 +1,5 @@
 ================================================
-Appendix C: Customizing host and service layouts
+Appendix D: Customizing host and service layouts
 ================================================

 The default layout of containers and services in OpenStack-Ansible (OSA) is
@@ -1,7 +1,7 @@
 .. _limited-connectivity-appendix:

 ================================================
-Appendix F: Installing with limited connectivity
+Appendix G: Installing with limited connectivity
 ================================================

 Many playbooks and roles in OpenStack-Ansible retrieve dependencies from the
@@ -1,7 +1,7 @@
 .. _network-appendix:

 ================================
-Appendix E: Container networking
+Appendix F: Container networking
 ================================

 OpenStack-Ansible deploys Linux containers (LXC) and uses Linux
@@ -1,5 +1,5 @@
 ================================
-Appendix I: Additional resources
+Appendix J: Additional resources
 ================================

 Ansible resources:
@@ -1,5 +1,5 @@
 ====================
-Appendix D: Security
+Appendix E: Security
 ====================

 Security is one of the top priorities within OpenStack-Ansible (OSA), and many
@@ -8,7 +8,7 @@ default. This appendix provides a detailed overview of the most important
 security enhancements.

 For more information about configuring security, see
-:deploy_guide:`Appendix G <app-advanced-config-options.html>`.
+:deploy_guide:`Appendix H <app-advanced-config-options.html>`.

 .. note::
@@ -7,6 +7,7 @@ Appendices

    app-config-test.rst
    app-config-prod.rst
+   app-config-pod.rst
    app-custom-layouts.rst
    app-security.rst
    app-networking.rst
@@ -50,8 +50,8 @@ these services include databases, Memcached, and RabbitMQ. Several other
 host types contain other types of containers, and all of these are listed
 in the ``openstack_user_config.yml`` file.

-For examples, please see :ref:`test-environment-config` and
-:ref:`production-environment-config`.
+For examples, please see :ref:`test-environment-config`,
+:ref:`production-environment-config`, and :ref:`pod-environment-config`.

 For details about how the inventory is generated from the environment
 configuration, see
157
etc/network/interfaces.d/openstack_interface.cfg.pod.example
Normal file
@@ -0,0 +1,157 @@
# This is a multi-NIC bonded configuration to implement the required bridges
# for OpenStack-Ansible. This illustrates the configuration of the first
# Infrastructure host, and the IP addresses assigned should be adapted
# for implementation on the other hosts.
#
# After implementing this configuration, the host will need to be
# rebooted.

# Assuming that eth0/1 and eth2/3 are dual port NICs, we pair
# eth0 with eth2 and eth1 with eth3 for increased resiliency
# in the case of one interface card failing.
auto eth0
iface eth0 inet manual
    bond-master bond0
    bond-primary eth0

auto eth1
iface eth1 inet manual
    bond-master bond1
    bond-primary eth1

auto eth2
iface eth2 inet manual
    bond-master bond0

auto eth3
iface eth3 inet manual
    bond-master bond1

# Create a bonded interface. Note that "bond-slaves" is set to none. This
# is because the bond-master has already been set in the raw interfaces for
# the new bond0.
auto bond0
iface bond0 inet manual
    bond-slaves none
    bond-mode active-backup
    bond-miimon 100
    bond-downdelay 200
    bond-updelay 200

# This bond will carry VLAN and VXLAN traffic to ensure isolation from
# control plane traffic on bond0.
auto bond1
iface bond1 inet manual
    bond-slaves none
    bond-mode active-backup
    bond-miimon 100
    bond-downdelay 250
    bond-updelay 250

# Container/Host management VLAN interface
auto bond0.10
iface bond0.10 inet manual
    vlan-raw-device bond0

# OpenStack Networking VXLAN (tunnel/overlay) VLAN interface
auto bond1.30
iface bond1.30 inet manual
    vlan-raw-device bond1

# Storage network VLAN interface (optional)
auto bond0.20
iface bond0.20 inet manual
    vlan-raw-device bond0

# Container/Host management bridge
auto br-mgmt
iface br-mgmt inet static
    bridge_stp off
    bridge_waitport 0
    bridge_fd 0
    bridge_ports bond0.10
    address 172.29.236.10
    netmask 255.255.255.0
    gateway 172.29.236.1
    dns-nameservers 8.8.8.8 8.8.4.4

# OpenStack Networking VXLAN (tunnel/overlay) bridge
#
# Only the COMPUTE and NETWORK nodes must have an IP address
# on this bridge. When used by infrastructure nodes, the
# IP addresses are assigned to containers which use this
# bridge.
#
auto br-vxlan
iface br-vxlan inet manual
    bridge_stp off
    bridge_waitport 0
    bridge_fd 0
    bridge_ports bond1.30

# compute1 VXLAN (tunnel/overlay) bridge config
#auto br-vxlan
#iface br-vxlan inet static
#    bridge_stp off
#    bridge_waitport 0
#    bridge_fd 0
#    bridge_ports bond1.30
#    address 172.29.246.10
#    netmask 255.255.255.0

# OpenStack Networking VLAN bridge
auto br-vlan
iface br-vlan inet manual
    bridge_stp off
    bridge_waitport 0
    bridge_fd 0
    bridge_ports bond1

# compute1 Network VLAN bridge
#auto br-vlan
#iface br-vlan inet manual
#    bridge_stp off
#    bridge_waitport 0
#    bridge_fd 0
#
# For tenant vlan support, create a veth pair to be used when the neutron
# agent is not containerized on the compute hosts. 'eth12' is the value used on
# the host_bind_override parameter of the br-vlan network section of the
# openstack_user_config example file. The veth peer name must match the value
# specified on the host_bind_override parameter.
#
# When the neutron agent is containerized it will use the container_interface
# value of the br-vlan network, which is also the same 'eth12' value.
#
# Create veth pair, do not abort if already exists
#    pre-up ip link add br-vlan-veth type veth peer name eth12 || true
# Set both ends UP
#    pre-up ip link set br-vlan-veth up
#    pre-up ip link set eth12 up
# Delete veth pair on DOWN
#    post-down ip link del br-vlan-veth || true
#    bridge_ports bond1 br-vlan-veth

# Storage bridge (optional)
#
# Only the COMPUTE and STORAGE nodes must have an IP address
# on this bridge. When used by infrastructure nodes, the
# IP addresses are assigned to containers which use this
# bridge.
#
auto br-storage
iface br-storage inet manual
    bridge_stp off
    bridge_waitport 0
    bridge_fd 0
    bridge_ports bond0.20

# compute1 Storage bridge
#auto br-storage
#iface br-storage inet static
#    bridge_stp off
#    bridge_waitport 0
#    bridge_fd 0
#    bridge_ports bond0.20
#    address 172.29.247.10
#    netmask 255.255.255.0
@@ -168,6 +168,13 @@
 #   Name of network in 'cidr_networks' level to use for IP address pool. Only
 #   valid for 'raw' and 'vxlan' types.
 #
+# Option: address_prefix (optional, string)
+#   Override for the prefix of the key added to each host that contains IP
+#   address information for this network. By default, this will be the name
+#   given in 'ip_from_q' with a fallback of the name of the interface given in
+#   'container_interface'.
+#   (e.g., 'ip_from_q'_address and 'container_interface'_address)
+#
 # Option: is_container_address (required, boolean)
 #   If true, the load balancer uses this IP address to access services
 #   in the container. Only valid for networks with 'ip_from_q' option.
@@ -180,6 +187,10 @@
 #   List of one or more Ansible groups that contain this
 #   network. For more information, see the env.d YAML files.
 #
+# Option: reference_group (optional, string)
+#   An Ansible group that a host must be a member of, in addition to any of the
+#   groups within 'group_binds', for this network to apply.
+#
 # Option: net_name (optional, string)
 #   Name of network for 'flat' or 'vlan' types. Only valid for these
 #   types. Coincides with ML2 plug-in configuration options.
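The key-naming fallback documented above can be summarized as follows (a sketch of the documented rule, not the actual dynamic_inventory code):

```python
def address_key(ip_from_q=None, container_interface=None, address_prefix=None):
    """Return the host var key name for a network's IP address.

    'address_prefix' overrides; otherwise fall back to 'ip_from_q',
    then to 'container_interface'.
    """
    prefix = address_prefix or ip_from_q or container_interface
    return "{}_address".format(prefix)

print(address_key(ip_from_q="pod1_container", address_prefix="container"))
# -> container_address
```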
511
etc/openstack_deploy/openstack_user_config.yml.pod.example
Normal file
@ -0,0 +1,511 @@
|
||||
---
|
||||
cidr_networks:
|
||||
pod1_container: 172.29.236.0/24
|
||||
pod2_container: 172.29.237.0/24
|
||||
pod3_container: 172.29.238.0/24
|
||||
pod4_container: 172.29.239.0/24
|
||||
pod1_tunnel: 172.29.240.0/24
|
||||
pod2_tunnel: 172.29.241.0/24
|
||||
pod3_tunnel: 172.29.242.0/24
|
||||
pod4_tunnel: 172.29.243.0/24
|
||||
pod1_storage: 172.29.244.0/24
|
||||
pod2_storage: 172.29.245.0/24
|
||||
pod3_storage: 172.29.246.0/24
|
||||
pod4_storage: 172.29.247.0/24
|
||||
|
||||
used_ips:
|
||||
- "172.29.236.1,172.29.236.50"
|
||||
- "172.29.237.1,172.29.237.50"
|
||||
- "172.29.238.1,172.29.238.50"
|
||||
- "172.29.239.1,172.29.239.50"
|
||||
- "172.29.240.1,172.29.240.50"
|
||||
- "172.29.241.1,172.29.241.50"
|
||||
- "172.29.242.1,172.29.242.50"
|
||||
- "172.29.243.1,172.29.243.50"
|
||||
- "172.29.244.1,172.29.244.50"
|
||||
- "172.29.245.1,172.29.245.50"
|
||||
- "172.29.246.1,172.29.246.50"
|
||||
- "172.29.247.1,172.29.247.50"
|
||||
|
||||
global_overrides:
|
||||
internal_lb_vip_address: internal-openstack.example.com
|
||||
#
|
||||
# The below domain name must resolve to an IP address
|
||||
# in the CIDR specified in haproxy_keepalived_external_vip_cidr.
|
||||
# If using different protocols (https/http) for the public/internal
|
||||
# endpoints the two addresses must be different.
|
||||
#
|
||||
external_lb_vip_address: openstack.example.com
|
||||
tunnel_bridge: "br-vxlan"
|
||||
management_bridge: "br-mgmt"
|
||||
provider_networks:
|
||||
- network:
|
||||
container_bridge: "br-mgmt"
|
||||
container_type: "veth"
|
||||
container_interface: "eth1"
|
||||
ip_from_q: "pod1_container"
|
||||
address_prefix: "container"
|
||||
type: "raw"
|
||||
group_binds:
|
||||
- all_containers
|
||||
- hosts
|
||||
reference_group: "pod1_hosts"
|
||||
is_container_address: true
|
||||
is_ssh_address: true
|
||||
# Containers in pod1 need routes to the container networks of other pods
|
||||
static_routes:
|
||||
# Route to container networks
|
||||
- cidr: 172.29.236.0/22
|
||||
gateway: 172.29.236.1
|
||||
- network:
|
||||
container_bridge: "br-mgmt"
|
||||
container_type: "veth"
|
||||
container_interface: "eth1"
|
||||
ip_from_q: "pod2_container"
|
||||
address_prefix: "container"
|
||||
type: "raw"
|
||||
group_binds:
|
||||
- all_containers
|
||||
- hosts
|
||||
reference_group: "pod2_hosts"
|
||||
is_container_address: true
|
||||
is_ssh_address: true
|
||||
# Containers in pod2 need routes to the container networks of other pods
|
||||
static_routes:
|
||||
# Route to container networks
|
||||
- cidr: 172.29.236.0/22
|
||||
gateway: 172.29.237.1
|
||||
- network:
|
||||
container_bridge: "br-mgmt"
|
||||
container_type: "veth"
|
||||
container_interface: "eth1"
|
||||
ip_from_q: "pod3_container"
|
||||
address_prefix: "container"
|
||||
type: "raw"
|
||||
group_binds:
|
||||
- all_containers
|
||||
- hosts
|
||||
reference_group: "pod3_hosts"
|
||||
is_container_address: true
|
||||
is_ssh_address: true
|
||||
# Containers in pod3 need routes to the container networks of other pods
|
||||
static_routes:
|
||||
# Route to container networks
|
||||
- cidr: 172.29.236.0/22
|
||||
gateway: 172.29.238.1
|
||||
- network:
|
||||
container_bridge: "br-mgmt"
|
||||
container_type: "veth"
|
||||
container_interface: "eth1"
|
||||
ip_from_q: "pod4_container"
|
||||
address_prefix: "container"
|
||||
type: "raw"
|
||||
group_binds:
|
||||
- all_containers
|
||||
- hosts
|
||||
reference_group: "pod4_hosts"
|
||||
is_container_address: true
|
||||
is_ssh_address: true
|
||||
# Containers in pod4 need routes to the container networks of other pods
|
||||
static_routes:
|
||||
# Route to container networks
|
||||
- cidr: 172.29.236.0/22
|
||||
gateway: 172.29.239.1
|
||||
- network:
|
||||
container_bridge: "br-vxlan"
|
||||
container_type: "veth"
|
||||
container_interface: "eth10"
|
||||
ip_from_q: "pod1_tunnel"
|
||||
address_prefix: "tunnel"
|
||||
type: "vxlan"
|
||||
range: "1:1000"
|
||||
net_name: "vxlan"
|
||||
group_binds:
|
||||
- neutron_linuxbridge_agent
|
||||
reference_group: "pod1_hosts"
|
||||
# Containers in pod1 need routes to the tunnel networks of other pods
|
||||
static_routes:
|
||||
# Route to tunnel networks
|
||||
- cidr: 172.29.240.0/22
|
||||
gateway: 172.29.240.1
|
||||
- network:
|
||||
container_bridge: "br-vxlan"
|
||||
container_type: "veth"
|
||||
container_interface: "eth10"
|
||||
ip_from_q: "pod2_tunnel"
|
||||
address_prefix: "tunnel"
|
||||
type: "vxlan"
|
||||
range: "1:1000"
|
||||
net_name: "vxlan"
|
||||
group_binds:
|
||||
- neutron_linuxbridge_agent
|
||||
reference_group: "pod2_hosts"
|
||||
# Containers in pod2 need routes to the tunnel networks of other pods
|
||||
static_routes:
|
||||
# Route to tunnel networks
|
||||
- cidr: 172.29.240.0/22
|
||||
gateway: 172.29.241.1
|
||||
- network:
|
||||
container_bridge: "br-vxlan"
|
||||
container_type: "veth"
|
||||
container_interface: "eth10"
|
||||
ip_from_q: "pod3_tunnel"
|
||||
address_prefix: "tunnel"
|
||||
type: "vxlan"
|
||||
range: "1:1000"
|
||||
net_name: "vxlan"
|
||||
group_binds:
|
||||
- neutron_linuxbridge_agent
|
||||
reference_group: "pod3_hosts"
|
||||
# Containers in pod3 need routes to the tunnel networks of other pods
|
||||
static_routes:
|
||||
# Route to tunnel networks
|
||||
- cidr: 172.29.240.0/22
|
||||
gateway: 172.29.242.1
|
||||
- network:
|
||||
container_bridge: "br-vxlan"
|
||||
container_type: "veth"
|
||||
container_interface: "eth10"
|
||||
ip_from_q: "pod4_tunnel"
|
||||
address_prefix: "tunnel"
|
||||
type: "vxlan"
|
||||
range: "1:1000"
|
||||
net_name: "vxlan"
|
||||
group_binds:
|
||||
- neutron_linuxbridge_agent
|
||||
reference_group: "pod4_hosts"
|
||||
# Containers in pod4 need routes to the tunnel networks of other pods
|
||||
static_routes:
|
||||
# Route to tunnel networks
|
||||
- cidr: 172.29.240.0/22
|
||||
gateway: 172.29.243.1
|
||||
- network:
|
||||
container_bridge: "br-vlan"
|
||||
container_type: "veth"
|
||||
container_interface: "eth12"
|
||||
host_bind_override: "eth12"
|
||||
type: "flat"
|
||||
net_name: "flat"
|
||||
group_binds:
|
||||
- neutron_linuxbridge_agent
|
||||
- network:
|
||||
container_bridge: "br-vlan"
|
||||
container_type: "veth"
|
||||
container_interface: "eth11"
|
||||
type: "vlan"
|
||||
range: "1:1"
|
||||
net_name: "vlan"
|
||||
group_binds:
|
||||
- neutron_linuxbridge_agent
|
||||
- network:
|
||||
container_bridge: "br-storage"
|
||||
container_type: "veth"
|
||||
container_interface: "eth2"
|
||||
ip_from_q: "pod1_storage"
|
||||
address_prefix: "storage"
|
||||
type: "raw"
|
||||
group_binds:
|
||||
- glance_api
|
||||
- cinder_api
|
||||
- cinder_volume
|
||||
- nova_compute
|
||||
reference_group: "pod1_hosts"
|
||||
# Containers in pod1 need routes to the storage networks of other pods
|
||||
static_routes:
|
||||
# Route to storage networks
|
||||
- cidr: 172.29.244.0/22
|
||||
gateway: 172.29.244.1
|
||||
- network:
|
||||
container_bridge: "br-storage"
|
||||
container_type: "veth"
|
||||
container_interface: "eth2"
|
||||
ip_from_q: "pod2_storage"
|
||||
address_prefix: "storage"
|
||||
type: "raw"
|
||||
group_binds:
|
||||
- glance_api
|
||||
- cinder_api
|
||||
- cinder_volume
|
||||
- nova_compute
|
||||
reference_group: "pod2_hosts"
|
||||
# Containers in pod2 need routes to the storage networks of other pods
|
||||
static_routes:
|
||||
# Route to storage networks
|
||||
- cidr: 172.29.244.0/22
|
||||
gateway: 172.29.245.1
|
||||
- network:
|
||||
container_bridge: "br-storage"
|
||||
container_type: "veth"
|
||||
container_interface: "eth2"
|
||||
ip_from_q: "pod3_storage"
|
||||
address_prefix: "storage"
|
||||
type: "raw"
|
||||
group_binds:
|
||||
- glance_api
|
||||
- cinder_api
|
||||
- cinder_volume
|
||||
- nova_compute
|
||||
reference_group: "pod3_hosts"
|
||||
# Containers in pod3 need routes to the storage networks of other pods
|
||||
static_routes:
|
||||
# Route to storage networks
|
||||
- cidr: 172.29.244.0/22
|
||||
gateway: 172.29.246.1
|
||||
- network:
|
||||
container_bridge: "br-storage"
|
||||
container_type: "veth"
|
||||
container_interface: "eth2"
|
||||
ip_from_q: "pod4_storage"
|
||||
address_prefix: "storage"
|
||||
type: "raw"
|
||||
group_binds:
|
||||
- glance_api
|
||||
- cinder_api
|
||||
- cinder_volume
|
||||
- nova_compute
|
||||
reference_group: "pod4_hosts"
|
||||
# Containers in pod4 need routes to the storage networks of other pods
|
||||
static_routes:
|
||||
# Route to storage networks
|
||||
- cidr: 172.29.244.0/22
|
||||
gateway: 172.29.247.1
|
||||
|
||||
###
|
||||
### Infrastructure
|
||||
###
|
||||
|
||||
pod1_hosts:
|
||||
infra1:
|
||||
ip: 172.29.236.10
|
||||
log1:
|
||||
ip: 172.29.236.11
|
||||
|
||||
pod2_hosts:
|
||||
infra2:
|
||||
ip: 172.29.239.10
|
||||
|
||||
pod3_hosts:
|
||||
infra3:
|
||||
ip: 172.29.242.10
|
||||
|
||||
pod4_hosts:
|
||||
compute1:
|
||||
ip: 172.29.245.10
|
||||
compute2:
|
||||
ip: 172.29.245.11
|
||||
|
||||
# galera, memcache, rabbitmq, utility
|
||||
shared-infra_hosts:
|
||||
infra1:
|
||||
ip: 172.29.236.10
|
||||
infra2:
|
||||
ip: 172.29.239.10
|
||||
infra3:
|
||||
ip: 172.29.242.10
|
||||
|
||||
# repository (apt cache, python packages, etc)
|
||||
repo-infra_hosts:
|
||||
infra1:
|
||||
ip: 172.29.236.10
|
||||
infra2:
|
||||
ip: 172.29.239.10
|
||||
infra3:
|
||||
ip: 172.29.242.10
|
||||
|
||||
# load balancer
|
||||
# Ideally the load balancer should not use the Infrastructure hosts.
|
||||
# Dedicated hardware is best for improved performance and security.
|
||||
haproxy_hosts:
|
||||
infra1:
|
||||
ip: 172.29.236.10
|
||||
infra2:
|
||||
ip: 172.29.239.10
|
||||
infra3:
|
||||
ip: 172.29.242.10
|
||||
|
||||
# rsyslog server
|
||||
log_hosts:
|
||||
log1:
|
||||
ip: 172.29.236.11
|
||||
|
||||
###
|
||||
### OpenStack
|
||||
###
|
||||
|
||||
# keystone
|
||||
identity_hosts:
|
||||
infra1:
|
||||
ip: 172.29.236.10
|
||||
infra2:
|
||||
ip: 172.29.239.10
|
||||
infra3:
|
||||
ip: 172.29.242.10
|
||||
|
||||
# cinder api services
|
||||
storage-infra_hosts:
|
||||
infra1:
|
||||
ip: 172.29.236.10
|
||||
infra2:
|
||||
ip: 172.29.239.10
|
||||
infra3:
|
||||
ip: 172.29.242.10
|
||||
|
||||
# glance
|
||||
# The settings here are repeated for each infra host.
|
||||
# They could instead be applied as global settings in
|
||||
# user_variables, but are left here to illustrate that
|
||||
# each container could have different storage targets.
|
||||
image_hosts:
|
||||
infra1:
|
||||
ip: 172.29.236.11
|
||||
container_vars:
|
||||
limit_container_types: glance
|
||||
glance_nfs_client:
|
||||
- server: "172.29.244.15"
|
||||
remote_path: "/images"
|
||||
local_path: "/var/lib/glance/images"
|
||||
type: "nfs"
|
||||
options: "_netdev,auto"
|
||||
infra2:
|
||||
ip: 172.29.236.12
|
||||
container_vars:
|
||||
limit_container_types: glance
|
||||
glance_nfs_client:
|
||||
- server: "172.29.244.15"
|
||||
remote_path: "/images"
|
||||
local_path: "/var/lib/glance/images"
|
||||
type: "nfs"
|
||||
options: "_netdev,auto"
|
||||
infra3:
|
||||
ip: 172.29.236.13
|
||||
container_vars:
|
||||
limit_container_types: glance
|
||||
glance_nfs_client:
|
||||
- server: "172.29.244.15"
|
||||
remote_path: "/images"
|
||||
            local_path: "/var/lib/glance/images"
            type: "nfs"
            options: "_netdev,auto"

# nova api, conductor, etc services
compute-infra_hosts:
  infra1:
    ip: 172.29.236.10
  infra2:
    ip: 172.29.239.10
  infra3:
    ip: 172.29.242.10

# heat
orchestration_hosts:
  infra1:
    ip: 172.29.236.10
  infra2:
    ip: 172.29.239.10
  infra3:
    ip: 172.29.242.10

# horizon
dashboard_hosts:
  infra1:
    ip: 172.29.236.10
  infra2:
    ip: 172.29.239.10
  infra3:
    ip: 172.29.242.10

# neutron server, agents (L3, etc)
network_hosts:
  infra1:
    ip: 172.29.236.10
  infra2:
    ip: 172.29.239.10
  infra3:
    ip: 172.29.242.10

# ceilometer (telemetry data collection)
metering-infra_hosts:
  infra1:
    ip: 172.29.236.10
  infra2:
    ip: 172.29.239.10
  infra3:
    ip: 172.29.242.10

# aodh (telemetry alarm service)
metering-alarm_hosts:
  infra1:
    ip: 172.29.236.10
  infra2:
    ip: 172.29.239.10
  infra3:
    ip: 172.29.242.10

# gnocchi (telemetry metrics storage)
metrics_hosts:
  infra1:
    ip: 172.29.236.10
  infra2:
    ip: 172.29.239.10
  infra3:
    ip: 172.29.242.10

# nova hypervisors
compute_hosts:
  compute1:
    ip: 172.29.245.10
  compute2:
    ip: 172.29.245.11

# ceilometer compute agent (telemetry data collection)
metering-compute_hosts:
  compute1:
    ip: 172.29.245.10
  compute2:
    ip: 172.29.245.11

# cinder volume hosts (NFS-backed)
# The settings here are repeated for each infra host.
# They could instead be applied as global settings in
# user_variables, but are left here to illustrate that
# each container could have different storage targets.
storage_hosts:
  infra1:
    ip: 172.29.236.11
    container_vars:
      cinder_backends:
        limit_container_types: cinder_volume
        nfs_volume:
          volume_backend_name: NFS_VOLUME1
          volume_driver: cinder.volume.drivers.nfs.NfsDriver
          nfs_mount_options: "rsize=65535,wsize=65535,timeo=1200,actimeo=120"
          nfs_shares_config: /etc/cinder/nfs_shares
          shares:
            - ip: "172.29.244.15"
              share: "/vol/cinder"
  infra2:
    ip: 172.29.236.12
    container_vars:
      cinder_backends:
        limit_container_types: cinder_volume
        nfs_volume:
          volume_backend_name: NFS_VOLUME1
          volume_driver: cinder.volume.drivers.nfs.NfsDriver
          nfs_mount_options: "rsize=65535,wsize=65535,timeo=1200,actimeo=120"
          nfs_shares_config: /etc/cinder/nfs_shares
          shares:
            - ip: "172.29.244.15"
              share: "/vol/cinder"
  infra3:
    ip: 172.29.236.13
    container_vars:
      cinder_backends:
        limit_container_types: cinder_volume
        nfs_volume:
          volume_backend_name: NFS_VOLUME1
          volume_driver: cinder.volume.drivers.nfs.NfsDriver
          nfs_mount_options: "rsize=65535,wsize=65535,timeo=1200,actimeo=120"
          nfs_shares_config: /etc/cinder/nfs_shares
          shares:
            - ip: "172.29.244.15"
              share: "/vol/cinder"
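The host groups above do not yet exercise the two options this change introduces. A provider network entry that scopes a per-pod management network with `reference_group` and `address_prefix` might look like the following sketch (the `pod1_*` names and bridge name are illustrative, not taken from the example above):

```yaml
global_overrides:
  provider_networks:
    - network:
        container_bridge: "br-mgmt"
        container_type: "veth"
        container_interface: "eth1"
        # Allocate IPs from this cidr_networks entry...
        ip_from_q: "pod1_container"
        # ...but store them under 'management_address' rather than
        # the default 'pod1_container_address'.
        address_prefix: "management"
        # Only hosts that are also members of pod1_hosts get this network.
        reference_group: "pod1_hosts"
        type: "raw"
        group_binds:
          - all_containers
          - hosts
        is_container_address: true
        is_ssh_address: true
```

With one such entry per pod, every host still ends up with a uniformly named `management_address`, which is what the playbooks expect.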
@@ -532,7 +532,7 @@ def network_entry(is_metal, interface,
 def _add_additional_networks(key, inventory, ip_q, q_name, netmask, interface,
                              bridge, net_type, net_mtu, user_config,
                              is_ssh_address, is_container_address,
-                             static_routes):
+                             static_routes, reference_group, address_prefix):
     """Process additional ip adds and append then to hosts as needed.
 
     If the host is found to be "is_metal" it will be marked as "on_metal"
@@ -549,6 +549,8 @@ def _add_additional_networks(key, inventory, ip_q, q_name, netmask, interface,
     :param is_ssh_address: ``bol`` set this address as ansible_host.
     :param is_container_address: ``bol`` set this address to container_address.
     :param static_routes: ``list`` List containing static route dicts.
+    :param reference_group: ``str`` group to filter membership of host against.
+    :param address_prefix: ``str`` override prefix of key for network address.
     """
 
     base_hosts = inventory['_meta']['hostvars']
@@ -569,7 +571,9 @@ def _add_additional_networks(key, inventory, ip_q, q_name, netmask, interface,
                 user_config,
                 is_ssh_address,
                 is_container_address,
-                static_routes
+                static_routes,
+                reference_group,
+                address_prefix
             )
 
     # Make sure the lookup object has a value.
@@ -580,8 +584,10 @@ def _add_additional_networks(key, inventory, ip_q, q_name, netmask, interface,
     else:
         return
 
+    if address_prefix:
+        old_address = '%s_address' % address_prefix
     # TODO(cloudnull) after a few releases this should be removed.
-    if q_name:
+    elif q_name:
         old_address = '{}_address'.format(q_name)
     else:
         old_address = '{}_address'.format(interface)
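The key-naming precedence in the hunk above can be sketched as a standalone helper (the helper name is illustrative, not part of the inventory code): an explicit `address_prefix` wins, then the `ip_from_q` queue name, then the interface name.

```python
def network_address_key(address_prefix, q_name, interface):
    """Return the host-var key under which an allocated IP is stored,
    mirroring the precedence used by _add_additional_networks."""
    if address_prefix:
        # New in this change: an explicit override of the key prefix.
        return '%s_address' % address_prefix
    elif q_name:
        # Default: named after the cidr_networks queue (ip_from_q).
        return '{}_address'.format(q_name)
    # Fallback when no queue is configured: named after the interface.
    return '{}_address'.format(interface)

print(network_address_key('management', 'pod1_container', 'eth1'))
print(network_address_key(None, 'storage', 'eth2'))
print(network_address_key(None, None, 'eth3'))
```

This is why a `pod1_container` network with `address_prefix: management` still produces the `management_address` key the playbooks look for.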
@@ -589,6 +595,12 @@ def _add_additional_networks(key, inventory, ip_q, q_name, netmask, interface,
     for container_host in hosts:
         container = base_hosts[container_host]
 
+        physical_host = container.get('physical_host')
+        if (reference_group and
+                physical_host not in
+                inventory.get(reference_group).get('hosts')):
+            continue
+
         # TODO(cloudnull) after a few releases this should be removed.
         # This removes the old container network value that now serves purpose.
         container.pop('container_network', None)
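The `reference_group` filter added above skips a container whenever its physical host is not a member of the named group. A minimal sketch of that check, using a made-up inventory fragment (and hedged with `.get()` defaults that the real code does not use):

```python
# Illustrative inventory: only aio2 belongs to the reference group.
inventory = {
    'pod1_hosts': {'hosts': ['aio2']},
    '_meta': {'hostvars': {
        'aio1_glance_container': {'physical_host': 'aio1'},
        'aio2_glance_container': {'physical_host': 'aio2'},
    }},
}

def host_in_reference_group(inventory, container, reference_group):
    """True when the container's physical host is in reference_group,
    or when no reference_group is configured (no filtering)."""
    if not reference_group:
        return True
    physical_host = container.get('physical_host')
    return physical_host in inventory.get(reference_group, {}).get('hosts', [])

hostvars = inventory['_meta']['hostvars']
eligible = [name for name, c in hostvars.items()
            if host_in_reference_group(inventory, c, 'pod1_hosts')]
print(eligible)  # only the container whose host is in pod1_hosts
```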
@@ -727,7 +739,9 @@ def container_skel_load(container_skel, inventory, config):
                 user_config=config,
                 is_ssh_address=p_net.get('is_ssh_address'),
                 is_container_address=p_net.get('is_container_address'),
-                static_routes=p_net.get('static_routes')
+                static_routes=p_net.get('static_routes'),
+                reference_group=p_net.get('reference_group'),
+                address_prefix=p_net.get('address_prefix')
             )
 
     populate_lxc_hosts(inventory)
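Because the new options are wired through with `dict.get`, existing configurations that never mention them keep their old behaviour: the missing keys simply arrive as `None`. A small illustration with a made-up provider network dict:

```python
# Illustrative p_net dict from a config that predates this change.
p_net = {'ip_from_q': 'pod1_container', 'is_container_address': True}

kwargs = dict(
    is_ssh_address=p_net.get('is_ssh_address'),
    is_container_address=p_net.get('is_container_address'),
    static_routes=p_net.get('static_routes'),
    # New keys: absent from old configs, so .get() yields None and the
    # reference_group/address_prefix branches are skipped downstream.
    reference_group=p_net.get('reference_group'),
    address_prefix=p_net.get('address_prefix'),
)
print(kwargs['reference_group'], kwargs['address_prefix'])
```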
@@ -1065,11 +1079,27 @@ def main(config=None, check=False, debug=False, environment=None, **kwargs):
     if not cidr_networks:
         raise SystemExit('No container CIDR specified in user config')
 
+    user_cidr = None
     if 'container' in cidr_networks:
         user_cidr = cidr_networks['container']
     elif 'management' in cidr_networks:
         user_cidr = cidr_networks['management']
     else:
+        overrides = user_defined_config.get('global_overrides')
+        pns = overrides.get('provider_networks', list())
+        for pn in pns:
+            p_net = pn.get('network')
+            if not p_net:
+                continue
+            q_name = p_net.get('ip_from_q')
+            if q_name and q_name in cidr_networks:
+                if (p_net.get('address_prefix') in ('container',
+                                                    'management')):
+                    if user_cidr is None:
+                        user_cidr = []
+                    user_cidr.append(cidr_networks[q_name])
+
+    if user_cidr is None:
         raise SystemExit('No container or management network '
                          'specified in user config.')
 
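The fallback above can be exercised in isolation. When no `container` or `management` entry exists in `cidr_networks`, every CIDR whose provider network carries a `container`/`management` `address_prefix` is collected instead (the function name and sample CIDRs below are illustrative):

```python
def pick_user_cidr(cidr_networks, provider_networks):
    """Sketch of main()'s user_cidr selection after this change."""
    if 'container' in cidr_networks:
        return cidr_networks['container']
    if 'management' in cidr_networks:
        return cidr_networks['management']
    # L3 pod layout: gather every per-pod CIDR tagged as the
    # container/management network via address_prefix.
    user_cidr = []
    for pn in provider_networks:
        p_net = pn.get('network')
        if not p_net:
            continue
        q_name = p_net.get('ip_from_q')
        if (q_name in cidr_networks and
                p_net.get('address_prefix') in ('container', 'management')):
            user_cidr.append(cidr_networks[q_name])
    return user_cidr or None  # None triggers the SystemExit in main()

cidrs = {'pod1_container': '172.29.236.0/22',
         'pod2_container': '172.29.240.0/22'}
pns = [{'network': {'ip_from_q': 'pod1_container',
                    'address_prefix': 'management'}},
       {'network': {'ip_from_q': 'pod2_container',
                    'address_prefix': 'management'}}]
print(pick_user_cidr(cidrs, pns))
```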
@@ -557,10 +557,21 @@ class TestConfigCheckBase(unittest.TestCase):
         self.user_defined_config[key] = value
         self.write_config()
 
     def add_provider_network(self, net_name, cidr):
         self.user_defined_config['cidr_networks'][net_name] = cidr
         self.write_config()
 
     def delete_provider_network(self, net_name):
         del self.user_defined_config['cidr_networks'][net_name]
         self.write_config()
 
+    def add_provider_network_key(self, net_name, key, value):
+        pns = self.user_defined_config['global_overrides']['provider_networks']
+        for net in pns:
+            if 'ip_from_q' in net['network']:
+                if net['network']['ip_from_q'] == net_name:
+                    net['network'][key] = value
+
+    def delete_provider_network_key(self, net_name, key):
+        pns = self.user_defined_config['global_overrides']['provider_networks']
+        for net in pns:
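The new `add_provider_network_key` helper only touches provider networks whose `ip_from_q` matches the requested name; entries without that key are left alone. A standalone version of the same logic against a made-up config fragment:

```python
# Minimal made-up user config: one network with ip_from_q, one without.
user_defined_config = {
    'global_overrides': {
        'provider_networks': [
            {'network': {'ip_from_q': 'pod1_container',
                         'container_bridge': 'br-mgmt'}},
            {'network': {'container_bridge': 'br-vxlan'}},
        ]
    }
}

def add_provider_network_key(config, net_name, key, value):
    """Set `key` on every provider network whose ip_from_q == net_name."""
    pns = config['global_overrides']['provider_networks']
    for net in pns:
        if net['network'].get('ip_from_q') == net_name:
            net['network'][key] = value

add_provider_network_key(user_defined_config, 'pod1_container',
                         'reference_group', 'pod1_hosts')
nets = user_defined_config['global_overrides']['provider_networks']
print(nets[0]['network']['reference_group'])
```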
@@ -1401,6 +1412,33 @@ class TestInventoryGroupConstraints(unittest.TestCase):
 
         self.assertTrue(result)
 
+class TestL3ProviderNetworkConfig(TestConfigCheckBase):
+    def setUp(self):
+        super(TestL3ProviderNetworkConfig, self).setUp()
+        self.delete_provider_network('container')
+        self.add_provider_network('pod1_container', '172.29.236.0/22')
+        self.add_provider_network_key('container', 'ip_from_q',
+                                      'pod1_container')
+        self.add_provider_network_key('pod1_container', 'address_prefix',
+                                      'management')
+        self.add_provider_network_key('pod1_container', 'reference_group',
+                                      'pod1_hosts')
+        self.add_config_key('pod1_hosts', {})
+        self.add_host('pod1_hosts', 'aio2', '172.29.236.101')
+        self.add_host('compute_hosts', 'aio2', '172.29.236.101')
+        self.write_config()
+        self.inventory = get_inventory()
+
+    def test_address_prefix_name_applied(self):
+        aio2_host_vars = self.inventory['_meta']['hostvars']['aio2']
+        aio2_container_networks = aio2_host_vars['container_networks']
+        self.assertIsInstance(aio2_container_networks['management_address'],
+                              dict)
+
+    def test_host_outside_reference_group_excluded(self):
+        aio1_host_vars = self.inventory['_meta']['hostvars']['aio1']
+        aio1_container_networks = aio1_host_vars['container_networks']
+        self.assertNotIn('management_address', aio1_container_networks)
+
 if __name__ == '__main__':
     unittest.main(catchbreak=True)