
# -- Background

The Quantum Linux Bridge plugin allows you to manage connectivity
between VMs on hosts that are capable of running a Linux Bridge.

The Quantum Linux Bridge plugin consists of three components:

1) The plugin itself: The plugin uses a database backend (mysql for
   now) to store configuration and mappings that are used by the
   agent.  The mysql server runs on a central server (often the same
   host as nova itself).

2) The quantum service host, which runs the quantum service.  This can
   be the same server that runs nova.

3) An agent which runs on the host and communicates with the host operating
   system. The agent gathers the configuration and mappings from
   the mysql database running on the quantum host.

The sections below describe how to configure and run the quantum
service with the Linux Bridge plugin.

# -- Python library dependencies

   Make sure you have the following package(s) installed on the quantum
   server host, as well as on any hosts that run the agent:
   python-configobj
   bridge-utils
   python-mysqldb
   sqlite3
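
   For example, on a Debian/Ubuntu-based host these can typically be
   installed with (package names may differ on other distributions):

   $ sudo apt-get install python-configobj bridge-utils python-mysqldb sqlite3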

# -- Nova configuration (controller node)

1) Ensure that the quantum network manager is configured in the
   nova.conf on the node that will be running nova-network.

network_manager=nova.network.quantum.manager.QuantumManager

# -- Nova configuration (compute node(s))

1) Configure the vif driver, and libvirt/vif type

connection_type=libvirt
libvirt_type=qemu
libvirt_vif_type=ethernet
libvirt_vif_driver=nova.virt.libvirt.vif.QuantumLinuxBridgeVIFDriver
linuxnet_interface_driver=nova.network.linux_net.QuantumLinuxBridgeInterfaceDriver

2) If you want a DHCP server to be run for the VMs to acquire IPs,
   add the following flag to your nova.conf file:

quantum_use_dhcp=true

(Note: For more details on how to work with Quantum using Nova, i.e. how to create networks and such,
 please refer to the top level Quantum README which points to the relevant documentation.)

# -- Quantum configuration

Make the Linux Bridge plugin the current quantum plugin

- edit quantum.conf and change the core_plugin

core_plugin = quantum.plugins.linuxbridge.lb_quantum_plugin.LinuxBridgePluginV2

# -- Database config.

(Note: The plugin ships with a default SQLite in-memory database configuration,
 which can be used to run tests without performing the suggested DB config below.)

The Linux Bridge quantum plugin requires access to a mysql database in order
to store configuration and mappings that will be used by the agent.  Here is
how to set up the database on the host that will run the quantum service.

MySQL should be installed on the host, and all plugins and clients
must be configured with access to the database.
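
For example, on a Debian/Ubuntu-based host, the MySQL server can
typically be installed with:

$ sudo apt-get install mysql-server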

To prep mysql, run:

$ mysql -u root -p -e "create database quantum_linux_bridge"

# log in to mysql service
$ mysql -u root -p
# The Linux Bridge Quantum agent running on each compute node must be able to
# make a mysql connection back to the main database server.
mysql> GRANT USAGE ON *.* to root@'yourremotehost' IDENTIFIED BY 'newpassword';
# force update of authorization changes
mysql> FLUSH PRIVILEGES;

(Note: If the remote connection to MySQL fails, you might need to add the IP address,
 and/or fully-qualified hostname, and/or unqualified hostname in the above GRANT SQL
 command. Also, you might need to specify "ALL" instead of "USAGE".)

# -- Plugin configuration

- Edit the configuration file:
  etc/quantum/plugins/linuxbridge/linuxbridge_conf.ini
  Make sure it matches your mysql configuration.  This file must be updated
  with the addresses and credentials to access the database.

  Note: debug and logging information should be updated in etc/quantum.conf

  Note: When running the tests, set the connection type to sqlite, and when
  actually running the server set it to mysql. At any given time, only one
  of these should be active in the conf file (you can comment out the other).
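
  For example, the database settings in linuxbridge_conf.ini might look
  like the following (check the shipped sample file for the exact
  section and option names; the host, database name, and credentials
  here are placeholders matching the MySQL setup above):

  [DATABASE]
  # Use this line when running the server against MySQL:
  sql_connection = mysql://root:newpassword@127.0.0.1:3306/quantum_linux_bridge
  # Use this line instead when running the tests:
  # sql_connection = sqlite://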

- On the quantum server, network_vlan_ranges must be configured in
  linuxbridge_conf.ini to specify the names of the physical networks
  managed by the linuxbridge plugin, along with the ranges of VLAN IDs
  available on each physical network for allocation to virtual
  networks. An entry of the form
  "<physical_network>:<vlan_min>:<vlan_max>" specifies a VLAN range on
  the named physical network. An entry of the form
  "<physical_network>" specifies a named network without making a
  range of VLANs available for allocation. Networks specified using
  either form are available for administrators to create provider flat
  networks and provider VLANs. Multiple VLAN ranges can be specified
  for the same physical network.

  The following example linuxbridge_conf.ini entry shows three
  physical networks that can be used to create provider networks, with
  ranges of VLANs available for allocation on two of them:

  [VLANS]
  network_vlan_ranges = physnet1:1000:2999,physnet1:3000:3999,physnet2,physnet3:1:4094


# -- Agent configuration

- Edit the configuration file:
  etc/quantum/plugins/linuxbridge/linuxbridge_conf.ini

- Copy quantum/plugins/linuxbridge/agent/linuxbridge_quantum_agent.py
  and etc/quantum/plugins/linuxbridge/linuxbridge_conf.ini
  to the compute node.

- Copy the quantum.conf file to the compute node

  Note: debug and logging information should be updated in etc/quantum.conf
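
  For example, assuming the compute node is reachable over ssh as
  'compute1' (a placeholder hostname), the three files above could be
  copied with:

  $ scp quantum/plugins/linuxbridge/agent/linuxbridge_quantum_agent.py compute1:
  $ scp etc/quantum/plugins/linuxbridge/linuxbridge_conf.ini compute1:
  $ scp etc/quantum.conf compute1: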

- On each compute node, physical_interface_mappings must be
  configured in linuxbridge_conf.ini to map each physical network name
  to the physical interface connecting the node to that physical
  network. Entries are of the form
  "<physical_network>:<physical_interface>". For example, one compute
  node may use the following physical_interface_mappings entries:

  [LINUX_BRIDGE]
  physical_interface_mappings = physnet1:eth1,physnet2:eth2,physnet3:eth3

  while another might use:

  [LINUX_BRIDGE]
  physical_interface_mappings = physnet1:em3,physnet2:em2,physnet3:em1


- Run the following:
  $ python linuxbridge_quantum_agent.py --config-file quantum.conf \
                                        --config-file linuxbridge_conf.ini

  Note that the user running the agent must have sudo privileges
  to run various networking commands. Also, the agent can be
  configured to use quantum-rootwrap, limiting what commands it can
  run via sudo. See http://wiki.openstack.org/Packager/Rootwrap for
  details on rootwrap.
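
  For example, to have the agent invoke commands through
  quantum-rootwrap rather than plain sudo, a root_helper setting along
  the following lines can be used in linuxbridge_conf.ini (the exact
  section and option name, and the install paths shown, should be
  checked against your installation):

  [AGENT]
  root_helper = sudo quantum-rootwrap /etc/quantum/rootwrap.conf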

  As an alternative to copying the agent python file, if quantum is
  installed on the compute node, the agent can be run as
  bin/quantum-linuxbridge-agent.


# -- Running Tests

(Note: The plugin ships with a default SQLite in-memory database configuration,
 which can be used to run tests out of the box. Alternatively, you can perform the
 DB configuration for a persistent database as mentioned in the Database
 Configuration section.)

- To run tests related to the Plugin and the VLAN management (run the
  following from the top level Quantum directory):
  PLUGIN_DIR=quantum/plugins/linuxbridge ./run_tests.sh -N

- The above will not, however, run the tests for the agent (which deals
  with creating the bridge and interfaces). To run the agent tests, run the
  following from the top level Quantum directory:
  sudo PLUGIN_DIR=quantum/plugins/linuxbridge ./run_tests.sh -N tests.unit._test_linuxbridgeAgent

  (Note: To run the agent tests you should have the environment set up as
   indicated in the Agent Configuration section, and also have the necessary
   dependencies installed.)