The VIF driver and scheduler for the UCS plugin have been broken since the
flag configuration mechanism in nova changed. This fixes that, and also
fixes some property names, along with changes to how the quantum client
code is invoked.

Change-Id: I757cc149f08673ce24d35ee0bfffae8e5b1a4afc
Sumit Naiksatam 2012-03-15 16:43:19 -07:00
parent 10f0d6a537
commit b94b97cbfa
3 changed files with 499 additions and 489 deletions

File: quantum/plugins/cisco/README

@@ -1,447 +1,430 @@ (the resulting file follows)
========================================================================================
README: A Quantum Plugin Framework for Supporting L2 Networks Spanning Multiple Switches
========================================================================================
:Author: Sumit Naiksatam, Ram Durairaj, Mark Voelker, Edgar Magana, Shweta Padubidri,
         Rohit Agarwalla, Ying Liu, Debo Dutta
:Contact: netstack@lists.launchpad.net
:Web site: https://launchpad.net/~cisco-openstack
:Copyright: 2011 Cisco Systems, Inc.

.. contents::

Introduction
------------
This plugin implementation provides the following capabilities
to help you take your Layer 2 network for a Quantum leap:

* A reference implementation for a Quantum Plugin Framework
  (For details see: http://wiki.openstack.org/quantum-multi-switch-plugin)
* Supports multiple switches in the network
* Supports multiple models of switches concurrently
* Supports use of multiple L2 technologies
* Supports Cisco UCS blade servers with M81KR Virtual Interface Cards
  (aka "Palo adapters") via 802.1Qbh.
* Supports the Cisco Nexus family of switches.

It does not provide:

* A hologram of Al that only you can see.
* A map to help you find your way through time.
* A cure for amnesia or your swiss-cheesed brain.

Let's leap in!
Pre-requisites
--------------
(The following are necessary only when using the UCS and/or Nexus devices in your system.
If you plan to just leverage the plugin framework, you do not need these.)

* One or more UCS B200 series blade servers with M81KR VIC (aka
  Palo adapters) installed.
* UCSM 2.0 (Capitola) Build 230 or above.
* OpenStack Diablo D3 or later (should have VIF-driver support)
* OS supported:
** RHEL 6.1 or above
** Ubuntu 11.10 or above
** Package: python-configobj-4.6.0-3.el6.noarch (or newer)
** Package: python-routes-1.12.3-2.el6.noarch (or newer)

If you are using a Nexus switch in your topology, you'll need the following
NX-OS version and packages to enable Nexus support:

* NX-OS 5.2.1 (Delhi) Build 69 or above.
* paramiko library - SSHv2 protocol library for python
* ncclient v0.3.1 - Python library for NETCONF clients
** You need a version of ncclient modified by Cisco Systems.
   To get it, from your shell prompt do:

   git clone git@github.com:CiscoSystems/ncclient.git
   sudo python ./setup.py install

** For more information on ncclient, see:
   http://schmizz.net/ncclient/
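A quick way to sanity-check the Python pre-requisites on either OS is a
pkg_resources probe. This is only a convenience sketch (not part of the
plugin), and the distribution names are our assumption of how these packages
register themselves with setuptools:

    import pkg_resources

    # Report the installed version of each pre-requisite Python library.
    for pkg in ('configobj', 'Routes', 'paramiko', 'ncclient'):
        try:
            print pkg, pkg_resources.get_distribution(pkg).version
        except pkg_resources.DistributionNotFound:
            print pkg, 'NOT INSTALLED'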
Module Structure:
-----------------
* quantum/plugins/cisco/ - Contains the L2-Network Plugin Framework
    /client       - CLI module for core and extensions API
    /common       - Modules common to the entire plugin
    /conf         - All configuration files
    /db           - Persistence framework
    /models       - Class(es) which tie the logical abstractions
                    to the physical topology
    /nova         - Scheduler and VIF-driver to be used by Nova
    /nexus        - Nexus-specific modules
    /segmentation - Implementation of segmentation manager,
                    e.g. VLAN Manager
    /services     - Set of orchestration libraries to insert
                    In-path Networking Services
    /tests        - Tests specific to this plugin
    /ucs          - UCS-specific modules
Plugin Installation Instructions
----------------------------------
1. Make a backup copy of quantum/etc/plugins.ini.

2. Edit quantum/etc/plugins.ini and edit the "provider" entry to point
   to the L2Network-plugin:

   provider = quantum.plugins.cisco.l2network_plugin.L2Network

3. Configure your OpenStack installation to use the 802.1Qbh VIF driver and
   Quantum-aware scheduler by editing the /etc/nova/nova.conf file with the
   following entries:

   --scheduler_driver=quantum.plugins.cisco.nova.quantum_port_aware_scheduler.QuantumPortAwareScheduler
   --quantum_connection_host=127.0.0.1
   --quantum_connection_port=9696
   --libvirt_vif_driver=quantum.plugins.cisco.nova.vifdirect.Libvirt802dot1QbhDriver
   --libvirt_vif_type=802.1Qbh
   Note: To be able to bring up a VM on a UCS blade, you should first create a
   port for that VM using the Quantum create port API. VM creation will
   fail if an unused port is not available. If you have configured your
   Nova project with more than one network, Nova will attempt to instantiate
   the VM with one network interface (VIF) per configured network. To provide
   plugin points for each of these VIFs, you will need to create multiple
   Quantum ports, one for each of the networks, prior to starting the VM.
   However, in this case you will need to use the Cisco multiport extension
   API instead of the Quantum create port API. More details on using the
   multiport extension follow in the section on multi NIC support.

4. To support the above configuration, you will need some Quantum modules. It's easiest
   to copy the entire quantum directory from your quantum installation into:

   /usr/lib/python2.7/site-packages/

   This needs to be done for each nova compute node.
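   To confirm that the copied modules are actually visible to nova on a
   compute node, a simple import check (a convenience sketch, not part of
   the plugin) is enough:

       # Run with the same python interpreter nova uses.
       import quantum.client
       print quantum.client.__file__    # should point into site-packages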
5. If you want to turn on support for Cisco Nexus switches:

5a. Uncomment the nexus_plugin property in
    etc/quantum/plugins/cisco/cisco_plugins.ini to read:

    nexus_plugin=quantum.plugins.cisco.nexus.cisco_nexus_plugin.NexusPlugin

5b. Enter the relevant configuration in the
    etc/quantum/plugins/cisco/nexus.ini file. Example:

    [SWITCH]
    # Change the following to reflect the IP address of the Nexus switch.
    # This will be the address at which Quantum sends and receives configuration
    # information via SSHv2.
    nexus_ip_address=10.0.0.1
    # Port numbers on the Nexus switch to which each of the UCSM 6120s is connected.
    # Use shortened interface syntax, e.g. "1/10" not "Ethernet1/10".
    nexus_first_port=1/10
    nexus_second_port=1/11
    # Port number where SSH will be running on the Nexus switch. Typically this is 22
    # unless you've configured your switch otherwise.
    nexus_ssh_port=22

    [DRIVER]
    name=quantum.plugins.cisco.nexus.cisco_nexus_network_driver.CiscoNEXUSDriver

5c. Make sure that the SSH host key of the Nexus switch is known to the
    host on which you are running the Quantum service. You can do
    this simply by logging in to your Quantum host as the user that
    Quantum runs as and SSHing to the switch at least once. If the
    host key changes (e.g. due to replacement of the supervisor or
    clearing of the SSH config on the switch), you may need to repeat
    this step and remove the old hostkey from ~/.ssh/known_hosts.
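    If you would rather script this step than SSH to the switch by hand, the
    paramiko library (already a pre-requisite for Nexus support) can seed the
    known_hosts file. The following is a minimal sketch, not part of the
    plugin; the IP address, port and credentials are the placeholder values
    from the examples above:

        import os
        import paramiko

        known_hosts = os.path.expanduser('~/.ssh/known_hosts')
        client = paramiko.SSHClient()
        client.load_host_keys(known_hosts)   # assumes the file already exists
        # Accept and record the switch's host key if it is not known yet.
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect('10.0.0.1', port=22, username='admin',
                       password='mySecretPasswordForNexus')
        client.save_host_keys(known_hosts)
        client.close()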
6. Plugin Persistence framework setup:

6a. Create the quantum_l2network database in mysql with the following command -

    mysql -u<mysqlusername> -p<mysqlpassword> -e "create database quantum_l2network"

6b. Enter the quantum_l2network database configuration info in the
    quantum/plugins/cisco/conf/db_conn.ini file.

6c. If there is a change in the plugin configuration, the service would need
    to be restarted after dropping and re-creating the database using
    the following commands -

    mysql -u<mysqlusername> -p<mysqlpassword> -e "drop database quantum_l2network"
    mysql -u<mysqlusername> -p<mysqlpassword> -e "create database quantum_l2network"
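    To double-check from Python that the database exists and is reachable
    with the credentials you plan to use (a sketch only; it assumes the
    MySQL-python bindings are installed):

        import MySQLdb

        conn = MySQLdb.connect(host='127.0.0.1', user='<mysqlusername>',
                               passwd='<mysqlpassword>')
        cursor = conn.cursor()
        cursor.execute("SHOW DATABASES LIKE 'quantum_l2network'")
        # Prints 'present' once step 6a has been completed.
        print 'present' if cursor.fetchone() else 'missing'
        conn.close()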
7. Verify that you have the correct credentials for each IP address listed
   in quantum/plugins/cisco/conf/credentials.ini. Example:

   # Provide the UCSM credentials; create a separate entry for each UCSM
   # used in your system.
   # UCSM IP address, username and password.
   [10.0.0.2]
   username=admin
   password=mySecretPasswordForUCSM

   # Provide the Nexus credentials, if you are using Nexus switches.
   # If not, this will be ignored.
   [10.0.0.1]
   username=admin
   password=mySecretPasswordForNexus

   In general, make sure that every UCSM and Nexus switch used in your system
   has a credential entry in the above file. This is required for the system to
   be able to communicate with those switches.
8. Configure the UCS systems' information in your deployment by editing the
   quantum/plugins/cisco/conf/ucs_inventory.ini file. You can configure multiple
   UCSMs per deployment, multiple chassis per UCSM, and multiple blades per
   chassis. Chassis ID and blade ID can be obtained from the UCSM (they will
   typically be numbers like 1, 2, 3, etc.). Also make sure that you put the exact
   hostname as nova sees it (the host column in the services table of the nova
   DB will give you that information).

   [ucsm-1]
   ip_address = <put_ucsm_ip_address_here>
   [[chassis-1]]
   chassis_id = <put_the_chassis_id_here>
   [[[blade-1]]]
   blade_id = <put_blade_id_here>
   host_name = <put_hostname_here>
   [[[blade-2]]]
   blade_id = <put_blade_id_here>
   host_name = <put_hostname_here>
   [[[blade-3]]]
   blade_id = <put_blade_id_here>
   host_name = <put_hostname_here>

   [ucsm-2]
   ip_address = <put_ucsm_ip_address_here>
   [[chassis-1]]
   chassis_id = <put_the_chassis_id_here>
   [[[blade-1]]]
   blade_id = <put_blade_id_here>
   host_name = <put_hostname_here>
   [[[blade-2]]]
   blade_id = <put_blade_id_here>
   host_name = <put_hostname_here>
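   Since this file is read with python-configobj (one of the pre-requisites),
   you can verify that your nesting of [ucsm-x]/[[chassis-y]]/[[[blade-z]]]
   sections parses as intended. The walk below is an illustrative sketch,
   not plugin code:

       from configobj import ConfigObj

       inventory = ConfigObj('quantum/plugins/cisco/conf/ucs_inventory.ini')
       for ucsm_name, ucsm in inventory.items():
           print 'UCSM %s at %s' % (ucsm_name, ucsm['ip_address'])
           for chassis in ucsm.values():
               if not isinstance(chassis, dict):
                   continue    # skip the scalar ip_address entry
               for blade in chassis.values():
                   if isinstance(blade, dict):
                       print '  chassis %s blade %s -> host %s' % (
                           chassis['chassis_id'], blade['blade_id'],
                           blade['host_name'])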
9. Start the Quantum service. If something doesn't work, verify that
   your configuration of each of the above files hasn't gone a little kaka.
   Once you've put right what once went wrong, leap on.
Multi NIC support for VMs
-------------------------
As indicated earlier, if your Nova setup has a project with more than one network,
Nova will try to create a virtual network interface (VIF) on the VM for each of those
networks. That implies:

(1) You should create the same number of networks in Quantum as in your Nova
    project.

(2) Before each VM is instantiated, you should create Quantum ports on each of those
    networks. These ports need to be created using the following REST call:

    POST /1.0/extensions/csco/tenants/{tenant_id}/multiport/

    with request body:

    {'multiport':
     {'status': 'ACTIVE',
      'net_id_list': net_id_list,
      'ports_desc': {'key': 'value'}}}

    where net_id_list is a list of network IDs: [netid1, netid2, ...]. The "ports_desc"
    dictionary is reserved for later use. For now, the same structure in terms of the
    dictionary name, key and value should be used.

    The corresponding CLI for this operation is as follows:

    PYTHONPATH=. python quantum/plugins/cisco/client/cli.py create_multiport <tenant_id> <net_id1,net_id2,...>

    (Note that you should not be using the create port core API in the above case.)
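    If you want to drive the multiport call without the packaged CLI, the
    request can be issued with nothing but the standard library. The sketch
    below is illustrative (not part of the plugin); the host, port, tenant
    and network IDs are the placeholder values used elsewhere in this README:

        import httplib
        import json

        body = json.dumps(
            {'multiport':
             {'status': 'ACTIVE',
              'net_id_list': ['c4a2bea7-a528-4caf-b16e-80397cd1663a',
                              '0e85e924-6ef6-40c1-9f7a-3520ac6888b3'],
              'ports_desc': {'key': 'value'}}})
        conn = httplib.HTTPConnection('10.10.2.6', 9696)
        conn.request('POST', '/1.0/extensions/csco/tenants/demo/multiport/',
                     body, {'Content-Type': 'application/json'})
        response = conn.getresponse()
        print response.status, response.read()
        conn.close()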
Using the Command Line Client to work with this Plugin
------------------------------------------------------
A command line client is packaged with this plugin. This module can be used
to invoke the core API as well as the extensions API, so that you don't have
to switch between different CLI modules (it internally invokes the Quantum
CLI module for the core APIs to ensure consistency when using either). This
command line client can be invoked as follows:

PYTHONPATH=.:tools python quantum/plugins/cisco/client/cli.py
1. Creating the network

# PYTHONPATH=. python quantum/plugins/cisco/client/cli.py create_net -H 10.10.2.6 demo net1
Created a new Virtual Network with ID: c4a2bea7-a528-4caf-b16e-80397cd1663a
for Tenant demo

2. Listing the networks

# PYTHONPATH=. python quantum/plugins/cisco/client/cli.py list_nets -H 10.10.2.6 demo
Virtual Networks for Tenant demo
    Network ID: 0e85e924-6ef6-40c1-9f7a-3520ac6888b3
    Network ID: c4a2bea7-a528-4caf-b16e-80397cd1663a

3. Creating one port on each of the networks

# PYTHONPATH=. python quantum/plugins/cisco/client/cli.py create_multiport -H 10.10.2.6 demo c4a2bea7-a528-4caf-b16e-80397cd1663a,0e85e924-6ef6-40c1-9f7a-3520ac6888b3
Created ports: {u'ports': [{u'id': u'118ac473-294d-480e-8f6d-425acbbe81ae'}, {u'id': u'996e84b8-2ed3-40cf-be75-de17ff1214c4'}]}

4. List all the ports on a network

# PYTHONPATH=. python quantum/plugins/cisco/client/cli.py list_ports -H 10.10.2.6 demo c4a2bea7-a528-4caf-b16e-80397cd1663a
Ports on Virtual Network: c4a2bea7-a528-4caf-b16e-80397cd1663a
for Tenant: demo
    Logical Port: 118ac473-294d-480e-8f6d-425acbbe81ae

5. Show the details of a port

# PYTHONPATH=. python quantum/plugins/cisco/client/cli.py show_port -H 10.10.2.6 demo c4a2bea7-a528-4caf-b16e-80397cd1663a 118ac473-294d-480e-8f6d-425acbbe81ae
Logical Port ID: 118ac473-294d-480e-8f6d-425acbbe81ae
administrative State: ACTIVE
interface: <none>
on Virtual Network: c4a2bea7-a528-4caf-b16e-80397cd1663a
for Tenant: demo
6. Start the VM instance using Nova

Note that when using UCS and the 802.1Qbh features, the association of the
VIF-ID (also referred to as interface ID) on the VM's NIC with a port will
happen automatically when the VM is instantiated. At this point, doing a
show_port will reveal the VIF-ID associated with the port. To indicate that
this VIF-ID is still detached from the network it would eventually be on, you
will see the suffix "(detached)" on the VIF-ID. This indicates that although
the VIF-ID and the port have been associated, the VIF still does not have
connectivity to the network on which the port resides. That connectivity
will be established only after the plug/attach operation is performed (as
described in the next step).

# PYTHONPATH=. python quantum/plugins/cisco/client/cli.py show_port demo c4a2bea7-a528-4caf-b16e-80397cd1663a 118ac473-294d-480e-8f6d-425acbbe81ae
Logical Port ID: 118ac473-294d-480e-8f6d-425acbbe81ae
administrative State: ACTIVE
interface: b73e3585-d074-4379-8dde-931c0fc4db0e(detached)
on Virtual Network: c4a2bea7-a528-4caf-b16e-80397cd1663a
for Tenant: demo
7. Plug interface and port into the network

Use the interface information obtained in step 6 to plug the interface into
the network.

# PYTHONPATH=. python quantum/plugins/cisco/client/cli.py plug_iface demo c4a2bea7-a528-4caf-b16e-80397cd1663a 118ac473-294d-480e-8f6d-425acbbe81ae b73e3585-d074-4379-8dde-931c0fc4db0e
Plugged interface b73e3585-d074-4379-8dde-931c0fc4db0e
into Logical Port: 118ac473-294d-480e-8f6d-425acbbe81ae
on Virtual Network: c4a2bea7-a528-4caf-b16e-80397cd1663a
for Tenant: demo
8. Unplug an interface and port from the network

# PYTHONPATH=. python quantum/plugins/cisco/client/cli.py unplug_iface demo c4a2bea7-a528-4caf-b16e-80397cd1663a 118ac473-294d-480e-8f6d-425acbbe81ae
Unplugged interface from Logical Port: 118ac473-294d-480e-8f6d-425acbbe81ae
on Virtual Network: c4a2bea7-a528-4caf-b16e-80397cd1663a
for Tenant: demo
Note: After unplugging, if you check the details of the port, you will
see the VIF-ID associated with the port (but now suffixed with the state
"detached"). At this point, it is possible to plug the VIF into the network
again making use of the same VIF-ID. In general, once associated, the VIF-ID
cannot be disassociated from the port until the VM is terminated. After the
VM is terminated, the VIF-ID will be automatically disassociated from the
port. To summarize, association and disassociation of the VIF-ID with a port
happen automatically at the time of creating and terminating the VM. The
connectivity of the VIF to the network is controlled by the user via the
plug and unplug operations.
How to test the installation
----------------------------
The unit tests are located at quantum/plugins/cisco/tests/unit. They can be
executed from the main folder using run_tests.sh, or, for more detailed
results, using run_tests.py.
1. All unit tests (needs environment setup as indicated in the pre-requisites):

   ./run_tests.sh -N quantum.plugins.cisco.tests.unit

   or by modifying the environment variable to point to the plugin directory:

   In bash : export PLUGIN_DIR=quantum/plugins/cisco
   tcsh/csh : setenv PLUGIN_DIR quantum/plugins/cisco

   ./run_tests.sh -N

   Another option is to execute the python script run_tests.py:

   python run_tests.py quantum.plugins.cisco.tests.unit
2. Testing the core API (without UCS/Nexus/RHEL hardware, and can be run on
   Ubuntu):
   Device-specific plugins can be disabled by commenting out the entries in:
   etc/quantum/plugins/cisco/cisco_plugins.ini
   The Core API can be tested by initially disabling all device plugins, then
   enabling just the UCS plugins, and finally enabling both the UCS and the
   Nexus plugins.
   Execute the test script as follows:

   ./run_tests.sh -N quantum.plugins.cisco.tests.unit.test_l2networkApi

   or

   python run_tests.py quantum.plugins.cisco.tests.unit.test_l2networkApi
3. Specific Plugin unit test (needs environment setup as indicated in the
   pre-requisites):

   ./run_tests.sh -N quantum.plugins.cisco.tests.unit.<name_of_the_module>

   or

   python run_tests.py quantum.plugins.cisco.tests.unit.<name_of_the_module>

   E.g.:

   python run_tests.py quantum.plugins.cisco.tests.unit.test_ucs_plugin

   To run specific tests, use the following:

   python run_tests.py quantum.plugins.cisco.tests.unit.<name_of_the_module>:<ClassName>.<funcName>

   E.g.:

   python run_tests.py quantum.plugins.cisco.tests.unit.test_ucs_plugin:UCSVICTestPlugin.test_create_port
4. Testing the Extension API:
   The script is placed along with the other cisco unit tests. The location may
   change later.
   Location: quantum/plugins/cisco/tests/unit/test_cisco_extension.py

   The script can be executed by:

   ./run_tests.sh -N quantum.plugins.cisco.tests.unit.test_cisco_extension

   or

   python run_tests.py quantum.plugins.cisco.tests.unit.test_cisco_extension

Bingo bango bongo! That's it! Thanks for taking the leap into Quantum.

...Oh, boy!

File: quantum/plugins/cisco/nova/quantum_port_aware_scheduler.py

@@ -17,33 +17,48 @@
 # @author: Sumit Naiksatam, Cisco Systems, Inc.
 #
 
+"""
+Quantum Port Aware Scheduler Implementation
+"""
+
 from nova import exception as excp
 from nova import flags
 from nova import log as logging
+from nova.openstack.common import cfg
 from nova.scheduler import driver
+from nova.scheduler import chance
 from quantum.client import Client
-from quantum.common.wsgi import Serializer
 
-LOG = logging.getLogger('quantum.plugins.cisco.nova.quantum_aware_scheduler')
+LOG = logging.getLogger(__name__)
+
+quantum_opts = [
+    cfg.StrOpt('quantum_connection_host',
+               default='127.0.0.1',
+               help='HOST for connecting to quantum'),
+    cfg.StrOpt('quantum_connection_port',
+               default='9696',
+               help='PORT for connecting to quantum'),
+    cfg.StrOpt('quantum_default_tenant_id',
+               default="default",
+               help='Default tenant id when creating quantum networks'),
+]
 
 FLAGS = flags.FLAGS
-flags.DEFINE_string('quantum_host', "127.0.0.1",
-                    'IP address of the quantum network service.')
-flags.DEFINE_integer('quantum_port', 9696,
-                     'Listening port for Quantum network service')
+FLAGS.register_opts(quantum_opts)
 
-HOST = FLAGS.quantum_host
-PORT = FLAGS.quantum_port
+HOST = FLAGS.quantum_connection_host
+PORT = FLAGS.quantum_connection_port
 USE_SSL = False
 
-ACTION_PREFIX_EXT = '/v1.0'
-ACTION_PREFIX_CSCO = ACTION_PREFIX_EXT + \
-    '/extensions/csco/tenants/{tenant_id}'
+VERSION = '1.0'
+URI_PREFIX_CSCO = '/extensions/csco/tenants/{tenant_id}'
 TENANT_ID = 'nova'
 CSCO_EXT_NAME = 'Cisco Nova Tenant'
 ACTION = '/schedule_host'
 
-class QuantumPortAwareScheduler(driver.Scheduler):
+class QuantumPortAwareScheduler(chance.ChanceScheduler):
     """
     Quantum network service dependent scheduler
     Obtains the hostname from Quantum using an extension API
@@ -52,8 +67,9 @@ class QuantumPortAwareScheduler(driver.Scheduler):
         # We have to send a dummy tenant name here since the client
         # needs some tenant name, but the tenant name will not be used
         # since the extensions URL does not require it
-        client = Client(HOST, PORT, USE_SSL, format='json',
-                        action_prefix=ACTION_PREFIX_EXT, tenant="dummy")
+        LOG.debug("Initializing Cisco Quantum Port-aware Scheduler...")
+        client = Client(HOST, PORT, USE_SSL, format='json', version=VERSION,
+                        uri_prefix="", tenant="dummy", logger=LOG)
         request_url = "/extensions"
         data = client.do_request('GET', request_url)
         LOG.debug("Obtained supported extensions from Quantum: %s" % data)
@@ -63,17 +79,19 @@ class QuantumPortAwareScheduler(driver.Scheduler):
                 LOG.debug("Quantum plugin supports required \"%s\" extension"
                           "for the scheduler." % name)
                 return
 
         LOG.error("Quantum plugin does not support required \"%s\" extension"
                   " for the scheduler. Scheduler will quit." % CSCO_EXT_NAME)
         raise excp.ServiceUnavailable()
 
-    def schedule(self, context, topic, *args, **kwargs):
+    def _schedule(self, context, topic, request_spec, **kwargs):
         """Gets the host name from the Quantum service"""
-        instance_id = kwargs['instance_id']
+        LOG.debug("Cisco Quantum Port-aware Scheduler is scheduling...")
+        instance_id = request_spec['instance_properties']['uuid']
         user_id = \
-            kwargs['request_spec']['instance_properties']['user_id']
+            request_spec['instance_properties']['user_id']
         project_id = \
-            kwargs['request_spec']['instance_properties']['project_id']
 
         instance_data_dict = \
             {'novatenant': \
@@ -82,14 +100,15 @@ class QuantumPortAwareScheduler(driver.Scheduler):
              {'user_id': user_id,
               'project_id': project_id}}}
 
-        client = Client(HOST, PORT, USE_SSL, format='json', tenant=TENANT_ID,
-                        action_prefix=ACTION_PREFIX_CSCO)
+        client = Client(HOST, PORT, USE_SSL, format='json', version=VERSION,
+                        uri_prefix=URI_PREFIX_CSCO, tenant=TENANT_ID,
+                        logger=LOG)
         request_url = "/novatenants/" + project_id + ACTION
         data = client.do_request('PUT', request_url, body=instance_data_dict)
 
         hostname = data["host_list"]["host_1"]
         if not hostname:
-            raise driver.NoValidHost(_("Scheduler was unable to locate a host"
+            raise excp.NoValidHost(_("Scheduler was unable to locate a host"
                                        " for this request. Is the appropriate"
                                        " service running?"))

File: quantum/plugins/cisco/nova/vifdirect.py

@@ -18,31 +18,37 @@
 """VIF drivers for interface type direct."""
 
 from nova import exception as excp
 from nova import flags
 from nova import log as logging
-from nova.network import linux_net
-from nova.virt.libvirt import netutils
-from nova import utils
+from nova.openstack.common import cfg
 from nova.virt.vif import VIFDriver
 from quantum.client import Client
-from quantum.common.wsgi import Serializer
 
-LOG = logging.getLogger('quantum.plugins.cisco.nova.vifdirect')
+LOG = logging.getLogger(__name__)
+
+quantum_opts = [
+    cfg.StrOpt('quantum_connection_host',
+               default='127.0.0.1',
+               help='HOST for connecting to quantum'),
+    cfg.StrOpt('quantum_connection_port',
+               default='9696',
+               help='PORT for connecting to quantum'),
+    cfg.StrOpt('quantum_default_tenant_id',
+               default="default",
+               help='Default tenant id when creating quantum networks'),
+]
 
 FLAGS = flags.FLAGS
-flags.DEFINE_string('quantum_host', "127.0.0.1",
-                    'IP address of the quantum network service.')
-flags.DEFINE_integer('quantum_port', 9696,
-                     'Listening port for Quantum network service')
+FLAGS.register_opts(quantum_opts)
 
-HOST = FLAGS.quantum_host
-PORT = FLAGS.quantum_port
+HOST = FLAGS.quantum_connection_host
+PORT = FLAGS.quantum_connection_port
 USE_SSL = False
 
-TENANT_ID = 'nova'
-ACTION_PREFIX_EXT = '/v1.0'
-ACTION_PREFIX_CSCO = ACTION_PREFIX_EXT + \
-    '/extensions/csco/tenants/{tenant_id}'
+VERSION = '1.0'
+URI_PREFIX_CSCO = '/extensions/csco/tenants/{tenant_id}'
 TENANT_ID = 'nova'
 CSCO_EXT_NAME = 'Cisco Nova Tenant'
 ASSOCIATE_ACTION = '/associate_port'
@@ -55,8 +61,9 @@ class Libvirt802dot1QbhDriver(VIFDriver):
         # We have to send a dummy tenant name here since the client
         # needs some tenant name, but the tenant name will not be used
         # since the extensions URL does not require it
-        client = Client(HOST, PORT, USE_SSL, format='json',
-                        action_prefix=ACTION_PREFIX_EXT, tenant="dummy")
+        LOG.debug("Initializing Cisco Quantum VIF driver...")
+        client = Client(HOST, PORT, USE_SSL, format='json', version=VERSION,
+                        uri_prefix="", tenant="dummy", logger=LOG)
         request_url = "/extensions"
         data = client.do_request('GET', request_url)
         LOG.debug("Obtained supported extensions from Quantum: %s" % data)
@@ -73,8 +80,8 @@ class Libvirt802dot1QbhDriver(VIFDriver):
     def _update_configurations(self, instance, network, mapping, action):
         """Gets the device name and the profile name from Quantum"""
-        instance_id = instance['id']
+        LOG.debug("Cisco Quantum VIF driver performing: %s" % (action))
+        instance_id = instance['uuid']
         user_id = instance['user_id']
         project_id = instance['project_id']
         vif_id = mapping['vif_uuid']
@@ -87,8 +94,9 @@ class Libvirt802dot1QbhDriver(VIFDriver):
               'project_id': project_id,
               'vif_id': vif_id}}}
 
-        client = Client(HOST, PORT, USE_SSL, format='json', tenant=TENANT_ID,
-                        action_prefix=ACTION_PREFIX_CSCO)
+        client = Client(HOST, PORT, USE_SSL, format='json', version=VERSION,
+                        uri_prefix=URI_PREFIX_CSCO, tenant=TENANT_ID,
+                        logger=LOG)
         request_url = "/novatenants/" + project_id + action
         data = client.do_request('PUT', request_url, body=instance_data_dict)