Cross Pod L3 Networking - Part2

Implement cross-pod L3 networking functionality. In this second
patch, we implement an extra job to configure extra routes. The README
is updated to introduce this networking solution and how to test
it with DevStack.

Change-Id: I3dafd9ef15c211a941e85b690be1992416d3f3eb
zhiyuan_cai 2016-01-19 15:13:21 +08:00
parent 81b45f2c1d
commit 4d89a0faa7
13 changed files with 745 additions and 30 deletions

README.md

@ -38,7 +38,7 @@ License: Apache 2.0
Now stateless design can be played with DevStack.
- 1 Git clone DevStack.
- 2 Git clone Tricircle, or just download devstack/local.conf.sample.
- 3 Copy devstack/local.conf.sample to DevStack folder and rename it to
local.conf, change password in the file if needed.
- 4 Run DevStack.
@ -64,7 +64,6 @@ as following:
value is "RegionOne", we use it as the region for top OpenStack(Tricircle);
"Pod1" is the region set via "POD_REGION_NAME", new configuration option
introduced by Tricircle, we use it as the bottom OpenStack.
- 6 Create pod instances for Tricircle and bottom OpenStack
```
curl -X POST http://127.0.0.1:19999/v1.0/pods -H "Content-Type: application/json" \
@ -72,7 +71,6 @@ curl -X POST http://127.0.0.1:19999/v1.0/pods -H "Content-Type: application/json
curl -X POST http://127.0.0.1:19999/v1.0/pods -H "Content-Type: application/json" \
-H "X-Auth-Token: $token" -d '{"pod": {"pod_name": "Pod1", "az_name": "az1"}}'
```
Pay attention to the "pod_name" parameter we specify when creating a pod. The
pod name should exactly match the region name registered in Keystone since it is used
@ -109,3 +107,192 @@ cinder --debug show $volume_id
cinder --debug delete $volume_id
cinder --debug list
```
## Cross-pod L3 networking with DevStack
The stateless design now supports cross-pod L3 networking.
### Introduction
To achieve cross-pod L3 networking, Tricircle utilizes a shared provider VLAN
network in the first phase. We are considering using a DCI controller later to
create a multi-segment VLAN network or a VxLAN network for L3 networking
purposes. When a subnet is attached to a router in the top pod, Tricircle not
only creates the corresponding subnet and router in the bottom pod, but also
creates a VLAN-type "bridge" network. Both the tenant network and the "bridge"
network are attached to the bottom router. Each tenant is allocated one VLAN,
which is shared by the tenant's "bridge" networks across bottom pods. The
CIDRs of one tenant's "bridge" networks are also the same, so the router
interfaces in "bridge" networks across different bottom pods can communicate
with each other via the provider VLAN network, by adding an extra route as
follows:
```
destination: CIDR of tenant network in another bottom pod
nexthop: "bridge" network interface ip in another bottom pod
```
When a server sends a packet whose receiver is in another network and in
another bottom pod, the packet first goes to the router namespace, is then
forwarded to the router namespace in the other bottom pod according to the
extra route, and is finally sent to the target server. This configuration job
is triggered when a user attaches a subnet to a router in the top pod, and is
finished asynchronously.
Currently cross-pod L2 networking is not supported, so tenant networks cannot
cross pods; that is to say, one network in the top pod can only be located in
one bottom pod, and a tenant network is bound to a bottom pod. Otherwise we
could not correctly configure the extra route, since one destination CIDR
would have more than one possible nexthop address.
> When cross-pod L2 networking is introduced, L2GW will be used to connect L2
> network in different pods. No extra route is required to connect L2 network.
> All L3 traffic will be forwarded to the local L2 network, then go to the
> server in another pod via the L2GW.
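The extra-route logic described above can be sketched in a few lines of
Python. This is an illustrative sketch, not Tricircle code; the pod names,
CIDRs and bridge IPs are hypothetical examples:

```python
# Illustrative sketch of the extra-route computation described above.
# All pod names and addresses here are hypothetical.

def build_extra_routes(pods):
    """For each pod, add one route per remote pod: the remote tenant
    CIDR is reachable via the remote router's bridge interface IP."""
    routes = {}
    for name in pods:
        routes[name] = [
            {'destination': other['tenant_cidr'],
             'nexthop': other['bridge_ip']}
            for other_name, other in pods.items()
            if other_name != name
        ]
    return routes

pods = {
    'Pod1': {'tenant_cidr': '10.0.1.0/24', 'bridge_ip': '100.0.1.1'},
    'Pod2': {'tenant_cidr': '10.0.2.0/24', 'bridge_ip': '100.0.1.2'},
}

extra_routes = build_extra_routes(pods)
```

Pod1's router gets a route to 10.0.2.0/24 via 100.0.1.2, which is exactly the
kind of extra route the asynchronous job configures.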
We use "availability_zone_hints" attribute for user to specify the bottom pod
he wants to create the bottom network. Currently we do not support attaching
a network to a router without setting "availability_zone_hints" attribute of
the network.
### Prerequisite
To try cross-pod L3 networking, two nodes are needed: one to run Tricircle
and one bottom pod, the other to run another bottom pod. Each node has two
network interfaces, one for the management network and one for the provider
VLAN network. For the VLAN network, the physical network infrastructure should
support VLAN tagging.
### Setup
In node1,
- 1 Git clone DevStack.
- 2 Git clone Tricircle, or just download devstack/local.conf.node_1.sample.
- 3 Copy devstack/local.conf.node_1.sample to DevStack folder and rename it to
local.conf, change password in the file if needed.
- 4 Change the following options according to your environment:
```
HOST_IP=10.250.201.24 - change to your management interface ip
TENANT_VLAN_RANGE=2001:3000 - change to VLAN range your physical network supports
PHYSICAL_NETWORK=bridge - change to whatever you like
OVS_PHYSICAL_BRIDGE=br-bridge - change to whatever you like
```
Tricircle currently doesn't support security groups, so we use these two
options to disable security group functionality.
```
Q_USE_SECGROUP=False
LIBVIRT_FIREWALL_DRIVER=nova.virt.firewall.NoopFirewallDriver
```
- 5 Create OVS bridge and attach the VLAN network interface to it
```
sudo ovs-vsctl add-br br-bridge
sudo ovs-vsctl add-port br-bridge eth1
```
Here br-bridge is the OVS bridge name you configured via OVS_PHYSICAL_BRIDGE,
and eth1 is the device name of your VLAN network interface.
- 6 Run DevStack.
- 7 After DevStack successfully starts, begin to setup node2.
In node2,
- 1 Git clone DevStack.
- 2 Git clone Tricircle, or just download devstack/local.conf.node_2.sample.
- 3 Copy devstack/local.conf.node_2.sample to DevStack folder and rename it to
local.conf, change password in the file if needed.
- 4 Change the following options according to your environment:
```
HOST_IP=10.250.201.25 - change to your management interface ip
KEYSTONE_SERVICE_HOST=10.250.201.24 - change to management interface ip of node1
KEYSTONE_AUTH_HOST=10.250.201.24 - change to management interface ip of node1
GLANCE_SERVICE_HOST=10.250.201.24 - change to management interface ip of node1
TENANT_VLAN_RANGE=2001:3000 - change to VLAN range your physical network supports
PHYSICAL_NETWORK=bridge - change to whatever you like
OVS_PHYSICAL_BRIDGE=br-bridge - change to whatever you like
```
- 5 Create OVS bridge and attach the VLAN network interface to it
```
sudo ovs-vsctl add-br br-bridge
sudo ovs-vsctl add-port br-bridge eth1
```
Here br-bridge is the OVS bridge name you configured via OVS_PHYSICAL_BRIDGE,
and eth1 is the device name of your VLAN network interface.
- 6 Run DevStack.
- 7 After DevStack successfully starts, the setup is finished.
### How to play
All the following operations are performed on node1.
- 1 Check if services have been correctly registered. Run "openstack endpoint
list" and you should get output similar to the following:
```
+----------------------------------+-----------+--------------+----------------+
| ID | Region | Service Name | Service Type |
+----------------------------------+-----------+--------------+----------------+
| 1fadbddef9074f81b986131569c3741e | RegionOne | tricircle | Cascading |
| a5c5c37613244cbab96230d9051af1a5 | RegionOne | ec2 | ec2 |
| 809a3f7282f94c8e86f051e15988e6f5 | Pod2 | neutron | network |
| e6ad9acc51074f1290fc9d128d236bca | Pod1 | neutron | network |
| aee8a185fa6944b6860415a438c42c32 | RegionOne | keystone | identity |
| 280ebc45bf9842b4b4156eb5f8f9eaa4 | RegionOne | glance | image |
| aa54df57d7b942a1a327ed0722dba96e | Pod2 | nova_legacy | compute_legacy |
| aa25ae2a3f5a4e4d8bc0cae2f5fbb603 | Pod2 | nova | compute |
| 932550311ae84539987bfe9eb874dea3 | RegionOne | nova_legacy | compute_legacy |
| f89fbeffd7e446d0a552e2a6cf7be2ec | Pod1 | nova | compute |
| e2e19c164060456f8a1e75f8d3331f47 | Pod2 | ec2 | ec2 |
| de698ad5c6794edd91e69f0e57113e97 | RegionOne | nova | compute |
| 8a4b2332d2a4460ca3f740875236a967 | Pod2 | keystone | identity |
| b3ad80035f8742f29d12df67bdc2f70c | RegionOne | neutron | network |
+----------------------------------+-----------+--------------+----------------+
```
"RegionOne" is the region you set in local.conf via REGION_NAME in node1, whose
default value is "RegionOne", we use it as the region for Tricircle; "Pod1" is
the region set via POD_REGION_NAME, new configuration option introduced by
Tricircle, we use it as the bottom OpenStack; "Pod2" is the region you set via
REGION_NAME in node2, we use it as another bottom OpenStack.
- 2 Create pod instances for Tricircle and bottom OpenStack
```
curl -X POST http://127.0.0.1:19999/v1.0/pods -H "Content-Type: application/json" \
-H "X-Auth-Token: $token" -d '{"pod": {"pod_name": "RegionOne"}}'
curl -X POST http://127.0.0.1:19999/v1.0/pods -H "Content-Type: application/json" \
-H "X-Auth-Token: $token" -d '{"pod": {"pod_name": "Pod1", "az_name": "az1"}}'
curl -X POST http://127.0.0.1:19999/v1.0/pods -H "Content-Type: application/json" \
-H "X-Auth-Token: $token" -d '{"pod": {"pod_name": "Pod2", "az_name": "az2"}}'
```
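The same pod-registration calls can also be scripted with the Python standard
library instead of curl. A minimal sketch following the commands above (the
token value is a placeholder; against a real deployment you would obtain a
token first and then send the request with `urlopen(req)`):

```python
# Build the pod-registration request shown in the curl commands above.
# The token here is a hypothetical placeholder.
import json
import urllib.request

def make_pod_request(pod_name, token, az_name=None):
    """Construct the POST request for the Tricircle pods API."""
    pod = {'pod_name': pod_name}
    if az_name:
        pod['az_name'] = az_name
    body = json.dumps({'pod': pod}).encode('utf-8')
    return urllib.request.Request(
        'http://127.0.0.1:19999/v1.0/pods', data=body,
        headers={'Content-Type': 'application/json',
                 'X-Auth-Token': token})

req = make_pod_request('Pod1', 'some-token', az_name='az1')
```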
- 3 Create network with AZ scheduler hints specified
```
curl -X POST http://127.0.0.1:9696/v2.0/networks -H "Content-Type: application/json" \
-H "X-Auth-Token: b5dc59ebfdb74dbfa2a6351682d10a6e" \
-d '{"network": {"name": "net1", "admin_state_up": true, "availability_zone_hints": ["az1"]}}'
curl -X POST http://127.0.0.1:9696/v2.0/networks -H "Content-Type: application/json" \
-H "X-Auth-Token: b5dc59ebfdb74dbfa2a6351682d10a6e" \
-d '{"network": {"name": "net2", "admin_state_up": true, "availability_zone_hints": ["az2"]}}'
```
Here we create two networks bound to Pod1 and Pod2 respectively.
- 4 Create necessary resources to boot virtual machines.
```
nova flavor-create test 1 1024 10 1
neutron subnet-create net1 10.0.1.0/24
neutron subnet-create net2 10.0.2.0/24
glance image-list
```
- 5 Boot virtual machines.
```
nova boot --flavor 1 --image $image_id --nic net-id=$net1_id --availability-zone az1 vm1
nova boot --flavor 1 --image $image_id --nic net-id=$net2_id --availability-zone az2 vm2
```
- 6 Create router and attach interface
```
neutron router-create router
neutron router-interface-add router $subnet1_id
neutron router-interface-add router $subnet2_id
```
- 7 Launch VNC console and check connectivity
By now the two networks are connected by the router, so the two virtual
machines should be able to communicate with each other; we can launch a VNC
console to check. Currently Tricircle doesn't support VNC proxy, so we need
to go to the bottom OpenStack to obtain a VNC console.
```
nova --os-region-name Pod1 get-vnc-console vm1 novnc
nova --os-region-name Pod2 get-vnc-console vm2 novnc
```
Log in to one virtual machine via VNC and you should find that it can "ping"
the other virtual machine. Security group functionality is disabled in the
bottom OpenStack, so there is no need to configure security group rules.


@ -0,0 +1,70 @@
#
# Sample DevStack local.conf.
#
# This sample file is intended to be used for your typical Tricircle DevStack
# multi-node environment. As this file configures, DevStack will setup two
# regions, one top region running Tricircle services, Keystone, Glance, Nova
# API gateway, Cinder API gateway and Neutron with Tricircle plugin; and one
# bottom region running original Nova, Cinder and Neutron.
#
# This file works with local.conf.node_2.sample to help you build a two-node
# three-region Tricircle environment. Keystone and Glance in top region are
# shared by services in all the regions.
#
# Some options need to be changed to adapt to your environment, see README.md
# for details.
#
[[local|localrc]]
DATABASE_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=password
ADMIN_PASSWORD=password
LOGFILE=/opt/stack/logs/stack.sh.log
VERBOSE=True
LOG_COLOR=True
SCREEN_LOGDIR=/opt/stack/logs
FIXED_RANGE=10.0.0.0/24
NETWORK_GATEWAY=10.0.0.1
FIXED_NETWORK_SIZE=256
FLOATING_RANGE=10.100.100.160/24
Q_FLOATING_ALLOCATION_POOL=start=10.100.100.160,end=10.100.100.192
PUBLIC_NETWORK_GATEWAY=10.100.100.3
Q_USE_SECGROUP=False
LIBVIRT_FIREWALL_DRIVER=nova.virt.firewall.NoopFirewallDriver
NEUTRON_CREATE_INITIAL_NETWORKS=False
HOST_IP=10.250.201.24
TENANT_VLAN_RANGE=2001:3000
OVS_PHYSICAL_BRIDGE=br-bridge
PHYSICAL_NETWORK=bridge
Q_ENABLE_TRICIRCLE=True
enable_plugin tricircle https://github.com/openstack/tricircle/
# Tricircle Services
enable_service t-api
enable_service t-ngw
enable_service t-cgw
enable_service t-job
# Use Neutron instead of nova-network
disable_service n-net
enable_service q-svc
enable_service q-svc1
enable_service q-dhcp
enable_service q-agt
enable_service q-l3
enable_service c-api
enable_service c-vol
enable_service c-sch
disable_service n-obj
disable_service c-bak
disable_service tempest
disable_service horizon


@ -0,0 +1,66 @@
#
# Sample DevStack local.conf.
#
# This sample file is intended to be used for your typical Tricircle DevStack
# multi-node environment. As this file configures, DevStack will setup one
# bottom region running original Nova, Cinder and Neutron.
#
# This file works with local.conf.node_1.sample to help you build a two-node
# three-region Tricircle environment. Keystone and Glance in top region are
# shared by services in all the regions.
#
# Some options need to be changed to adapt to your environment, see README.md
# for details.
#
[[local|localrc]]
DATABASE_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=password
ADMIN_PASSWORD=password
LOGFILE=/opt/stack/logs/stack.sh.log
VERBOSE=True
LOG_COLOR=True
SCREEN_LOGDIR=/opt/stack/logs
FIXED_RANGE=10.0.0.0/24
NETWORK_GATEWAY=10.0.0.1
FIXED_NETWORK_SIZE=256
FLOATING_RANGE=10.100.100.160/24
Q_FLOATING_ALLOCATION_POOL=start=10.100.100.160,end=10.100.100.192
PUBLIC_NETWORK_GATEWAY=10.100.100.3
Q_USE_SECGROUP=False
LIBVIRT_FIREWALL_DRIVER=nova.virt.firewall.NoopFirewallDriver
NEUTRON_CREATE_INITIAL_NETWORKS=False
HOST_IP=10.250.201.25
REGION_NAME=Pod2
SERVICE_HOST=$HOST_IP
KEYSTONE_SERVICE_HOST=10.250.201.24
KEYSTONE_AUTH_HOST=10.250.201.24
GLANCE_SERVICE_HOST=10.250.201.24
TENANT_VLAN_RANGE=2001:3000
OVS_PHYSICAL_BRIDGE=br-bridge
PHYSICAL_NETWORK=bridge
# Use Neutron instead of nova-network
disable_service n-net
enable_service q-svc
enable_service q-dhcp
enable_service q-agt
enable_service q-l3
enable_service c-api
enable_service c-vol
enable_service c-sch
disable_service n-obj
disable_service g-api
disable_service g-reg
disable_service c-bak
disable_service tempest
disable_service horizon


@ -1,8 +1,8 @@
#
# Sample DevStack local.conf.
#
# This sample file is intended to be used for your typical Tricircle DevStack
# environment that's running all of OpenStack on a single host.
#
# No changes to this sample configuration are required for this to work.
#
@ -27,8 +27,8 @@ Q_FLOATING_ALLOCATION_POOL=start=10.100.100.160,end=10.100.100.192
PUBLIC_NETWORK_GATEWAY=10.100.100.3
TENANT_VLAN_RANGE=2001:3000
PHYSICAL_NETWORK=bridge
Q_USE_SECGROUP=False
LIBVIRT_FIREWALL_DRIVER=nova.virt.firewall.NoopFirewallDriver
Q_ENABLE_TRICIRCLE=True
enable_plugin tricircle https://github.com/openstack/tricircle/


@ -276,7 +276,6 @@ if [[ "$Q_ENABLE_TRICIRCLE" == "True" ]]; then
iniset $NEUTRON_CONF client auto_refresh_endpoint True
iniset $NEUTRON_CONF client top_pod_name $REGION_NAME
iniset $NEUTRON_CONF tricircle bridge_segmentation_id `echo $TENANT_VLAN_RANGE | awk -F: '{print $2}'`
iniset $NEUTRON_CONF tricircle bridge_physical_network $PHYSICAL_NETWORK
fi


@ -357,6 +357,34 @@ class Client(object):
handle = self.service_handle_map[service]
return handle.handle_create(cxt, resource, *args, **kwargs)
@_safe_operation('update')
def update_resources(self, resource, cxt, *args, **kwargs):
"""Update resource in pod of top layer
Directly invoke this method to update resources, or use
update_(resource)s (self, cxt, *args, **kwargs). These methods are
automatically generated according to the supported resources of each
ResourceHandle class.
:param resource: resource type
:param cxt: context object
:param args, kwargs: passed according to resource type
--------------------------
resource -> args -> kwargs
--------------------------
router -> body -> none
--------------------------
:return: a dict containing resource information
:raises: EndpointNotAvailable
"""
if cxt.is_admin and not cxt.auth_token:
cxt.auth_token = self._get_admin_token()
cxt.tenant = self._get_admin_project_id()
service = self.resource_service_map[resource]
handle = self.service_handle_map[service]
return handle.handle_update(cxt, resource, *args, **kwargs)
@_safe_operation('delete')
def delete_resources(self, resource, cxt, resource_id):
"""Delete resource in pod of top layer


@ -47,9 +47,9 @@ client_opts = [
cfg.CONF.register_opts(client_opts, group='client')
LIST, CREATE, DELETE, GET, ACTION, UPDATE = 1, 2, 4, 8, 16, 32
operation_index_map = {'list': LIST, 'create': CREATE, 'delete': DELETE,
'get': GET, 'action': ACTION, 'update': UPDATE}
LOG = logging.getLogger(__name__)
@ -119,7 +119,7 @@ class NeutronResourceHandle(ResourceHandle):
support_resource = {'network': LIST | CREATE | DELETE | GET,
'subnet': LIST | CREATE | DELETE | GET,
'port': LIST | CREATE | DELETE | GET,
'router': LIST | CREATE | ACTION | UPDATE,
'security_group': LIST,
'security_group_rule': LIST}
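The support matrix above is a plain bitmask. A minimal sketch of how the
flags combine and are checked, mirroring the constants in this hunk:

```python
# Operation flags as powers of two, matching the handler's constants.
LIST, CREATE, DELETE, GET, ACTION, UPDATE = 1, 2, 4, 8, 16, 32

# After this patch the router resource also supports UPDATE.
router_support = LIST | CREATE | ACTION | UPDATE

def is_supported(mask, operation):
    """An operation is supported when its bit is set in the mask."""
    return bool(mask & operation)
```

Checking `is_supported(router_support, UPDATE)` is how such a handler decides
whether to generate an `update_routers` helper for the resource.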
@ -152,6 +152,16 @@ class NeutronResourceHandle(ResourceHandle):
raise exceptions.EndpointNotAvailable(
'neutron', client.httpclient.endpoint_url)
def handle_update(self, cxt, resource, *args, **kwargs):
try:
client = self._get_client(cxt)
return getattr(client, 'update_%s' % resource)(
*args, **kwargs)[resource]
except q_exceptions.ConnectionFailed:
self.endpoint_url = None
raise exceptions.EndpointNotAvailable(
'neutron', client.httpclient.endpoint_url)
def handle_get(self, cxt, resource, resource_id):
try:
client = self._get_client(cxt)


@ -70,5 +70,12 @@ class XJobAPI(object):
return version_cap
def test_rpc(self, ctxt, payload):
return self.client.call(ctxt, 'test_rpc', payload=payload)
def configure_extra_routes(self, ctxt, router_id):
# NOTE(zhiyuan) this RPC is called by the plugin in Neutron server, whose
# control exchange is "neutron"; however, we start xjob without
# specifying its control exchange, so the default value "openstack" is
# used, thus we need to pass exchange as "openstack" here.
self.client.prepare(exchange='openstack').cast(
ctxt, 'configure_extra_routes', payload={'router': router_id})
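The exchange override noted in the comment above can be illustrated with a
small stand-in for the messaging client. This is not oslo.messaging itself,
just a hypothetical fake showing the prepare-then-cast pattern:

```python
# Minimal stand-in (not oslo.messaging) illustrating a casting RPC
# sent on an explicitly overridden exchange.

class FakeRPCClient:
    def __init__(self, exchange='neutron'):
        self.exchange = exchange
        self.casts = []

    def prepare(self, exchange):
        # Return a client bound to the overridden exchange, similar in
        # spirit to oslo.messaging's RPCClient.prepare().
        client = FakeRPCClient(exchange)
        client.casts = self.casts  # share the recorded calls
        return client

    def cast(self, ctxt, method, **kwargs):
        # cast() is fire-and-forget: record the call, return nothing.
        self.casts.append((self.exchange, method, kwargs))

client = FakeRPCClient()
client.prepare(exchange='openstack').cast(
    None, 'configure_extra_routes', payload={'router': 'router-id'})
```

The recorded call carries the "openstack" exchange even though the client was
created with "neutron", which is the behavior the NOTE relies on.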


@ -16,6 +16,7 @@
from oslo_config import cfg
import oslo_log.helpers as log_helpers
from oslo_log import log
from oslo_utils import uuidutils
from neutron.api.v2 import attributes
from neutron.common import exceptions
@ -32,6 +33,8 @@ from neutron.db import portbindings_db
from neutron.db import securitygroups_db
from neutron.db import sqlalchemyutils
from neutron.extensions import availability_zone as az_ext
from neutron.plugins.ml2.drivers import type_vlan
import neutron.plugins.ml2.models as ml2_models
from sqlalchemy import sql
@ -42,18 +45,13 @@ import tricircle.common.context as t_context
from tricircle.common.i18n import _
from tricircle.common.i18n import _LI
import tricircle.common.lock_handle as t_lock
from tricircle.common import xrpcapi
import tricircle.db.api as db_api
from tricircle.db import core
from tricircle.db import models
tricircle_opts = [
# TODO(zhiyuan) change to segmentation range
# currently all tenants share one VLAN id for bridge networks, should
# allocate one isolated segmentation id for each tenant later
cfg.IntOpt('bridge_segmentation_id',
default=0,
help='vlan id of l3 bridge network'),
cfg.StrOpt('bridge_physical_network',
default='',
help='name of l3 bridge physical network')
@ -65,6 +63,15 @@ cfg.CONF.register_opts(tricircle_opts, group=tricircle_opt_group)
LOG = log.getLogger(__name__)
class TricircleVlanTypeDriver(type_vlan.VlanTypeDriver):
def __init__(self):
super(TricircleVlanTypeDriver, self).__init__()
# dummy method
def get_mtu(self, physical_network):
return 0
class TricirclePlugin(db_base_plugin_v2.NeutronDbPluginV2,
securitygroups_db.SecurityGroupDbMixin,
external_net_db.External_net_db_mixin,
@ -93,7 +100,11 @@ class TricirclePlugin(db_base_plugin_v2.NeutronDbPluginV2,
super(TricirclePlugin, self).__init__()
LOG.info(_LI("Starting Tricircle Neutron Plugin"))
self.clients = {}
self.xjob_handler = xrpcapi.XJobAPI()
self._setup_rpc()
# use VlanTypeDriver to allocate VLAN for bridge network
self.vlan_driver = TricircleVlanTypeDriver()
self.vlan_driver.initialize()
def _setup_rpc(self):
self.endpoints = []
@ -359,12 +370,15 @@ class TricirclePlugin(db_base_plugin_v2.NeutronDbPluginV2,
return port_list
@staticmethod
def _get_map_filter_ids(key, value, pod_id, top_bottom_map):
if key in ('id', 'network_id', 'device_id'):
id_list = []
for _id in value:
key = '%s_%s' % (pod_id, _id)
if _id in top_bottom_map:
id_list.append(top_bottom_map[_id])
elif key in top_bottom_map:
id_list.append(top_bottom_map[key])
else:
id_list.append(_id)
return id_list
@ -384,7 +398,8 @@ class TricirclePlugin(db_base_plugin_v2.NeutronDbPluginV2,
if filters:
_filters = dict(filters)
for key, value in _filters:
id_list = self._get_map_filter_ids(
key, value, current_pod['pod_id'], top_bottom_map)
if id_list:
_filters[key] = id_list
params.update(_filters)
@ -436,7 +451,14 @@ class TricirclePlugin(db_base_plugin_v2.NeutronDbPluginV2,
for route in routes:
if route['bottom_id']:
bottom_top_map[route['bottom_id']] = route['top_id']
if route['resource_type'] == t_constants.RT_PORT:
key = route['top_id']
else:
# for non port resource, one top resource is
# possible to be mapped to more than one bottom
# resource
key = '%s_%s' % (route['pod_id'], route['top_id'])
top_bottom_map[key] = route['bottom_id']
if limit:
if marker:
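The pod-prefixed mapping scheme introduced in the hunk above can be sketched
on its own. This is an illustrative rewrite, not the plugin code; `RT_PORT`
stands in for the constant from `tricircle.common.constants`:

```python
RT_PORT = 'port'  # stand-in for tricircle.common.constants.RT_PORT

def build_top_bottom_map(routing_entries):
    """Ports map one-to-one, so their top id is the key; other
    resources may map to one bottom resource per pod, so the key is
    prefixed with the pod id."""
    top_bottom_map = {}
    for route in routing_entries:
        if not route['bottom_id']:
            continue
        if route['resource_type'] == RT_PORT:
            key = route['top_id']
        else:
            key = '%s_%s' % (route['pod_id'], route['top_id'])
        top_bottom_map[key] = route['bottom_id']
    return top_bottom_map

# One top router mapped to a bottom router in each of two pods.
entries = [
    {'top_id': 'router_id', 'pod_id': 'pod_1', 'bottom_id': 'r1',
     'resource_type': 'router'},
    {'top_id': 'router_id', 'pod_id': 'pod_2', 'bottom_id': 'r2',
     'resource_type': 'router'},
]
mapping = build_top_bottom_map(entries)
```

Without the pod prefix, the second entry would silently overwrite the first,
which is exactly the ambiguity the patch avoids.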
@ -481,8 +503,8 @@ class TricirclePlugin(db_base_plugin_v2.NeutronDbPluginV2,
_filters = []
if filters:
for key, value in filters.iteritems():
id_list = self._get_map_filter_ids(
key, value, pod['pod_id'], top_bottom_map)
if id_list:
_filters.append({'key': key,
'comparator': 'eq',
@ -583,6 +605,26 @@ class TricirclePlugin(db_base_plugin_v2.NeutronDbPluginV2,
'admin_state_up': True}}
_, net_id = self._prepare_top_element(
t_ctx, q_ctx, project_id, pod, bridge_net_ele, 'network', net_body)
# allocate a VLAN id for bridge network
phy_net = cfg.CONF.tricircle.bridge_physical_network
with q_ctx.session.begin():
query = q_ctx.session.query(ml2_models.NetworkSegment)
query = query.filter_by(network_id=net_id)
if not query.first():
segment = self.vlan_driver.reserve_provider_segment(
q_ctx.session, {'physical_network': phy_net})
record = ml2_models.NetworkSegment(
id=uuidutils.generate_uuid(),
network_id=net_id,
network_type='vlan',
physical_network=phy_net,
segmentation_id=segment['segmentation_id'],
segment_index=0,
is_dynamic=False
)
q_ctx.session.add(record)
subnet_body = {
'subnet': {
'network_id': net_id,
@ -675,7 +717,11 @@ class TricirclePlugin(db_base_plugin_v2.NeutronDbPluginV2,
t_ctx = t_context.get_context_from_neutron_context(q_ctx)
phy_net = cfg.CONF.tricircle.bridge_physical_network
with q_ctx.session.begin():
query = q_ctx.session.query(ml2_models.NetworkSegment)
query = query.filter_by(network_id=t_net['id'])
vlan = query.first().segmentation_id
net_body = {'network': {'tenant_id': project_id,
'name': t_net['id'],
'provider:network_type': 'vlan',
@ -847,4 +893,9 @@ class TricirclePlugin(db_base_plugin_v2.NeutronDbPluginV2,
context, router_id, interface_info)
raise
# TODO(zhiyuan) improve reliability
# this is a casting rpc, so there is no guarantee that this operation
# will succeed; find a way to improve reliability, like introducing a
# job mechanism for async operations
self.xjob_handler.configure_extra_routes(t_ctx, router_id)
return return_info


@ -28,6 +28,7 @@ from neutron.extensions import availability_zone as az_ext
from neutron.ipam import subnet_alloc
import neutronclient.common.exceptions as q_exceptions
from oslo_config import cfg
from oslo_utils import uuidutils
from tricircle.common import constants
@ -46,6 +47,8 @@ TOP_ROUTERPORT = []
TOP_SUBNETPOOLS = []
TOP_SUBNETPOOLPREFIXES = []
TOP_IPALLOCATIONS = []
TOP_VLANALLOCATIONS = []
TOP_SEGMENTS = []
BOTTOM1_NETS = []
BOTTOM1_SUBNETS = []
BOTTOM1_PORTS = []
@ -56,6 +59,7 @@ BOTTOM2_PORTS = []
BOTTOM2_ROUTERS = []
RES_LIST = [TOP_NETS, TOP_SUBNETS, TOP_PORTS, TOP_ROUTERS, TOP_ROUTERPORT,
TOP_SUBNETPOOLS, TOP_SUBNETPOOLPREFIXES, TOP_IPALLOCATIONS,
TOP_VLANALLOCATIONS, TOP_SEGMENTS,
BOTTOM1_NETS, BOTTOM1_SUBNETS, BOTTOM1_PORTS, BOTTOM1_ROUTERS,
BOTTOM2_NETS, BOTTOM2_SUBNETS, BOTTOM2_PORTS, BOTTOM2_ROUTERS]
RES_MAP = {'networks': TOP_NETS,
@ -65,7 +69,9 @@ RES_MAP = {'networks': TOP_NETS,
'routerports': TOP_ROUTERPORT,
'ipallocations': TOP_IPALLOCATIONS,
'subnetpools': TOP_SUBNETPOOLS,
'subnetpoolprefixes': TOP_SUBNETPOOLPREFIXES}
'subnetpoolprefixes': TOP_SUBNETPOOLPREFIXES,
'ml2_vlan_allocations': TOP_VLANALLOCATIONS,
'ml2_network_segments': TOP_SEGMENTS}
class DotDict(dict):
@ -103,10 +109,15 @@ class FakeNeutronClient(object):
for i, port in enumerate(sorted_list):
if port['id'] == params['marker']:
return {'ports': sorted_list[i + 1:]}
if 'filters' in params:
return_list = []
for port in port_list:
is_selected = True
for key, value in params['filters'].iteritems():
if key not in port or port[key] not in value:
is_selected = False
break
if is_selected:
return_list.append(port)
return {'ports': return_list}
return {'ports': port_list}
@ -339,7 +350,16 @@ class FakeQuery(object):
return self.records[0]
def first(self):
if len(self.records) == 0:
return None
else:
return self.records[0]
def update(self, values):
for record in self.records:
for key, value in values.iteritems():
record[key] = value
return len(self.records)
def all(self):
return self.records
@ -422,9 +442,23 @@ class FakeSession(object):
pass
class FakeRPCAPI(object):
def configure_extra_routes(self, context, router_id):
pass
class FakePlugin(plugin.TricirclePlugin):
def __init__(self):
self.set_ipam_backend()
self.xjob_handler = FakeRPCAPI()
self.vlan_driver = plugin.TricircleVlanTypeDriver()
phynet = 'bridge'
cfg.CONF.set_override('bridge_physical_network', phynet,
group='tricircle')
TOP_VLANALLOCATIONS.append(
DotDict({'physical_network': phynet,
'vlan_id': 2000, 'allocated': False}))
def _get_client(self, pod_name):
return FakeClient(pod_name)
@ -585,6 +619,33 @@ class PluginTest(unittest.TestCase):
self.assertEqual([{'id': 'top_id_1', 'name': 'bottom'}], ports2)
self.assertEqual([], ports3)
TOP_ROUTERS.append({'id': 'router_id'})
b_routers_list = [BOTTOM1_ROUTERS, BOTTOM2_ROUTERS]
b_ports_list = [BOTTOM1_PORTS, BOTTOM2_PORTS]
for i in xrange(1, 3):
router_id = 'router_%d_id' % i
b_routers_list[i - 1].append({'id': router_id})
route = {
'top_id': 'router_id',
'pod_id': 'pod_id_%d' % i,
'bottom_id': router_id,
'resource_type': 'router'}
with self.context.session.begin():
core.create_resource(self.context,
models.ResourceRouting, route)
# find port and add device_id
for port in b_ports_list[i - 1]:
port_id = 'bottom_id_%d' % i
if port['id'] == port_id:
port['device_id'] = router_id
ports = fake_plugin.get_ports(neutron_context,
filters={'device_id': ['router_id']})
expected = [{'id': 'top_id_1', 'name': 'bottom',
'device_id': 'router_id'},
{'id': 'top_id_2', 'name': 'bottom',
'device_id': 'router_id'}]
self.assertItemsEqual(expected, ports)
@patch.object(context, 'get_context_from_neutron_context')
@patch.object(db_base_plugin_v2.NeutronDbPluginV2, 'delete_port')
@patch.object(FakeClient, 'delete_ports')
@ -762,9 +823,10 @@ class PluginTest(unittest.TestCase):
'_make_subnet_dict', new=fake_make_subnet_dict)
@patch.object(subnet_alloc.SubnetAllocator, '_lock_subnetpool',
new=mock.Mock)
@patch.object(FakeRPCAPI, 'configure_extra_routes')
@patch.object(FakeClient, 'action_routers')
@patch.object(context, 'get_context_from_neutron_context')
def test_add_interface(self, mock_context, mock_action, mock_rpc):
self._basic_pod_route_setup()
fake_plugin = FakePlugin()
@ -792,6 +854,14 @@ class PluginTest(unittest.TestCase):
self.assertEqual(b_net_id, map_net_id)
self.assertEqual(b_subnet_id, map_subnet_id)
mock_rpc.assert_called_once_with(t_ctx, t_router_id)
for b_net in BOTTOM1_NETS:
if 'provider:segmentation_id' in b_net:
self.assertEqual(TOP_VLANALLOCATIONS[0]['vlan_id'],
b_net['provider:segmentation_id'])
self.assertEqual(True, TOP_VLANALLOCATIONS[0]['allocated'])
self.assertEqual(TOP_VLANALLOCATIONS[0]['vlan_id'],
TOP_SEGMENTS[0]['segmentation_id'])
bridge_port_name = constants.bridge_port_name % (tenant_id,
b_router_id)
@ -853,6 +923,7 @@ class PluginTest(unittest.TestCase):
'_make_subnet_dict', new=fake_make_subnet_dict)
@patch.object(subnet_alloc.SubnetAllocator, '_lock_subnetpool',
new=mock.Mock)
@patch.object(FakeRPCAPI, 'configure_extra_routes', new=mock.Mock)
@patch.object(FakeClient, 'action_routers')
@patch.object(context, 'get_context_from_neutron_context')
def test_add_interface_exception(self, mock_context, mock_action):
@ -912,6 +983,7 @@ class PluginTest(unittest.TestCase):
'_make_subnet_dict', new=fake_make_subnet_dict)
@patch.object(subnet_alloc.SubnetAllocator, '_lock_subnetpool',
new=mock.Mock)
@patch.object(FakeRPCAPI, 'configure_extra_routes', new=mock.Mock)
@patch.object(FakeClient, 'delete_ports')
@patch.object(FakeClient, 'action_routers')
@patch.object(context, 'get_context_from_neutron_context')


@ -0,0 +1,171 @@
# Copyright 2015 Huawei Technologies Co., Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import mock
from mock import patch
import unittest
from tricircle.common import context
import tricircle.db.api as db_api
from tricircle.db import core
from tricircle.db import models
from tricircle.xjob import xmanager
BOTTOM1_NETWORK = []
BOTTOM2_NETWORK = []
BOTTOM1_SUBNET = []
BOTTOM2_SUBNET = []
BOTTOM1_PORT = []
BOTTOM2_PORT = []
BOTTOM1_ROUTER = []
BOTTOM2_ROUTER = []
RES_LIST = [BOTTOM1_SUBNET, BOTTOM2_SUBNET, BOTTOM1_PORT, BOTTOM2_PORT]
RES_MAP = {'pod_1': {'network': BOTTOM1_NETWORK,
'subnet': BOTTOM1_SUBNET,
'port': BOTTOM1_PORT,
'router': BOTTOM1_ROUTER},
'pod_2': {'network': BOTTOM2_NETWORK,
'subnet': BOTTOM2_SUBNET,
'port': BOTTOM2_PORT,
'router': BOTTOM2_ROUTER}}

class FakeXManager(xmanager.XManager):
    def __init__(self):
        self.clients = {'pod_1': FakeClient('pod_1'),
                        'pod_2': FakeClient('pod_2')}

    def _get_client(self, pod_name=None):
        return self.clients[pod_name]


class FakeClient(object):
    def __init__(self, pod_name=None):
        if pod_name:
            self.pod_name = pod_name
        else:
            self.pod_name = 'top'

    def list_resources(self, resource, cxt, filters=None):
        res_list = []
        filters = filters or []
        for res in RES_MAP[self.pod_name][resource]:
            is_selected = True
            for _filter in filters:
                if _filter['key'] not in res:
                    is_selected = False
                    break
                if res[_filter['key']] != _filter['value']:
                    is_selected = False
                    break
            if is_selected:
                res_list.append(res)
        return res_list

    def list_ports(self, cxt, filters=None):
        return self.list_resources('port', cxt, filters)

    def get_subnets(self, cxt, subnet_id):
        return self.list_resources(
            'subnet', cxt,
            [{'key': 'id', 'comparator': 'eq', 'value': subnet_id}])[0]

    def update_routers(self, cxt, *args, **kwargs):
        pass

class XManagerTest(unittest.TestCase):
    def setUp(self):
        core.initialize()
        core.ModelBase.metadata.create_all(core.get_engine())
        # enforce foreign key constraint for sqlite
        core.get_engine().execute('pragma foreign_keys=on')
        self.context = context.Context()
        self.xmanager = FakeXManager()

    @patch.object(FakeClient, 'update_routers')
    def test_configure_extra_routes(self, mock_update):
        top_router_id = 'router_id'
        for i in xrange(1, 3):
            pod_dict = {'pod_id': 'pod_id_%d' % i,
                        'pod_name': 'pod_%d' % i,
                        'az_name': 'az_name_%d' % i}
            db_api.create_pod(self.context, pod_dict)

            network = {'id': 'network_%d_id' % i}
            bridge_network = {'id': 'bridge_network_%d_id' % i}
            router = {'id': 'router_%d_id' % i}
            subnet = {
                'id': 'subnet_%d_id' % i,
                'network_id': network['id'],
                'cidr': '10.0.%d.0/24' % i,
                'gateway_ip': '10.0.%d.1' % i,
            }
            bridge_subnet = {
                'id': 'bridge_subnet_%d_id' % i,
                'network_id': bridge_network['id'],
                'cidr': '100.0.1.0/24',
                'gateway_ip': '100.0.1.%d' % i,
            }
            port = {
                'network_id': network['id'],
                'device_id': router['id'],
                'fixed_ips': [{'subnet_id': subnet['id'],
                               'ip_address': subnet['gateway_ip']}]
            }
            bridge_port = {
                'network_id': bridge_network['id'],
                'device_id': router['id'],
                'fixed_ips': [{'subnet_id': bridge_subnet['id'],
                               'ip_address': bridge_subnet['gateway_ip']}]
            }
            pod_name = 'pod_%d' % i
            RES_MAP[pod_name]['network'].append(network)
            RES_MAP[pod_name]['network'].append(bridge_network)
            RES_MAP[pod_name]['subnet'].append(subnet)
            RES_MAP[pod_name]['subnet'].append(bridge_subnet)
            RES_MAP[pod_name]['port'].append(port)
            RES_MAP[pod_name]['port'].append(bridge_port)
            RES_MAP[pod_name]['router'].append(router)

            route = {'top_id': top_router_id, 'bottom_id': router['id'],
                     'pod_id': pod_dict['pod_id'], 'resource_type': 'router'}
            with self.context.session.begin():
                core.create_resource(self.context, models.ResourceRouting,
                                     route)

        BOTTOM1_NETWORK.append({'id': 'network_3_id'})
        BOTTOM1_SUBNET.append({'id': 'subnet_3_id',
                               'network_id': 'network_3_id',
                               'cidr': '10.0.3.0/24',
                               'gateway_ip': '10.0.3.1'})
        BOTTOM1_PORT.append({'network_id': 'network_3_id',
                             'device_id': 'router_1_id',
                             'fixed_ips': [{'subnet_id': 'subnet_3_id',
                                            'ip_address': '10.0.3.1'}]})

        self.xmanager.configure_extra_routes(self.context,
                                             {'router': top_router_id})
        calls = [mock.call(self.context, 'router_1_id',
                           {'router': {
                               'routes': [{'nexthop': '100.0.1.2',
                                           'destination': '10.0.2.0/24'}]}}),
                 mock.call(self.context, 'router_2_id',
                           {'router': {
                               'routes': [{'nexthop': '100.0.1.1',
                                           'destination': '10.0.1.0/24'},
                                          {'nexthop': '100.0.1.1',
                                           'destination': '10.0.3.0/24'}]}})]
        mock_update.assert_has_calls(calls)


@ -13,13 +13,18 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import netaddr

from oslo_config import cfg
from oslo_log import log as logging
import oslo_messaging as messaging
from oslo_service import periodic_task

from tricircle.common import client
from tricircle.common import constants
from tricircle.common.i18n import _
from tricircle.common.i18n import _LI
import tricircle.db.api as db_api

CONF = cfg.CONF
@ -45,8 +50,16 @@ class XManager(PeriodicTasks):
        self.service_name = service_name
        # self.notifier = rpc.get_notifier(self.service_name, self.host)
        self.additional_endpoints = []
        self.clients = {'top': client.Client()}
        super(XManager, self).__init__()

    def _get_client(self, pod_name=None):
        if not pod_name:
            return self.clients['top']
        if pod_name not in self.clients:
            self.clients[pod_name] = client.Client(pod_name)
        return self.clients[pod_name]

    def periodic_tasks(self, context, raise_on_error=False):
        """Tasks to be run at a periodic interval."""
        return self.run_periodic_tasks(context, raise_on_error=raise_on_error)
@ -114,3 +127,44 @@ class XManager(PeriodicTasks):
        info_text = "xmanager receive payload: %s" % payload
        return info_text

    def configure_extra_routes(self, ctx, payload):
        # TODO(zhiyuan) performance and reliability issue
        # better have a job tracking mechanism
        t_router_id = payload['router']
        b_pods, b_router_ids = zip(*db_api.get_bottom_mappings_by_top_id(
            ctx, t_router_id, constants.RT_ROUTER))

        router_bridge_ip_map = {}
        router_cidr_map = {}
        for i, b_pod in enumerate(b_pods):
            bottom_client = self._get_client(pod_name=b_pod['pod_name'])
            b_interfaces = bottom_client.list_ports(
                ctx, filters=[{'key': 'device_id',
                               'comparator': 'eq',
                               'value': b_router_ids[i]}])
            cidrs = []
            for b_interface in b_interfaces:
                ip = b_interface['fixed_ips'][0]['ip_address']
                bridge_cidr = '100.0.0.0/8'
                if netaddr.IPAddress(ip) in netaddr.IPNetwork(bridge_cidr):
                    # interface on the shared bridge network, record its IP
                    # as the nexthop for routes pointing at this router
                    router_bridge_ip_map[b_router_ids[i]] = ip
                    continue
                b_subnet = bottom_client.get_subnets(
                    ctx, b_interface['fixed_ips'][0]['subnet_id'])
                cidrs.append(b_subnet['cidr'])
            router_cidr_map[b_router_ids[i]] = cidrs

        for i, b_router_id in enumerate(b_router_ids):
            bottom_client = self._get_client(pod_name=b_pods[i]['pod_name'])
            extra_routes = []
            for router_id, cidrs in router_cidr_map.iteritems():
                if router_id == b_router_id:
                    continue
                for cidr in cidrs:
                    extra_routes.append(
                        {'nexthop': router_bridge_ip_map[router_id],
                         'destination': cidr})
            bottom_client.update_routers(ctx, b_router_id,
                                         {'router': {'routes': extra_routes}})
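The route computation at the heart of `configure_extra_routes` can be sketched standalone: each bottom router gets a static route for every tenant CIDR served by a *different* bottom router, with that router's bridge-network IP as nexthop. The sketch below is illustrative only (Python 3, stdlib `ipaddress` in place of `netaddr`; function names are not part of the Tricircle API):

```python
import ipaddress

# Shared provider bridge network used for cross-pod traffic in this patch.
BRIDGE_CIDR = '100.0.0.0/8'


def is_bridge_ip(ip):
    # A router interface is recognised as the bridge interface purely by
    # its address falling inside the bridge CIDR.
    return ipaddress.ip_address(ip) in ipaddress.ip_network(BRIDGE_CIDR)


def build_extra_routes(router_cidr_map, router_bridge_ip_map, b_router_id):
    # router_cidr_map:      bottom router id -> list of its tenant CIDRs
    # router_bridge_ip_map: bottom router id -> its bridge-network IP
    # Returns the 'routes' list to install on b_router_id.
    extra_routes = []
    for router_id, cidrs in router_cidr_map.items():
        if router_id == b_router_id:
            continue  # subnets local to this router need no extra route
        for cidr in cidrs:
            extra_routes.append(
                {'nexthop': router_bridge_ip_map[router_id],
                 'destination': cidr})
    return extra_routes
```

With the topology built in the unit test above (pod_1's router serving 10.0.1.0/24 and 10.0.3.0/24, pod_2's serving 10.0.2.0/24), `build_extra_routes(..., 'router_1_id')` yields a single route to 10.0.2.0/24 via pod_2's bridge IP, matching the first expected `update_routers` call.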