L3 North-South Networking
Implement L3 north-south networking functionality. In our current design, the external network is hosted in one of the bottom pods; VMs hosted in other bottom pods are connected to this external network via a bridge network, using the same physical network as the east-west networking but a different VLAN. Change-Id: I953322737aa97b2d1ebd9a15dc479d7aba753678
This commit is contained in:
parent
1a4709eb49
commit
8dd1b87e26
125
README.md
@@ -156,7 +156,29 @@ the network.
To play cross-pod L3 networking, two nodes are needed. One to run Tricircle
and one bottom pod, the other one to run another bottom pod. Both nodes have
two network interfaces, for management and provider VLAN network. For VLAN
network, the physical network infrastructure should support VLAN tagging.
network, the physical network infrastructure should support VLAN tagging. If
you would like to try north-south networking, too, you should prepare one more
network interface in the second node for the external network. In this guide, the
external network is also vlan type, so the local.conf sample is based on vlan
type external network setup.

> DevStack supports multiple regions sharing the same Keystone, but one recently
> merged [patch](https://github.com/openstack-dev/devstack/commit/923be5f791c78fa9f21b2e217a6b61328c493a38#diff-4f76c30de6fd72bd49643dbcf1007a61)
> introduces a bug to DevStack, so you may have problems deploying Tricircle if
> you use the newest DevStack code. One quick fix is:
```
diff --git a/stack.sh b/stack.sh
index c21ff77..0f8251e 100755
--- a/stack.sh
+++ b/stack.sh
@@ -1024,7 +1024,7 @@ export OS_USER_DOMAIN_ID=default
 export OS_PASSWORD=$ADMIN_PASSWORD
 export OS_PROJECT_NAME=admin
 export OS_PROJECT_DOMAIN_ID=default
-export OS_REGION_NAME=$REGION_NAME
+export OS_REGION_NAME=RegionOne
```
> RegionOne is the region name of the top OpenStack (Tricircle).

### Setup

@@ -168,10 +190,18 @@ In node1,
local.conf, change password in the file if needed.
- 4 Change the following options according to your environment:
```
HOST_IP=10.250.201.24 - change to your management interface ip
TENANT_VLAN_RANGE=2001:3000 - change to VLAN range your physical network supports
PHYSICAL_NETWORK=bridge - change to whatever you like
OVS_PHYSICAL_BRIDGE=br-bridge - change to whatever you like
HOST_IP=10.250.201.24
 - change to your management interface ip.
Q_ML2_PLUGIN_VLAN_TYPE_OPTIONS=(network_vlan_ranges=bridge:2001:3000)
 - the format is (network_vlan_ranges=<physical network name>:<min vlan>:<max vlan>);
   you can change the physical network name, but remember to adapt your change to the
   commands shown in this guide; also, change min vlan and max vlan to match the
   vlan range your physical network supports.
OVS_BRIDGE_MAPPINGS=bridge:br-bridge
 - the format is <physical network name>:<ovs bridge name>; you can change these names,
   but remember to adapt your change to the commands shown in this guide.
Q_USE_PROVIDERNET_FOR_PUBLIC=True
 - use this option if you would like to try L3 north-south networking.
```
Tricircle doesn't currently support security groups, so we use these two options
to disable security group functionality.
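For reference, the two options in question are the ones that appear in the local.conf samples later in this document:

```
Q_USE_SECGROUP=False
LIBVIRT_FIREWALL_DRIVER=nova.virt.firewall.NoopFirewallDriver
```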
@@ -197,14 +227,32 @@ In node2,
local.conf, change password in the file if needed.
- 4 Change the following options according to your environment:
```
HOST_IP=10.250.201.25 - change to your management interface ip
KEYSTONE_SERVICE_HOST=10.250.201.24 - change to management interface ip of node1
KEYSTONE_AUTH_HOST=10.250.201.24 - change to management interface ip of node1
GLANCE_SERVICE_HOST=10.250.201.24 - change to management interface ip of node1
TENANT_VLAN_RANGE=2001:3000 - change to VLAN range your physical network supports
PHYSICAL_NETWORK=bridge - change to whatever you like
OVS_PHYSICAL_BRIDGE=br-bridge - change to whatever you like
HOST_IP=10.250.201.25
 - change to your management interface ip.
KEYSTONE_SERVICE_HOST=10.250.201.24
 - change to management interface ip of node1.
KEYSTONE_AUTH_HOST=10.250.201.24
 - change to management interface ip of node1.
GLANCE_SERVICE_HOST=10.250.201.24
 - change to management interface ip of node1.
Q_ML2_PLUGIN_VLAN_TYPE_OPTIONS=(network_vlan_ranges=bridge:2001:3000,extern:3001:4000)
 - the format is (network_vlan_ranges=<physical network name>:<min vlan>:<max vlan>);
   you can change the physical network name, but remember to adapt your change to the
   commands shown in this guide; also, change min vlan and max vlan to match the
   vlan range your physical network supports.
OVS_BRIDGE_MAPPINGS=bridge:br-bridge,extern:br-ext
 - the format is <physical network name>:<ovs bridge name>; you can change these names,
   but remember to adapt your change to the commands shown in this guide.
Q_USE_PROVIDERNET_FOR_PUBLIC=True
 - use this option if you would like to try L3 north-south networking.
```
In this guide we define two physical networks in node2: one is "bridge" for the
bridge network, the other is "extern" for the external network. If you do not
want to try L3 north-south networking, you can simply remove the "extern" part.
The external network type we use in this guide is vlan; if you want to use another
network type like flat, please refer to the
[DevStack document](http://docs.openstack.org/developer/devstack/).

- 5 Create OVS bridge and attach the VLAN network interface to it
```
sudo ovs-vsctl add-br br-bridge
@@ -259,10 +307,10 @@ curl -X POST http://127.0.0.1:19999/v1.0/pods -H "Content-Type: application/json
- 3 Create network with AZ scheduler hints specified
```
curl -X POST http://127.0.0.1:9696/v2.0/networks -H "Content-Type: application/json" \
-H "X-Auth-Token: b5dc59ebfdb74dbfa2a6351682d10a6e" \
-H "X-Auth-Token: $token" \
-d '{"network": {"name": "net1", "admin_state_up": true, "availability_zone_hints": ["az1"]}}'
curl -X POST http://127.0.0.1:9696/v2.0/networks -H "Content-Type: application/json" \
-H "X-Auth-Token: b5dc59ebfdb74dbfa2a6351682d10a6e" \
-H "X-Auth-Token: $token" \
-d '{"network": {"name": "net2", "admin_state_up": true, "availability_zone_hints": ["az2"]}}'
```
Here we create two networks, bound to Pod1 and Pod2 respectively.
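The `$token` variable used in the curl commands holds a Keystone token. A minimal sketch of obtaining one via the Keystone v3 password flow; the endpoint, user name, and password below are assumptions for a typical DevStack deployment, so adjust them to your environment:

```shell
# Request a token from Keystone v3 and capture the X-Subject-Token
# response header (the endpoint and credentials are placeholders).
token=$(curl -si -X POST http://127.0.0.1:5000/v3/auth/tokens \
  -H "Content-Type: application/json" \
  -d '{"auth": {"identity": {"methods": ["password"],
        "password": {"user": {"name": "admin",
          "domain": {"id": "default"}, "password": "password"}}},
        "scope": {"project": {"name": "admin",
          "domain": {"id": "default"}}}}' \
  | awk 'tolower($1) ~ /x-subject-token:/ {print $2}' | tr -d '\r')
```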
@@ -296,3 +344,52 @@ nova --os-region-name Pod2 get-vnc-console vm2 novnc
Log in to one virtual machine via VNC and you should find it can "ping" the other
virtual machine. Security group functionality is disabled in the bottom OpenStack,
so there is no need to configure security group rules.

### North-South Networking

Before running DevStack in node2, you need to create another ovs bridge for the
external network and then attach a port.
```
sudo ovs-vsctl add-br br-ext
sudo ovs-vsctl add-port br-ext eth2
```

The operations related to north-south networking are listed below:
- 1 Create external network
```
curl -X POST http://127.0.0.1:9696/v2.0/networks -H "Content-Type: application/json" \
-H "X-Auth-Token: $token" \
-d '{"network": {"name": "ext-net", "admin_state_up": true, "router:external": true, "provider:network_type": "vlan", "provider:physical_network": "extern", "availability_zone_hints": ["Pod2"]}}'
```
Note that when creating an external network, we still need to pass the
"availability_zone_hints" parameter, but the value we pass is the name of a pod,
not the name of an availability zone.
> Currently the external network needs to be created before attaching a subnet to the
> router, because the plugin needs to utilize external network information to set up
> the bridge network when handling the interface adding operation. This limitation will
> be removed later.

- 2 Create external subnet
```
neutron subnet-create --name ext-subnet --disable-dhcp ext-net 163.3.124.0/24
```
- 3 Set router external gateway
```
neutron router-gateway-set router ext-net
```
Now a virtual machine in the subnet attached to the router should be able to
"ping" machines in the external network. In our test, we used a hypervisor tool
to directly start a virtual machine in the external network to check the
network connectivity.
- 4 Create floating ip
```
neutron floatingip-create ext-net
```
- 5 Associate floating ip
```
neutron floatingip-list
neutron port-list
neutron floatingip-associate $floatingip_id $port_id
```
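`$floatingip_id` and `$port_id` come from the two list commands above. A sketch of filling them in non-interactively; the `-f value` output format and the 10.0.1.x fixed-ip filter are assumptions about your CLI version and tenant subnet layout:

```shell
# Grab the first floating ip id, and the id of the port whose fixed ip
# is in the (assumed) 10.0.1.0/24 tenant subnet, then associate them.
floatingip_id=$(neutron floatingip-list -f value -c id | head -n 1)
port_id=$(neutron port-list -f value -c id -c fixed_ips \
  | grep '10\.0\.1\.' | awk '{print $1}' | head -n 1)
neutron floatingip-associate $floatingip_id $port_id
```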
Now you should be able to access the virtual machine via its bound floating ip
from the external network.

@@ -37,11 +37,11 @@ PUBLIC_NETWORK_GATEWAY=10.100.100.3
Q_USE_SECGROUP=False
LIBVIRT_FIREWALL_DRIVER=nova.virt.firewall.NoopFirewallDriver
NEUTRON_CREATE_INITIAL_NETWORKS=False
Q_USE_PROVIDERNET_FOR_PUBLIC=True

HOST_IP=10.250.201.24
TENANT_VLAN_RANGE=2001:3000
OVS_PHYSICAL_BRIDGE=br-bridge
PHYSICAL_NETWORK=bridge
Q_ML2_PLUGIN_VLAN_TYPE_OPTIONS=(network_vlan_ranges=bridge:2001:3000)
OVS_BRIDGE_MAPPINGS=bridge:br-bridge

Q_ENABLE_TRICIRCLE=True
enable_plugin tricircle https://github.com/openstack/tricircle/

@@ -35,6 +35,7 @@ PUBLIC_NETWORK_GATEWAY=10.100.100.3
Q_USE_SECGROUP=False
LIBVIRT_FIREWALL_DRIVER=nova.virt.firewall.NoopFirewallDriver
NEUTRON_CREATE_INITIAL_NETWORKS=False
Q_USE_PROVIDERNET_FOR_PUBLIC=True

HOST_IP=10.250.201.25
REGION_NAME=Pod2
@@ -43,9 +44,8 @@ KEYSTONE_SERVICE_HOST=10.250.201.24
KEYSTONE_AUTH_HOST=10.250.201.24
GLANCE_SERVICE_HOST=10.250.201.24

TENANT_VLAN_RANGE=2001:3000
OVS_PHYSICAL_BRIDGE=br-bridge
PHYSICAL_NETWORK=bridge
Q_ML2_PLUGIN_VLAN_TYPE_OPTIONS=(network_vlan_ranges=bridge:2001:3000,extern:3001:4000)
OVS_BRIDGE_MAPPINGS=bridge:br-bridge,extern:br-ext

# Use Neutron instead of nova-network
disable_service n-net

@@ -270,13 +270,15 @@ if [[ "$Q_ENABLE_TRICIRCLE" == "True" ]]; then
    iniset $NEUTRON_CONF DEFAULT core_plugin "$Q_PLUGIN_CLASS"
    iniset $NEUTRON_CONF DEFAULT service_plugins ""
    iniset $NEUTRON_CONF DEFAULT tricircle_db_connection `database_connection_url tricircle`
    iniset $NEUTRON_CONF DEFAULT notify_nova_on_port_data_changes False
    iniset $NEUTRON_CONF DEFAULT notify_nova_on_port_status_changes False
    iniset $NEUTRON_CONF client admin_username admin
    iniset $NEUTRON_CONF client admin_password $ADMIN_PASSWORD
    iniset $NEUTRON_CONF client admin_tenant demo
    iniset $NEUTRON_CONF client auto_refresh_endpoint True
    iniset $NEUTRON_CONF client top_pod_name $REGION_NAME

    iniset $NEUTRON_CONF tricircle bridge_physical_network $PHYSICAL_NETWORK
    iniset $NEUTRON_CONF tricircle bridge_physical_network `echo $OVS_BRIDGE_MAPPINGS | awk -F: '{print $1}'`
fi

elif [[ "$1" == "stack" && "$2" == "extra" ]]; then

@@ -345,6 +345,7 @@ class Client(object):
        network -> body -> none
        subnet -> body -> none
        port -> body -> none
        floatingip -> body -> none
        --------------------------
        :return: a dict containing resource information
        :raises: EndpointNotAvailable
@@ -448,6 +449,7 @@ class Client(object):
        aggregate -> add_host -> aggregate, host -> none
        volume -> set_bootable -> volume, flag -> none
        router -> add_interface -> router, body -> none
        router -> add_gateway -> router, body -> none
        --------------------------
        :return: None
        :raises: EndpointNotAvailable

@@ -40,7 +40,14 @@ R_LIBERTY = 'liberty'
R_MITAKA = 'mitaka'

# l3 bridge networking elements
bridge_subnet_pool_name = 'bridge_subnet_pool'
bridge_net_name = 'bridge_net_%s'
bridge_subnet_name = 'bridge_subnet_%s'
bridge_port_name = 'bridge_port_%s_%s'
ew_bridge_subnet_pool_name = 'ew_bridge_subnet_pool'
ew_bridge_net_name = 'ew_bridge_net_%s'  # project_id
ew_bridge_subnet_name = 'ew_bridge_subnet_%s'  # project_id
ew_bridge_port_name = 'ew_bridge_port_%s_%s'  # project_id b_router_id

ns_bridge_subnet_pool_name = 'ns_bridge_subnet_pool'
ns_bridge_net_name = 'ns_bridge_net_%s'  # project_id
ns_bridge_subnet_name = 'ns_bridge_subnet_%s'  # project_id
# for external gateway port: project_id b_router_id None
# for floating ip port: project_id None b_internal_port_id
ns_bridge_port_name = 'ns_bridge_port_%s_%s_%s'

@@ -164,3 +164,17 @@ class ReservationNotFound(QuotaNotFound):

class OverQuota(TricircleException):
    message = _("Quota exceeded for resources: %(overs)s")


class ExternalNetPodNotSpecify(TricircleException):
    message = "Pod for external network not specified"

    def __init__(self):
        super(ExternalNetPodNotSpecify, self).__init__()


class PodNotFound(NotFound):
    message = "Pod %(pod_name)s could not be found."

    def __init__(self, pod_name):
        super(PodNotFound, self).__init__(pod_name=pod_name)

@@ -121,7 +121,8 @@ class NeutronResourceHandle(ResourceHandle):
                 'port': LIST | CREATE | DELETE | GET,
                 'router': LIST | CREATE | ACTION | UPDATE,
                 'security_group': LIST,
                 'security_group_rule': LIST}
                 'security_group_rule': LIST,
                 'floatingip': LIST | CREATE}

    def _get_client(self, cxt):
        return q_client.Client('2.0',

@@ -96,6 +96,7 @@ def get_bottom_mappings_by_top_id(context, top_id, resource_type):

    :param context: context object
    :param top_id: resource id on top
    :param resource_type: resource type
    :return: a list of tuple (pod dict, bottom_id)
    """
    route_filters = [{'key': 'top_id', 'comparator': 'eq', 'value': top_id},
@@ -114,6 +115,22 @@ def get_bottom_mappings_by_top_id(context, top_id, resource_type):
    return mappings


def get_bottom_id_by_top_id_pod_name(context, top_id, pod_name, resource_type):
    """Get resource bottom id by top id and bottom pod name

    :param context: context object
    :param top_id: resource id on top
    :param pod_name: name of bottom pod
    :param resource_type: resource type
    :return: bottom id of the resource, or None if no mapping is found
    """
    mappings = get_bottom_mappings_by_top_id(context, top_id, resource_type)
    for pod, bottom_id in mappings:
        if pod['pod_name'] == pod_name:
            return bottom_id
    return None


def get_bottom_mappings_by_tenant_pod(context,
                                      tenant_id,
                                      pod_id,

@@ -19,6 +19,7 @@ from oslo_log import log
from oslo_utils import uuidutils

from neutron.api.v2 import attributes
from neutron.common import constants
from neutron.common import exceptions
from neutron.db import common_db_mixin
from neutron.db import db_base_plugin_v2
@@ -33,8 +34,11 @@ from neutron.db import portbindings_db
from neutron.db import securitygroups_db
from neutron.db import sqlalchemyutils
from neutron.extensions import availability_zone as az_ext
from neutron.extensions import external_net
from neutron.extensions import l3
from neutron.plugins.ml2.drivers import type_vlan
import neutron.plugins.ml2.models as ml2_models
import neutronclient.common.exceptions as q_cli_exceptions

from sqlalchemy import sql

@@ -42,6 +46,7 @@ from tricircle.common import az_ag
import tricircle.common.client as t_client
import tricircle.common.constants as t_constants
import tricircle.common.context as t_context
import tricircle.common.exceptions as t_exceptions
from tricircle.common.i18n import _
from tricircle.common.i18n import _LI
import tricircle.common.lock_handle as t_lock
@@ -93,6 +98,7 @@ class TricirclePlugin(db_base_plugin_v2.NeutronDbPluginV2,
                      "security-group",
                      "external-net",
                      "availability_zone",
                      "provider",
                      "network_availability_zone",
                      "router"]

@@ -124,18 +130,24 @@ class TricirclePlugin(db_base_plugin_v2.NeutronDbPluginV2,
        # return self.conn.consume_in_threads()

    @staticmethod
    def _validate_availability_zones(context, az_list):
    def _validate_availability_zones(context, az_list, external):
        if not az_list:
            return
        t_ctx = t_context.get_context_from_neutron_context(context)
        with context.session.begin():
            pods = core.query_resource(t_ctx, models.Pod, [], [])
            az_set = set(az_list)
            known_az_set = set([pod['az_name'] for pod in pods])
            if external:
                known_az_set = set([pod['pod_name'] for pod in pods])
            else:
                known_az_set = set([pod['az_name'] for pod in pods])
            diff = az_set - known_az_set
            if diff:
                raise az_ext.AvailabilityZoneNotFound(
                    availability_zone=diff.pop())
                if external:
                    raise t_exceptions.PodNotFound(pod_name=diff.pop())
                else:
                    raise az_ext.AvailabilityZoneNotFound(
                        availability_zone=diff.pop())

    @staticmethod
    def _extend_availability_zone(net_res, net_db):
@@ -145,21 +157,87 @@ class TricirclePlugin(db_base_plugin_v2.NeutronDbPluginV2,
        common_db_mixin.CommonDbMixin.register_dict_extend_funcs(
            attributes.NETWORKS, ['_extend_availability_zone'])

    @staticmethod
    def _ensure_az_set_for_external_network(req_data):
        external = req_data.get(external_net.EXTERNAL)
        external_set = attributes.is_attr_set(external)
        if not external_set or not external:
            return False
        if az_ext.AZ_HINTS in req_data and req_data[az_ext.AZ_HINTS]:
            return True
        raise t_exceptions.ExternalNetPodNotSpecify()

    def _create_bottom_external_network(self, context, net, top_id):
        t_ctx = t_context.get_context_from_neutron_context(context)
        # use the first pod
        pod_name = net[az_ext.AZ_HINTS][0]
        pod = db_api.get_pod_by_name(t_ctx, pod_name)
        body = {
            'network': {
                'name': top_id,
                'tenant_id': net['tenant_id'],
                'admin_state_up': True,
                external_net.EXTERNAL: True
            }
        }
        provider_attrs = ('provider:network_type', 'provider:segmentation_id',
                          'provider:physical_network')
        for provider_attr in provider_attrs:
            if attributes.is_attr_set(net.get(provider_attr)):
                body['network'][provider_attr] = net[provider_attr]

        self._prepare_bottom_element(
            t_ctx, net['tenant_id'], pod, {'id': top_id},
            t_constants.RT_NETWORK, body)

    def _create_bottom_external_subnet(self, context, subnet, net, top_id):
        t_ctx = t_context.get_context_from_neutron_context(context)
        pod_name = net[az_ext.AZ_HINTS][0]
        pod = db_api.get_pod_by_name(t_ctx, pod_name)
        b_net_id = db_api.get_bottom_id_by_top_id_pod_name(
            t_ctx, net['id'], pod_name, t_constants.RT_NETWORK)
        body = {
            'subnet': {
                'name': top_id,
                'network_id': b_net_id,
                'tenant_id': subnet['tenant_id']
            }
        }
        attrs = ('ip_version', 'cidr', 'gateway_ip', 'allocation_pools',
                 'enable_dhcp')
        for attr in attrs:
            if attributes.is_attr_set(subnet.get(attr)):
                body['subnet'][attr] = subnet[attr]
        self._prepare_bottom_element(
            t_ctx, subnet['tenant_id'], pod, {'id': top_id},
            t_constants.RT_SUBNET, body)

    @property
    def _core_plugin(self):
        return self

    def create_network(self, context, network):
        net_data = network['network']
        res = super(TricirclePlugin, self).create_network(context, network)
        is_external = self._ensure_az_set_for_external_network(net_data)
        if az_ext.AZ_HINTS in net_data:
            self._validate_availability_zones(context,
                                              net_data[az_ext.AZ_HINTS])
            az_hints = az_ext.convert_az_list_to_string(
                net_data[az_ext.AZ_HINTS])
            update_res = super(TricirclePlugin, self).update_network(
                context, res['id'], {'network': {az_ext.AZ_HINTS: az_hints}})
            res[az_ext.AZ_HINTS] = update_res[az_ext.AZ_HINTS]
                                              net_data[az_ext.AZ_HINTS],
                                              is_external)
        with context.session.begin(subtransactions=True):
            res = super(TricirclePlugin, self).create_network(context, network)
            if az_ext.AZ_HINTS in net_data:
                az_hints = az_ext.convert_az_list_to_string(
                    net_data[az_ext.AZ_HINTS])
                update_res = super(TricirclePlugin, self).update_network(
                    context, res['id'],
                    {'network': {az_ext.AZ_HINTS: az_hints}})
                res[az_ext.AZ_HINTS] = update_res[az_ext.AZ_HINTS]
            self._process_l3_create(context, res, net_data)
            # put inside a session so that when bottom operations fail the
            # db can roll back
            if is_external:
                self._create_bottom_external_network(
                    context, net_data, res['id'])
        return res

    def delete_network(self, context, network_id):
@@ -193,7 +271,16 @@ class TricirclePlugin(db_base_plugin_v2.NeutronDbPluginV2,
            context, network_id, network)

    def create_subnet(self, context, subnet):
        return super(TricirclePlugin, self).create_subnet(context, subnet)
        subnet_data = subnet['subnet']
        network = self.get_network(context, subnet_data['network_id'])
        with context.session.begin(subtransactions=True):
            res = super(TricirclePlugin, self).create_subnet(context, subnet)
            # put inside a session so that when bottom operations fail the
            # db can roll back
            if network.get(external_net.EXTERNAL):
                self._create_bottom_external_subnet(
                    context, res, network, res['id'])
        return res

    def delete_subnet(self, context, subnet_id):
        t_ctx = t_context.get_context_from_neutron_context(context)
@@ -571,9 +658,13 @@ class TricirclePlugin(db_base_plugin_v2.NeutronDbPluginV2,
            project_id, pod, ele, _type, body,
            list_resources, create_resources)

    def _get_bridge_subnet_pool_id(self, t_ctx, q_ctx, project_id, pod):
        pool_name = t_constants.bridge_subnet_pool_name
        pool_cidr = '100.0.0.0/8'
    def _get_bridge_subnet_pool_id(self, t_ctx, q_ctx, project_id, pod, is_ew):
        if is_ew:
            pool_name = t_constants.ew_bridge_subnet_pool_name
            pool_cidr = '100.0.0.0/9'
        else:
            pool_name = t_constants.ns_bridge_subnet_pool_name
            pool_cidr = '100.128.0.0/9'
        pool_ele = {'id': pool_name}
        body = {'subnetpool': {'tenant_id': project_id,
                               'name': pool_name,
@@ -589,22 +680,28 @@ class TricirclePlugin(db_base_plugin_v2.NeutronDbPluginV2,

        return pool_id

    def _get_bridge_network_subnet(self, t_ctx, q_ctx,
                                   project_id, pod, pool_id):
        bridge_net_name = t_constants.bridge_net_name % project_id
        bridge_net_ele = {'id': bridge_net_name}
        bridge_subnet_name = t_constants.bridge_subnet_name % project_id
        bridge_subnet_ele = {'id': bridge_subnet_name}
    def _get_bridge_network_subnet(self, t_ctx, q_ctx, project_id, pod,
                                   pool_id, is_ew):
        if is_ew:
            net_name = t_constants.ew_bridge_net_name % project_id
            net_ele = {'id': net_name}
            subnet_name = t_constants.ew_bridge_subnet_name % project_id
            subnet_ele = {'id': subnet_name}
        else:
            net_name = t_constants.ns_bridge_net_name % project_id
            net_ele = {'id': net_name}
            subnet_name = t_constants.ns_bridge_subnet_name % project_id
            subnet_ele = {'id': subnet_name}

        is_admin = q_ctx.is_admin
        q_ctx.is_admin = True

        net_body = {'network': {'tenant_id': project_id,
                                'name': bridge_net_name,
                                'name': net_name,
                                'shared': False,
                                'admin_state_up': True}}
        _, net_id = self._prepare_top_element(
            t_ctx, q_ctx, project_id, pod, bridge_net_ele, 'network', net_body)
            t_ctx, q_ctx, project_id, pod, net_ele, 'network', net_body)

        # allocate a VLAN id for bridge network
        phy_net = cfg.CONF.tricircle.bridge_physical_network
@@ -628,7 +725,7 @@ class TricirclePlugin(db_base_plugin_v2.NeutronDbPluginV2,
        subnet_body = {
            'subnet': {
                'network_id': net_id,
                'name': bridge_subnet_name,
                'name': subnet_name,
                'prefixlen': 24,
                'ip_version': 4,
                'allocation_pools': attributes.ATTR_NOT_SPECIFIED,
@@ -642,7 +739,7 @@ class TricirclePlugin(db_base_plugin_v2.NeutronDbPluginV2,
            }
        }
        _, subnet_id = self._prepare_top_element(
            t_ctx, q_ctx,
            project_id, pod, bridge_subnet_ele, 'subnet', subnet_body)
            project_id, pod, subnet_ele, 'subnet', subnet_body)

        q_ctx.is_admin = is_admin
@@ -692,15 +789,20 @@ class TricirclePlugin(db_base_plugin_v2.NeutronDbPluginV2,
        return port_id

    def _get_bridge_interface(self, t_ctx, q_ctx, project_id, pod,
                              t_net_id, b_router_id):
        bridge_port_name = t_constants.bridge_port_name % (project_id,
                              t_net_id, b_router_id, b_port_id, is_ew):
        if is_ew:
            port_name = t_constants.ew_bridge_port_name % (project_id,
                                                           b_router_id)
        bridge_port_ele = {'id': bridge_port_name}
        else:
            port_name = t_constants.ns_bridge_port_name % (project_id,
                                                           b_router_id,
                                                           b_port_id)
        port_ele = {'id': port_name}
        port_body = {
            'port': {
                'tenant_id': project_id,
                'admin_state_up': True,
                'name': bridge_port_name,
                'name': port_name,
                'network_id': t_net_id,
                'device_id': '',
                'device_owner': '',
@@ -709,11 +811,11 @@ class TricirclePlugin(db_base_plugin_v2.NeutronDbPluginV2,
            }
        }
        _, port_id = self._prepare_top_element(
            t_ctx, q_ctx, project_id, pod, bridge_port_ele, 'port', port_body)
            t_ctx, q_ctx, project_id, pod, port_ele, 'port', port_body)
        return self.get_port(q_ctx, port_id)

    def _get_bottom_bridge_elements(self, q_ctx, project_id,
                                    pod, t_net, t_subnet, t_port):
                                    pod, t_net, is_external, t_subnet, t_port):
        t_ctx = t_context.get_context_from_neutron_context(q_ctx)

        phy_net = cfg.CONF.tricircle.bridge_physical_network
@@ -728,6 +830,8 @@ class TricirclePlugin(db_base_plugin_v2.NeutronDbPluginV2,
                    'provider:physical_network': phy_net,
                    'provider:segmentation_id': vlan,
                    'admin_state_up': True}}
        if is_external:
            net_body['network'][external_net.EXTERNAL] = True
        _, b_net_id = self._prepare_bottom_element(
            t_ctx, project_id, pod, t_net, 'network', net_body)
@@ -737,24 +841,35 @@ class TricirclePlugin(db_base_plugin_v2.NeutronDbPluginV2,
                'cidr': t_subnet['cidr'],
                'enable_dhcp': False,
                'tenant_id': project_id}}
        # In the pod hosting the external network, where the ns bridge network
        # is used as an internal network, we need to allocate ip addresses from
        # .3 because .2 is used by the router gateway port in the pod hosting
        # servers, where the ns bridge network is used as an external network.
        # if t_subnet['name'].startswith('ns_bridge_') and not is_external:
        #     prefix = t_subnet['cidr'][:t_subnet['cidr'].rindex('.')]
        #     subnet_body['subnet']['allocation_pools'] = [
        #         {'start': prefix + '.3', 'end': prefix + '.254'}]
        _, b_subnet_id = self._prepare_bottom_element(
            t_ctx, project_id, pod, t_subnet, 'subnet', subnet_body)

        port_body = {
            'port': {
                'tenant_id': project_id,
                'admin_state_up': True,
                'name': t_port['id'],
                'network_id': b_net_id,
                'fixed_ips': [
                    {'subnet_id': b_subnet_id,
                     'ip_address': t_port['fixed_ips'][0]['ip_address']}]
        if t_port:
            port_body = {
                'port': {
                    'tenant_id': project_id,
                    'admin_state_up': True,
                    'name': t_port['id'],
                    'network_id': b_net_id,
                    'fixed_ips': [
                        {'subnet_id': b_subnet_id,
                         'ip_address': t_port['fixed_ips'][0]['ip_address']}]
                }
            }
            }
        }
        is_new, b_port_id = self._prepare_bottom_element(
            t_ctx, project_id, pod, t_port, 'port', port_body)
            is_new, b_port_id = self._prepare_bottom_element(
                t_ctx, project_id, pod, t_port, 'port', port_body)

        return is_new, b_port_id
            return is_new, b_port_id, b_subnet_id, b_net_id
        else:
            return None, None, b_subnet_id, b_net_id

    # NOTE(zhiyuan) the original implementation in l3_db uses port returned
    # from get_port in core plugin to check; change it to base plugin, since
@ -778,6 +893,85 @@ class TricirclePlugin(db_base_plugin_v2.NeutronDbPluginV2,
|
||||
query = context.session.query(l3_db.RouterPort)
|
||||
query.filter_by(port_id=port_id, router_id=router_id).delete()
|
||||
|
||||
    def _update_bottom_router_gateway(self, context, router_id, router_data):
        ext_net_id = router_data[l3.EXTERNAL_GW_INFO].get('network_id')
        if ext_net_id:
            # add router gateway
            t_ctx = t_context.get_context_from_neutron_context(context)
            network = self.get_network(context, ext_net_id)
            pod_name = network[az_ext.AZ_HINTS][0]
            pod = db_api.get_pod_by_name(t_ctx, pod_name)
            b_net_id = db_api.get_bottom_id_by_top_id_pod_name(
                t_ctx, ext_net_id, pod_name, t_constants.RT_NETWORK)
            t_router = self._get_router(context, router_id)
            body = {'router': {'name': router_id,
                               'distributed': False}}
            _, b_router_id = self._prepare_bottom_element(
                t_ctx, t_router['tenant_id'], pod, t_router,
                t_constants.RT_ROUTER, body)
            b_client = self._get_client(pod_name)
            t_info = router_data[l3.EXTERNAL_GW_INFO]
            b_info = {'network_id': b_net_id}
            if 'enable_snat' in t_info:
                b_info['enable_snat'] = t_info['enable_snat']
            if 'external_fixed_ips' in t_info:
                fixed_ips = []
                for ip in t_info['external_fixed_ips']:
                    t_subnet_id = ip['subnet_id']
                    b_subnet_id = db_api.get_bottom_id_by_top_id_pod_name(
                        t_ctx, t_subnet_id, pod_name,
                        t_constants.RT_SUBNET)
                    fixed_ips.append({'subnet_id': b_subnet_id,
                                      'ip_address': ip['ip_address']})
                b_info['external_fixed_ips'] = fixed_ips
            b_client.action_routers(t_ctx, 'add_gateway', b_router_id, b_info)

            # create bridge network and attach to router
            t_pod = db_api.get_top_pod(t_ctx)
            project_id = t_router['tenant_id']
            admin_project_id = 'admin_project_id'
            pool_id = self._get_bridge_subnet_pool_id(
                t_ctx, context, admin_project_id, t_pod, False)
            t_bridge_net, t_bridge_subnet = self._get_bridge_network_subnet(
                t_ctx, context, project_id, t_pod, pool_id, False)
            (_, _, b_bridge_subnet_id,
             b_bridge_net_id) = self._get_bottom_bridge_elements(
                context, project_id, pod, t_bridge_net, False, t_bridge_subnet,
                None)
            is_attach = False
            interfaces = b_client.list_ports(
                t_ctx, filters=[{'key': 'device_id',
                                 'comparator': 'eq',
                                 'value': b_router_id}])
            for interface in interfaces:
                for fixed_ip in interface['fixed_ips']:
                    if fixed_ip['subnet_id'] == b_bridge_subnet_id:
                        is_attach = True
                        break
                if is_attach:
                    break
            if not is_attach:
                b_client.action_routers(t_ctx, 'add_interface', b_router_id,
                                        {'subnet_id': b_bridge_subnet_id})

    def update_router(self, context, router_id, router):
        router_data = router['router']
        # TODO(zhiyuan) solve ip address conflict issue
        # if user creates floating ip before set router gateway, we may trigger
        # ip address conflict here. let's say external cidr is 163.3.124.0/24,
        # creating floating ip before setting router gateway, the gateway ip
        # will be 163.3.124.3 since 163.3.124.2 is used by floating ip, however
        # in the bottom pod floating ip is not created when creating floating
        # ip on top, so the gateway ip in the bottom pod is still 163.3.124.2,
        # thus conflict may occur.
        #
        # before this issue is solved, user should set router gateway before
        # create floating ip.
        if attributes.is_attr_set(router_data.get(l3.EXTERNAL_GW_INFO)):
            self._update_bottom_router_gateway(context, router_id, router_data)
        return super(TricirclePlugin, self).update_router(context, router_id,
                                                          router)

    def add_router_interface(self, context, router_id, interface_info):
        t_ctx = t_context.get_context_from_neutron_context(context)
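The gateway translation above maps the top router's `external_gateway_info` onto bottom-pod ids. A minimal standalone sketch of that mapping, with a stand-in `lookup` callable in place of the real `db_api` id-mapping query (function and variable names here are illustrative, not from the commit):

```python
def build_bottom_gw_info(t_info, b_net_id, lookup):
    # lookup maps a top subnet id to the corresponding bottom subnet id
    b_info = {'network_id': b_net_id}
    if 'enable_snat' in t_info:
        b_info['enable_snat'] = t_info['enable_snat']
    if 'external_fixed_ips' in t_info:
        b_info['external_fixed_ips'] = [
            {'subnet_id': lookup(ip['subnet_id']),
             'ip_address': ip['ip_address']}
            for ip in t_info['external_fixed_ips']]
    return b_info


t_info = {'enable_snat': False,
          'external_fixed_ips': [{'subnet_id': 'top-sub',
                                  'ip_address': '163.3.124.10'}]}
b_info = build_bottom_gw_info(t_info, 'bottom-net',
                              {'top-sub': 'bottom-sub'}.get)
assert b_info == {'network_id': 'bottom-net', 'enable_snat': False,
                  'external_fixed_ips': [{'subnet_id': 'bottom-sub',
                                          'ip_address': '163.3.124.10'}]}
```

Only keys present on the top side are copied, so `add_gateway` on the bottom pod sees the same optional fields the user set on top.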
@@ -790,10 +984,7 @@ class TricirclePlugin(db_base_plugin_v2.NeutronDbPluginV2,
        az, t_net = self._judge_network_across_pods(
            context, interface_info, add_by_port)
        b_pod, b_az = az_ag.get_pod_by_az_tenant(t_ctx, az, project_id)
        t_pod = None
        for pod in db_api.list_pods(t_ctx):
            if not pod['az_name']:
                t_pod = pod
        t_pod = db_api.get_top_pod(t_ctx)
        assert t_pod

        router_body = {'router': {'name': router_id,
@@ -801,18 +992,52 @@ class TricirclePlugin(db_base_plugin_v2.NeutronDbPluginV2,
        _, b_router_id = self._prepare_bottom_element(
            t_ctx, project_id, b_pod, router, 'router', router_body)

        # bridge network for E-W networking
        pool_id = self._get_bridge_subnet_pool_id(
            t_ctx, context, admin_project_id, t_pod)
            t_ctx, context, admin_project_id, t_pod, True)
        t_bridge_net, t_bridge_subnet = self._get_bridge_network_subnet(
            t_ctx, context, project_id, t_pod, pool_id)
            t_ctx, context, project_id, t_pod, pool_id, True)
        t_bridge_port = self._get_bridge_interface(
            t_ctx, context, project_id, t_pod, t_bridge_net['id'],
            b_router_id)

        is_new, b_bridge_port_id = self._get_bottom_bridge_elements(
            context, project_id, b_pod, t_bridge_net, t_bridge_subnet,
            b_router_id, None, True)
        is_new, b_bridge_port_id, _, _ = self._get_bottom_bridge_elements(
            context, project_id, b_pod, t_bridge_net, False, t_bridge_subnet,
            t_bridge_port)

        # bridge network for N-S networking
        ext_nets = self.get_networks(context, {external_net.EXTERNAL: [True]})
        if not ext_nets:
            need_ns_bridge = False
        else:
            ext_net_pod_names = set(
                [ext_net[az_ext.AZ_HINTS][0] for ext_net in ext_nets])
            if b_pod['pod_name'] in ext_net_pod_names:
                need_ns_bridge = False
            else:
                need_ns_bridge = True
        if need_ns_bridge:
            pool_id = self._get_bridge_subnet_pool_id(
                t_ctx, context, admin_project_id, t_pod, False)
            t_bridge_net, t_bridge_subnet = self._get_bridge_network_subnet(
                t_ctx, context, project_id, t_pod, pool_id, False)
            (_, _, b_bridge_subnet_id,
             b_bridge_net_id) = self._get_bottom_bridge_elements(
                context, project_id, b_pod, t_bridge_net, True,
                t_bridge_subnet, None)

            ns_bridge_port = self._get_bridge_interface(
                t_ctx, context, project_id, t_pod, t_bridge_net['id'],
                b_router_id, None, False)

            client = self._get_client(b_pod['pod_name'])
            # add gateway is update operation, can run multiple times
            gateway_ip = ns_bridge_port['fixed_ips'][0]['ip_address']
            client.action_routers(
                t_ctx, 'add_gateway', b_router_id,
                {'network_id': b_bridge_net_id,
                 'external_fixed_ips': [{'subnet_id': b_bridge_subnet_id,
                                         'ip_address': gateway_ip}]})

        # NOTE(zhiyuan) subnet pool, network, subnet are reusable resource,
        # we decide not to remove them when operation fails, so before adding
        # router interface, no clearing is needed.
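The N-S bridge decision above reduces to a small predicate: a bridge is only needed when some pod hosts an external network but it is not the pod the router's interface lands in. A standalone sketch of that logic (function name and plain-dict inputs are illustrative):

```python
def need_ns_bridge(ext_nets, b_pod_name, az_hint_key='availability_zone_hints'):
    # No external network anywhere: nothing to bridge to.
    if not ext_nets:
        return False
    # Pods hosting an external network; the first AZ hint names the pod.
    ext_net_pod_names = set(net[az_hint_key][0] for net in ext_nets)
    # Bridge only when the bottom pod has no local external network.
    return b_pod_name not in ext_net_pod_names


ext_nets = [{'availability_zone_hints': ['pod_2']}]
assert need_ns_bridge(ext_nets, 'pod_2') is False  # external net is local
assert need_ns_bridge(ext_nets, 'pod_1') is True   # traffic must cross pods
assert need_ns_bridge([], 'pod_1') is False        # no external network at all
```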
@@ -899,3 +1124,148 @@ class TricirclePlugin(db_base_plugin_v2.NeutronDbPluginV2,
        # job mechanism for async operations
        self.xjob_handler.configure_extra_routes(t_ctx, router_id)
        return return_info

    def create_floatingip(self, context, floatingip):
        # create bottom fip when associating fixed ip
        return super(TricirclePlugin, self).create_floatingip(
            context, floatingip,
            initial_status=constants.FLOATINGIP_STATUS_DOWN)

    @staticmethod
    def _safe_create_bottom_floatingip(t_ctx, client, fip_net_id,
                                       fip_address, port_id):
        try:
            client.create_floatingips(
                t_ctx, {'floatingip': {'floating_network_id': fip_net_id,
                                       'floating_ip_address': fip_address,
                                       'port_id': port_id}})
        except q_cli_exceptions.IpAddressInUseClient:
            fips = client.list_floatingips(t_ctx,
                                           [{'key': 'floating_ip_address',
                                             'comparator': 'eq',
                                             'value': fip_address}])
            # NOTE(zhiyuan) if the internal port associated with the existing
            # fip is what we expect, just ignore this exception
            if fips[0].get('port_id') == port_id:
                pass
            else:
                raise

    @staticmethod
    def _disassociate_floatingip(context, _id):
        with context.session.begin():
            fip_qry = context.session.query(l3_db.FloatingIP)
            floating_ips = fip_qry.filter_by(id=_id)
            for floating_ip in floating_ips:
                floating_ip.update({'fixed_port_id': None,
                                    'fixed_ip_address': None,
                                    'router_id': None})

    def update_floatingip(self, context, _id, floatingip):
        res = super(TricirclePlugin, self).update_floatingip(
            context, _id, floatingip)

        try:
            t_ctx = t_context.get_context_from_neutron_context(context)

            fip = floatingip['floatingip']
            floatingip_db = self._get_floatingip(context, _id)
            int_port_id = fip['port_id']
            project_id = floatingip_db['tenant_id']
            fip_address = floatingip_db['floating_ip_address']
            mappings = db_api.get_bottom_mappings_by_top_id(
                t_ctx, int_port_id, t_constants.RT_PORT)
            if not mappings:
                int_port = self.get_port(context, int_port_id)
                int_network = self.get_network(context, int_port['network_id'])
                if az_ext.AZ_HINTS not in int_network:
                    raise Exception('Cross pods L3 networking not support')
                self._validate_availability_zones(
                    context, int_network[az_ext.AZ_HINTS], False)
                int_net_pod, _ = az_ag.get_pod_by_az_tenant(
                    t_ctx, int_network[az_ext.AZ_HINTS][0], project_id)
                b_int_net_id = db_api.get_bottom_id_by_top_id_pod_name(
                    t_ctx, int_network['id'], int_net_pod['pod_name'],
                    t_constants.RT_NETWORK)
                b_int_port_body = {
                    'port': {
                        'tenant_id': project_id,
                        'admin_state_up': True,
                        'name': int_port['id'],
                        'network_id': b_int_net_id,
                        'mac_address': int_port['mac_address'],
                        'fixed_ips': [{'ip_address': int_port['fixed_ips'][0][
                            'ip_address']}]
                    }
                }
                # TODO(zhiyuan) handle DHCP port ip address conflict problem
                _, b_int_port_id = self._prepare_bottom_element(
                    t_ctx, project_id, int_net_pod, int_port,
                    t_constants.RT_PORT, b_int_port_body)
            else:
                int_net_pod, b_int_port_id = mappings[0]
            ext_net_id = floatingip_db['floating_network_id']
            ext_net = self.get_network(context, ext_net_id)
            ext_net_pod = db_api.get_pod_by_name(t_ctx,
                                                 ext_net[az_ext.AZ_HINTS][0])

            # external network and internal network are in the same pod, no
            # need to use bridge network.
            if int_net_pod['pod_name'] == ext_net_pod['pod_name']:
                client = self._get_client(int_net_pod['pod_name'])
                b_ext_net_id = db_api.get_bottom_id_by_top_id_pod_name(
                    t_ctx, ext_net_id, ext_net_pod['pod_name'],
                    t_constants.RT_NETWORK)
                self._safe_create_bottom_floatingip(
                    t_ctx, client, b_ext_net_id, fip_address, b_int_port_id)

                return res

            # below handle the case that external network and internal network
            # are in different pods
            int_client = self._get_client(int_net_pod['pod_name'])
            ext_client = self._get_client(ext_net_pod['pod_name'])
            ns_bridge_net_name = t_constants.ns_bridge_net_name % project_id
            ns_bridge_net = self.get_networks(
                context, {'name': [ns_bridge_net_name]})[0]
            int_bridge_net_id = db_api.get_bottom_id_by_top_id_pod_name(
                t_ctx, ns_bridge_net['id'], int_net_pod['pod_name'],
                t_constants.RT_NETWORK)
            ext_bridge_net_id = db_api.get_bottom_id_by_top_id_pod_name(
                t_ctx, ns_bridge_net['id'], ext_net_pod['pod_name'],
                t_constants.RT_NETWORK)

            t_pod = db_api.get_top_pod(t_ctx)
            t_ns_bridge_port = self._get_bridge_interface(
                t_ctx, context, project_id, t_pod, ns_bridge_net['id'],
                None, b_int_port_id, False)
            port_body = {
                'port': {
                    'tenant_id': project_id,
                    'admin_state_up': True,
                    'name': 'ns_bridge_port',
                    'network_id': ext_bridge_net_id,
                    'fixed_ips': [{'ip_address': t_ns_bridge_port[
                        'fixed_ips'][0]['ip_address']}]
                }
            }
            _, b_ns_bridge_port_id = self._prepare_bottom_element(
                t_ctx, project_id, ext_net_pod, t_ns_bridge_port,
                t_constants.RT_PORT, port_body)
            b_ext_net_id = db_api.get_bottom_id_by_top_id_pod_name(
                t_ctx, ext_net_id, ext_net_pod['pod_name'],
                t_constants.RT_NETWORK)
            self._safe_create_bottom_floatingip(
                t_ctx, ext_client, b_ext_net_id, fip_address,
                b_ns_bridge_port_id)
            self._safe_create_bottom_floatingip(
                t_ctx, int_client, int_bridge_net_id,
                t_ns_bridge_port['fixed_ips'][0]['ip_address'], b_int_port_id)

            return res
        except Exception:
            # NOTE(zhiyuan) currently we just handle floating ip association
            # in this function, so when exception occurs, we update floating
            # ip object to unset fixed_port_id, fixed_ip_address, router_id
            self._disassociate_floatingip(context, _id)
            raise
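`_safe_create_bottom_floatingip` makes bottom floating-ip creation retry-safe: an address-in-use error is swallowed only when the existing floating ip already targets the expected internal port. A minimal standalone sketch of that idempotency pattern, using a toy in-memory client and exception class in place of the real neutronclient (all names here are stand-ins):

```python
class IpAddressInUse(Exception):
    pass


class FakeFipClient:
    def __init__(self):
        self.fips = []

    def create_floatingip(self, net_id, address, port_id):
        if any(f['floating_ip_address'] == address for f in self.fips):
            raise IpAddressInUse()
        self.fips.append({'floating_network_id': net_id,
                          'floating_ip_address': address,
                          'port_id': port_id})


def safe_create_fip(client, net_id, address, port_id):
    try:
        client.create_floatingip(net_id, address, port_id)
    except IpAddressInUse:
        existing = [f for f in client.fips
                    if f['floating_ip_address'] == address][0]
        # idempotent only if the existing fip targets the same port
        if existing['port_id'] != port_id:
            raise


client = FakeFipClient()
safe_create_fip(client, 'net-1', '163.3.124.5', 'port-a')
safe_create_fip(client, 'net-1', '163.3.124.5', 'port-a')  # retry: no error
try:
    safe_create_fip(client, 'net-1', '163.3.124.5', 'port-b')
    conflict = False
except IpAddressInUse:
    conflict = True
assert conflict and len(client.fips) == 1
```

This is why the cross-pod path can call the helper twice (once in the external pod, once in the internal pod) without tracking whether an earlier attempt partially succeeded.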
@@ -253,17 +253,14 @@ class ServerController(rest.RestController):
        return bottom_port_id

    def _handle_port(self, context, pod, port):
        mappings = db_api.get_bottom_mappings_by_top_id(context, port['id'],
                                                        constants.RT_PORT)
        if mappings:
            # TODO(zhiyuan) judge return or raise exception
            # NOTE(zhiyuan) user provides a port that already has mapped
            # bottom port, return bottom id or raise an exception?
            return mappings[0][1]
        top_client = self._get_client()
        # NOTE(zhiyuan) at this moment, bottom port has not been created,
        # neutron plugin directly retrieves information from top, so the
        # network id and subnet id in this port dict are safe to use
        # NOTE(zhiyuan) at this moment, it is possible that the bottom port has
        # been created. if user creates a port and associate it with a floating
        # ip before booting a vm, tricircle plugin will create the bottom port
        # first in order to setup floating ip in bottom pod. but it is still
        # safe for us to use network id and subnet id in the returned port dict
        # since tricircle plugin will do id mapping and guarantee ids in the
        # dict are top id.
        net = top_client.get_networks(context, port['network_id'])
        subnets = []
        for fixed_ip in port['fixed_ips']:
@@ -283,6 +280,20 @@ class ServerController(rest.RestController):
            body[field] = origin[field]
        return body

    @staticmethod
    def _remove_fip_info(servers):
        for server in servers:
            if 'addresses' not in server:
                continue
            for addresses in server['addresses'].values():
                remove_index = -1
                for i, address in enumerate(addresses):
                    if address.get('OS-EXT-IPS:type') == 'floating':
                        remove_index = i
                        break
                if remove_index >= 0:
                    del addresses[remove_index]

    def _get_all(self, context):
        ret = []
        pods = db_api.list_pods(context)
@@ -290,7 +301,9 @@ class ServerController(rest.RestController):
            if not pod['az_name']:
                continue
            client = self._get_client(pod['pod_name'])
            ret.extend(client.list_servers(context))
            servers = client.list_servers(context)
            self._remove_fip_info(servers)
            ret.extend(servers)
        return ret

    @expose(generic=True, template='json')
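`_remove_fip_info` strips floating addresses from the bottom pods' server records before they are aggregated, since the bottom-pod floating ips are an implementation detail of the bridge setup. Its behavior can be exercised standalone (the sample server record below is made up):

```python
def remove_fip_info(servers):
    # drop the floating address from each network's address list
    for server in servers:
        if 'addresses' not in server:
            continue
        for addresses in server['addresses'].values():
            remove_index = -1
            for i, address in enumerate(addresses):
                if address.get('OS-EXT-IPS:type') == 'floating':
                    remove_index = i
                    break
            if remove_index >= 0:
                del addresses[remove_index]


servers = [{'addresses': {'net1': [
    {'addr': '10.0.0.5', 'OS-EXT-IPS:type': 'fixed'},
    {'addr': '163.3.124.5', 'OS-EXT-IPS:type': 'floating'}]}}]
remove_fip_info(servers)
assert servers[0]['addresses']['net1'] == [
    {'addr': '10.0.0.5', 'OS-EXT-IPS:type': 'fixed'}]
```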
@@ -13,8 +13,16 @@
# License for the specific language governing permissions and limitations
# under the License.

from neutron.common import config
from oslo_config import cfg
from oslotest import base


class TestCase(base.BaseTestCase):
    """Test case base class for all unit tests."""
    def setUp(self):
        # neutron has a configuration option "bind_port" which conflicts with
        # tricircle configuration option, so unregister this option before
        # running tricircle tests
        cfg.CONF.unregister_opts(config.core_opts)
        super(TestCase, self).setUp()
tricircle/tests/unit/network/__init__.py (new file, 0 lines)
@@ -20,19 +20,24 @@ from mock import patch
import unittest

from sqlalchemy.orm import exc
from sqlalchemy.sql import elements

import neutron.common.config as q_config
from neutron.db import db_base_plugin_common
from neutron.db import db_base_plugin_v2
from neutron.db import ipam_non_pluggable_backend
from neutron.db import l3_db
from neutron.extensions import availability_zone as az_ext
from neutron.ipam import subnet_alloc
import neutronclient.common.exceptions as q_exceptions

from oslo_config import cfg
from oslo_serialization import jsonutils
from oslo_utils import uuidutils

from tricircle.common import constants
from tricircle.common import context
from tricircle.common import exceptions
import tricircle.db.api as db_api
from tricircle.db import core
from tricircle.db import models
@@ -49,6 +54,8 @@ TOP_SUBNETPOOLPREFIXES = []
TOP_IPALLOCATIONS = []
TOP_VLANALLOCATIONS = []
TOP_SEGMENTS = []
TOP_EXTNETS = []
TOP_FLOATINGIPS = []
BOTTOM1_NETS = []
BOTTOM1_SUBNETS = []
BOTTOM1_PORTS = []
@@ -59,7 +66,7 @@ BOTTOM2_PORTS = []
BOTTOM2_ROUTERS = []
RES_LIST = [TOP_NETS, TOP_SUBNETS, TOP_PORTS, TOP_ROUTERS, TOP_ROUTERPORT,
            TOP_SUBNETPOOLS, TOP_SUBNETPOOLPREFIXES, TOP_IPALLOCATIONS,
            TOP_VLANALLOCATIONS, TOP_SEGMENTS,
            TOP_VLANALLOCATIONS, TOP_SEGMENTS, TOP_EXTNETS, TOP_FLOATINGIPS,
            BOTTOM1_NETS, BOTTOM1_SUBNETS, BOTTOM1_PORTS, BOTTOM1_ROUTERS,
            BOTTOM2_NETS, BOTTOM2_SUBNETS, BOTTOM2_PORTS, BOTTOM2_ROUTERS]
RES_MAP = {'networks': TOP_NETS,
@@ -71,7 +78,9 @@ RES_MAP = {'networks': TOP_NETS,
           'subnetpools': TOP_SUBNETPOOLS,
           'subnetpoolprefixes': TOP_SUBNETPOOLPREFIXES,
           'ml2_vlan_allocations': TOP_VLANALLOCATIONS,
           'ml2_network_segments': TOP_SEGMENTS}
           'ml2_network_segments': TOP_SEGMENTS,
           'externalnetworks': TOP_EXTNETS,
           'floatingips': TOP_FLOATINGIPS}


class DotDict(dict):
@@ -151,6 +160,15 @@ class FakeClient(object):
    def get_native_client(self, resource, ctx):
        return self.client

    def _allocate_ip(self, port_body):
        subnet_list = self._res_map[self.pod_name]['subnet']
        for subnet in subnet_list:
            if subnet['network_id'] == port_body['port']['network_id']:
                cidr = subnet['cidr']
                ip = cidr[:cidr.rindex('.')] + '.5'
                return {'subnet_id': subnet['id'],
                        'ip_address': ip}

    def create_resources(self, _type, ctx, body):
        if _type == 'port':
            res_list = self._res_map[self.pod_name][_type]
@@ -164,11 +182,17 @@ class FakeClient(object):
                        fixed_ip['ip_address'])
            fixed_ips = body[_type].get('fixed_ips', [])
            for fixed_ip in fixed_ips:
                # just skip ip address check when subnet_id not given
                # currently test case doesn't need to cover such situation
                if 'subnet_id' not in fixed_ip:
                    continue
                if fixed_ip['ip_address'] in subnet_ips_map.get(
                        fixed_ip['subnet_id'], set()):
                    raise q_exceptions.IpAddressInUseClient()
            if 'device_id' not in body[_type]:
                body[_type]['device_id'] = ''
            if 'fixed_ips' not in body[_type]:
                body[_type]['fixed_ips'] = [self._allocate_ip(body)]
        if 'id' not in body[_type]:
            body[_type]['id'] = uuidutils.generate_uuid()
        res_list = self._res_map[self.pod_name][_type]
@@ -176,6 +200,9 @@ class FakeClient(object):
        res_list.append(res)
        return res

    def create_ports(self, ctx, body):
        return self.create_resources('port', ctx, body)

    def list_ports(self, ctx, filters=None):
        filter_dict = {}
        filters = filters or []
@@ -197,7 +224,22 @@ class FakeClient(object):
        if index != -1:
            del self._res_map[self.pod_name]['port'][index]

    def add_gateway_routers(self, ctx, *args, **kwargs):
        # only for mock purpose
        pass

    def add_interface_routers(self, ctx, *args, **kwargs):
        # only for mock purpose
        pass

    def action_routers(self, ctx, action, *args, **kwargs):
        # divide into two functions for test purpose
        if action == 'add_interface':
            return self.add_interface_routers(ctx, args, kwargs)
        elif action == 'add_gateway':
            return self.add_gateway_routers(ctx, args, kwargs)

    def create_floatingips(self, ctx, body):
        # only for mock purpose
        pass
@@ -292,14 +334,30 @@ class FakeQuery(object):
        return FakeQuery(filtered_list, self.table)

    def filter(self, *criteria):
        if hasattr(criteria[0].right, 'value'):
            keys = [e.left.name for e in criteria]
            values = [e.right.value for e in criteria]
        _filter = []
        keys = []
        values = []
        for e in criteria:
            if not isinstance(e.right, elements.Null):
                _filter.append(e)
            else:
                if e.left.name == 'network_id' and (
                        e.expression.operator.__name__ == 'isnot'):
                    keys.append('router:external')
                    values.append(True)
        if not _filter:
        if not keys:
            return FakeQuery(self.records, self.table)
        else:
            return self._handle_filter(keys, values)
        if hasattr(_filter[0].right, 'value'):
            keys.extend([e.left.name for e in _filter])
            values.extend([e.right.value for e in _filter])
        else:
            keys = [e.expression.left.name for e in criteria]
            values = [
                e.expression.right.element.clauses[0].value for e in criteria]
        if criteria[0].expression.operator.__name__ == 'lt':
            keys.extend([e.expression.left.name for e in _filter])
            values.extend(
                [e.expression.right.element.clauses[0].value for e in _filter])
        if _filter[0].expression.operator.__name__ == 'lt':
            return self._handle_pagination_by_id(values[0])
        else:
            return self._handle_filter(keys, values)
@@ -376,7 +434,11 @@ class FakeSession(object):
    def __exit__(self, type, value, traceback):
        pass

    def begin(self, subtransactions=False):
    @property
    def is_active(self):
        return True

    def begin(self, subtransactions=False, nested=True):
        return FakeSession.WithWrapper()

    def begin_nested(self):
@@ -414,6 +476,11 @@ class FakeSession(object):
            model_dict['port'] = port
            port.update(model_dict)
            break
        if model_obj.__tablename__ == 'externalnetworks':
            for net in TOP_NETS:
                if net['id'] == model_dict['network_id']:
                    net['external'] = True
                    break
        link_models(model_obj, model_dict,
                    'routerports', 'router_id',
                    'routers', 'id', 'attached_ports')
@@ -456,20 +523,42 @@ class FakePlugin(plugin.TricirclePlugin):
        phynet = 'bridge'
        cfg.CONF.set_override('bridge_physical_network', phynet,
                              group='tricircle')
        TOP_VLANALLOCATIONS.append(
            DotDict({'physical_network': phynet,
                     'vlan_id': 2000, 'allocated': False}))
        for vlan in (2000, 2001):
            TOP_VLANALLOCATIONS.append(
                DotDict({'physical_network': phynet,
                         'vlan_id': vlan, 'allocated': False}))

    def _get_client(self, pod_name):
        return FakeClient(pod_name)

    def _make_network_dict(self, network, fields=None,
                           process_extensions=True, context=None):
        az_hints_key = 'availability_zone_hints'
        if az_hints_key in network:
            ret = DotDict(network)
            az_str = network[az_hints_key]
            ret[az_hints_key] = jsonutils.loads(az_str) if az_str else []
            return ret
        return network

    def _make_subnet_dict(self, subnet, fields=None, context=None):
        return subnet

    def _make_port_dict(self, port, fields=None, process_extensions=True):
        if port['fixed_ips']:
            return port
        for allocation in TOP_IPALLOCATIONS:
            if allocation['port_id'] == port['id']:
                ret = {}
                for key, value in port.iteritems():
                    if key == 'fixed_ips':
                        ret[key] = [{'subnet_id': allocation['subnet_id'],
                                     'ip_address': allocation['ip_address']}]
                    else:
                        ret[key] = value
                return ret
        return port


def fake_get_context_from_neutron_context(q_context):
    return context.get_db_context()
@@ -488,6 +577,10 @@ def fake_make_subnet_dict(self, subnet, fields=None, context=None):
    return subnet


def fake_make_router_dict(self, router, fields=None, process_extensions=True):
    return router


@staticmethod
def fake_generate_ip(context, subnets):
    suffix = 1
@@ -512,6 +605,7 @@ class PluginTest(unittest.TestCase):
    def setUp(self):
        core.initialize()
        core.ModelBase.metadata.create_all(core.get_engine())
        cfg.CONF.register_opts(q_config.core_opts)
        self.context = context.Context()

    def _basic_pod_route_setup(self):
@@ -708,7 +802,7 @@ class PluginTest(unittest.TestCase):
        mock_context.return_value = tricircle_context

        network = {'network': {
            'id': 'net_id', 'name': 'net_az',
            'id': 'net_id', 'name': 'net_az', 'tenant_id': 'test_tenant_id',
            'admin_state_up': True, 'shared': False,
            'availability_zone_hints': ['az_name_1', 'az_name_2']}}
        fake_plugin.create_network(neutron_context, network)
@@ -736,11 +830,12 @@ class PluginTest(unittest.TestCase):

        # test _prepare_top_element
        pool_id = fake_plugin._get_bridge_subnet_pool_id(
            t_ctx, q_ctx, 'project_id', t_pod)
            t_ctx, q_ctx, 'project_id', t_pod, True)
        net, subnet = fake_plugin._get_bridge_network_subnet(
            t_ctx, q_ctx, 'project_id', t_pod, pool_id)
        port = fake_plugin._get_bridge_interface(
            t_ctx, q_ctx, 'project_id', pod, net['id'], 'b_router_id')
            t_ctx, q_ctx, 'project_id', t_pod, pool_id, True)
        port = fake_plugin._get_bridge_interface(t_ctx, q_ctx, 'project_id',
                                                 pod, net['id'], 'b_router_id',
                                                 None, True)

        top_entry_map = {}
        with t_ctx.session.begin():
@@ -757,8 +852,8 @@ class PluginTest(unittest.TestCase):
        self.assertEqual(top_entry_map['port']['bottom_id'], port['id'])

        # test _prepare_bottom_element
        _, b_port_id = fake_plugin._get_bottom_bridge_elements(
            q_ctx, 'project_id', b_pod, net, subnet, port)
        _, b_port_id, _, _ = fake_plugin._get_bottom_bridge_elements(
            q_ctx, 'project_id', b_pod, net, False, subnet, port)
        b_port = fake_plugin._get_client(b_pod['pod_name']).get_ports(
            t_ctx, b_port_id)

@@ -786,7 +881,7 @@ class PluginTest(unittest.TestCase):
        t_net = {
            'id': t_net_id,
            'name': 'top_net',
            'availability_zone_hints': ['az_name_1'],
            'availability_zone_hints': '["az_name_1"]',
            'tenant_id': tenant_id
        }
        t_subnet = {
@@ -857,14 +952,16 @@ class PluginTest(unittest.TestCase):
        mock_rpc.assert_called_once_with(t_ctx, t_router_id)
        for b_net in BOTTOM1_NETS:
            if 'provider:segmentation_id' in b_net:
                self.assertEqual(TOP_VLANALLOCATIONS[0]['vlan_id'],
                                 b_net['provider:segmentation_id'])
        self.assertEqual(True, TOP_VLANALLOCATIONS[0]['allocated'])
        self.assertEqual(TOP_VLANALLOCATIONS[0]['vlan_id'],
                         TOP_SEGMENTS[0]['segmentation_id'])
                self.assertIn(b_net['provider:segmentation_id'], (2000, 2001))
        # only one VLAN allocated, for E-W bridge network
        allocations = [
            allocation['allocated'] for allocation in TOP_VLANALLOCATIONS]
        self.assertItemsEqual([True, False], allocations)
        for segment in TOP_SEGMENTS:
            self.assertIn(segment['segmentation_id'], (2000, 2001))

        bridge_port_name = constants.bridge_port_name % (tenant_id,
                                                         b_router_id)
        bridge_port_name = constants.ew_bridge_port_name % (tenant_id,
                                                            b_router_id)
        _, t_bridge_port_id = db_api.get_bottom_mappings_by_top_id(
            t_ctx, bridge_port_name, 'port')[0]
        _, b_bridge_port_id = db_api.get_bottom_mappings_by_top_id(
@@ -875,7 +972,7 @@ class PluginTest(unittest.TestCase):
        t_net = {
            'id': t_net_id,
            'name': 'another_top_net',
            'availability_zone_hints': ['az_name_1'],
            'availability_zone_hints': '["az_name_1"]',
            'tenant_id': tenant_id
        }
        t_subnet = {
@@ -906,6 +1003,12 @@ class PluginTest(unittest.TestCase):
        another_b_port = fake_plugin._get_client('pod_1').get_ports(
            q_ctx, another_b_port_id)

        t_ns_bridge_net_id = None
        for net in TOP_NETS:
            if net['name'].startswith('ns_bridge'):
                t_ns_bridge_net_id = net['id']
        # N-S bridge not created since no external network created
        self.assertIsNone(t_ns_bridge_net_id)
        calls = [mock.call(t_ctx, 'add_interface', b_router_id,
                           {'port_id': b_bridge_port_id}),
                 mock.call(t_ctx, 'add_interface', b_router_id,
@@ -915,6 +1018,190 @@ class PluginTest(unittest.TestCase):
        mock_action.assert_has_calls(calls)
        self.assertEqual(mock_action.call_count, 3)

    @patch.object(ipam_non_pluggable_backend.IpamNonPluggableBackend,
                  '_allocate_specific_ip', new=_allocate_specific_ip)
    @patch.object(ipam_non_pluggable_backend.IpamNonPluggableBackend,
                  '_generate_ip', new=fake_generate_ip)
    @patch.object(db_base_plugin_common.DbBasePluginCommon,
                  '_make_subnet_dict', new=fake_make_subnet_dict)
    @patch.object(subnet_alloc.SubnetAllocator, '_lock_subnetpool',
                  new=mock.Mock)
    @patch.object(FakeRPCAPI, 'configure_extra_routes')
    @patch.object(FakeClient, 'action_routers')
    @patch.object(context, 'get_context_from_neutron_context')
    def test_add_interface_with_external_network(self, mock_context,
                                                 mock_action, mock_rpc):
        self._basic_pod_route_setup()

        fake_plugin = FakePlugin()
        q_ctx = FakeNeutronContext()
        t_ctx = context.get_db_context()
        mock_context.return_value = t_ctx

        tenant_id = 'test_tenant_id'
        t_net_id, t_subnet_id, t_router_id = self._prepare_router_test(
            tenant_id)

        e_net_id = uuidutils.generate_uuid()
        e_net = {'id': e_net_id,
                 'name': 'ext-net',
                 'admin_state_up': True,
                 'shared': False,
                 'tenant_id': tenant_id,
                 'router:external': True,
                 'availability_zone_hints': '["pod_2"]'}
        TOP_NETS.append(e_net)

        t_port_id = fake_plugin.add_router_interface(
            q_ctx, t_router_id, {'subnet_id': t_subnet_id})['port_id']
        _, b_port_id = db_api.get_bottom_mappings_by_top_id(
            t_ctx, t_port_id, 'port')[0]
        b_port = fake_plugin._get_client('pod_1').get_ports(q_ctx, b_port_id)
        b_net_id = b_port['network_id']
        b_subnet_id = b_port['fixed_ips'][0]['subnet_id']
        _, map_net_id = db_api.get_bottom_mappings_by_top_id(
            t_ctx, t_net_id, 'network')[0]
        _, map_subnet_id = db_api.get_bottom_mappings_by_top_id(
            t_ctx, t_subnet_id, 'subnet')[0]
        _, b_router_id = db_api.get_bottom_mappings_by_top_id(
            t_ctx, t_router_id, 'router')[0]

        self.assertEqual(b_net_id, map_net_id)
        self.assertEqual(b_subnet_id, map_subnet_id)
        mock_rpc.assert_called_once_with(t_ctx, t_router_id)
        for b_net in BOTTOM1_NETS:
            if 'provider:segmentation_id' in b_net:
                self.assertIn(b_net['provider:segmentation_id'], (2000, 2001))
        # two VLANs allocated, for E-W and N-S bridge network
        allocations = [
            allocation['allocated'] for allocation in TOP_VLANALLOCATIONS]
        self.assertItemsEqual([True, True], allocations)
        for segment in TOP_SEGMENTS:
            self.assertIn(segment['segmentation_id'], (2000, 2001))

        bridge_port_name = constants.ew_bridge_port_name % (tenant_id,
                                                            b_router_id)
        _, t_bridge_port_id = db_api.get_bottom_mappings_by_top_id(
            t_ctx, bridge_port_name, 'port')[0]
        _, b_bridge_port_id = db_api.get_bottom_mappings_by_top_id(
            t_ctx, t_bridge_port_id, 'port')[0]

        t_net_id = uuidutils.generate_uuid()
        t_subnet_id = uuidutils.generate_uuid()
        t_net = {
            'id': t_net_id,
            'name': 'another_top_net',
            'availability_zone_hints': '["az_name_1"]',
            'tenant_id': tenant_id
        }
        t_subnet = {
            'id': t_subnet_id,
            'network_id': t_net_id,
            'name': 'another_top_subnet',
            'ip_version': 4,
            'cidr': '10.0.1.0/24',
            'allocation_pools': [],
            'enable_dhcp': True,
            'gateway_ip': '10.0.1.1',
            'ipv6_address_mode': '',
            'ipv6_ra_mode': '',
            'tenant_id': tenant_id
        }
        TOP_NETS.append(DotDict(t_net))
        TOP_SUBNETS.append(DotDict(t_subnet))

        # action_routers is mocked, manually add device_id
        for port in BOTTOM1_PORTS:
            if port['id'] == b_bridge_port_id:
                port['device_id'] = b_router_id

        another_t_port_id = fake_plugin.add_router_interface(
            q_ctx, t_router_id, {'subnet_id': t_subnet_id})['port_id']
        _, another_b_port_id = db_api.get_bottom_mappings_by_top_id(
            t_ctx, another_t_port_id, 'port')[0]

        for net in TOP_NETS:
            if net['name'].startswith('ns_bridge'):
                t_ns_bridge_net_id = net['id']
        for subnet in TOP_SUBNETS:
            if subnet['name'].startswith('ns_bridge'):
                t_ns_bridge_subnet_id = subnet['id']
        b_ns_bridge_net_id = db_api.get_bottom_id_by_top_id_pod_name(
            t_ctx, t_ns_bridge_net_id, 'pod_1', constants.RT_NETWORK)
        b_ns_bridge_subnet_id = db_api.get_bottom_id_by_top_id_pod_name(
            t_ctx, t_ns_bridge_subnet_id, 'pod_1', constants.RT_SUBNET)
        # internal network and external network are in different pods, need
        # to create N-S bridge network and set gateway, add_router_interface
        # is called two times, so add_gateway is also called two times.
        # add_interface is called three times because the second time
        # add_router_interface is called, bottom router is already attached
        # to E-W bridge network, only need to attach internal network to
        # bottom router
        calls = [mock.call(t_ctx, 'add_gateway', b_router_id,
                           {'network_id': b_ns_bridge_net_id,
                            'external_fixed_ips': [
                                {'subnet_id': b_ns_bridge_subnet_id,
                                 'ip_address': '100.128.0.2'}]}),
                 mock.call(t_ctx, 'add_interface', b_router_id,
                           {'port_id': b_bridge_port_id}),
                 mock.call(t_ctx, 'add_interface', b_router_id,
                           {'port_id': b_port['id']}),
                 mock.call(t_ctx, 'add_gateway', b_router_id,
                           {'network_id': b_ns_bridge_net_id,
                            'external_fixed_ips': [
                                {'subnet_id': b_ns_bridge_subnet_id,
                                 'ip_address': '100.128.0.2'}]}),
                 mock.call(t_ctx, 'add_interface', b_router_id,
                           {'port_id': another_b_port_id})]
        mock_action.assert_has_calls(calls)

        t_net_id = uuidutils.generate_uuid()
        t_subnet_id = uuidutils.generate_uuid()
        t_net = {
            'id': t_net_id,
            'name': 'another_top_net',
            'availability_zone_hints': '["az_name_2"]',
            'tenant_id': tenant_id
        }
        t_subnet = {
            'id': t_subnet_id,
|
||||
'network_id': t_net_id,
|
||||
'name': 'another_top_subnet',
|
||||
'ip_version': 4,
|
||||
'cidr': '10.0.2.0/24',
|
||||
'allocation_pools': [],
|
||||
'enable_dhcp': True,
|
||||
'gateway_ip': '10.0.2.1',
|
||||
'ipv6_address_mode': '',
|
||||
'ipv6_ra_mode': '',
|
||||
'tenant_id': tenant_id
|
||||
}
|
||||
TOP_NETS.append(DotDict(t_net))
|
||||
TOP_SUBNETS.append(DotDict(t_subnet))
|
||||
another_t_port_id = fake_plugin.add_router_interface(
|
||||
q_ctx, t_router_id, {'subnet_id': t_subnet_id})['port_id']
|
||||
_, another_b_port_id = db_api.get_bottom_mappings_by_top_id(
|
||||
t_ctx, another_t_port_id, 'port')[0]
|
||||
b_router_id = db_api.get_bottom_id_by_top_id_pod_name(
|
||||
t_ctx, t_router_id, 'pod_2', 'router')
|
||||
bridge_port_name = constants.ew_bridge_port_name % (tenant_id,
|
||||
b_router_id)
|
||||
_, t_bridge_port_id = db_api.get_bottom_mappings_by_top_id(
|
||||
t_ctx, bridge_port_name, 'port')[0]
|
||||
_, b_bridge_port_id = db_api.get_bottom_mappings_by_top_id(
|
||||
t_ctx, t_bridge_port_id, 'port')[0]
|
||||
# internal network and external network are in the same pod, no need
|
||||
# to create N-S bridge network when attaching router interface(N-S
|
||||
# bridge network is created when setting router external gateway), so
|
||||
# add_gateway is not called.
|
||||
calls = [mock.call(t_ctx, 'add_interface', b_router_id,
|
||||
{'port_id': b_bridge_port_id}),
|
||||
mock.call(t_ctx, 'add_interface', b_router_id,
|
||||
{'port_id': another_b_port_id})]
|
||||
mock_action.assert_has_calls(calls)
|
||||
# all together 7 times calling
|
||||
self.assertEqual(mock_action.call_count, 7)
|
||||
|
||||
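The test above pairs `assert_has_calls` with a separate `call_count` check. That combination is deliberate, given how `unittest.mock` verifies call sequences; a minimal illustration (the call arguments here are just placeholders, not the plugin's actual calls):

```python
from unittest import mock

m = mock.Mock()
m('add_gateway')
m('add_interface')
m('add_interface')

# assert_has_calls (with the default any_order=False) only checks that the
# expected calls appear consecutively, in order, somewhere in m.mock_calls;
# extra calls before or after the matched run are still allowed
m.assert_has_calls([mock.call('add_gateway'), mock.call('add_interface')])

# so a separate call_count assertion is needed to pin down the total
assert m.call_count == 3
```

This is why the test can issue two independent `assert_has_calls` for the two pods and then use `call_count` to rule out any unexpected extra calls.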
    @patch.object(ipam_non_pluggable_backend.IpamNonPluggableBackend,
                  '_allocate_specific_ip', new=_allocate_specific_ip)
    @patch.object(ipam_non_pluggable_backend.IpamNonPluggableBackend,
@ -985,7 +1272,7 @@ class PluginTest(unittest.TestCase):
                  new=mock.Mock)
    @patch.object(FakeRPCAPI, 'configure_extra_routes', new=mock.Mock)
    @patch.object(FakeClient, 'delete_ports')
    @patch.object(FakeClient, 'action_routers')
    @patch.object(FakeClient, 'add_interface_routers')
    @patch.object(context, 'get_context_from_neutron_context')
    def test_add_interface_exception_port_left(self, mock_context,
                                               mock_action, mock_delete):
@ -1016,7 +1303,376 @@ class PluginTest(unittest.TestCase):
        # bottom interface and bridge port
        self.assertEqual(2, len(BOTTOM1_PORTS))

    @patch.object(context, 'get_context_from_neutron_context')
    def test_create_external_network(self, mock_context):
        self._basic_pod_route_setup()

        fake_plugin = FakePlugin()
        q_ctx = FakeNeutronContext()
        t_ctx = context.get_db_context()
        mock_context.return_value = t_ctx

        # create external network without specifying pod name
        body = {
            'network': {
                'router:external': True,
            }
        }
        self.assertRaises(exceptions.ExternalNetPodNotSpecify,
                          fake_plugin.create_network, q_ctx, body)
        # create external network specifying az name
        body = {
            'network': {
                'router:external': True,
                'availability_zone_hints': ['az_name_1']
            }
        }
        self.assertRaises(exceptions.PodNotFound,
                          fake_plugin.create_network, q_ctx, body)
        body = {
            'network': {
                'name': 'ext-net',
                'admin_state_up': True,
                'shared': False,
                'tenant_id': 'test_tenant_id',
                'router:external': True,
                'availability_zone_hints': ['pod_1']
            }
        }
        top_net = fake_plugin.create_network(q_ctx, body)
        for net in BOTTOM1_NETS:
            if net.get('router:external'):
                bottom_net = net
        mappings = db_api.get_bottom_mappings_by_top_id(
            t_ctx, top_net['id'], constants.RT_NETWORK)
        self.assertEqual(mappings[0][1], bottom_net['id'])

    @patch.object(ipam_non_pluggable_backend.IpamNonPluggableBackend,
                  '_allocate_specific_ip', new=_allocate_specific_ip)
    @patch.object(ipam_non_pluggable_backend.IpamNonPluggableBackend,
                  '_generate_ip', new=fake_generate_ip)
    @patch.object(l3_db.L3_NAT_dbonly_mixin, '_make_router_dict',
                  new=fake_make_router_dict)
    @patch.object(db_base_plugin_common.DbBasePluginCommon,
                  '_make_subnet_dict', new=fake_make_subnet_dict)
    @patch.object(subnet_alloc.SubnetAllocator, '_lock_subnetpool',
                  new=mock.Mock)
    @patch.object(FakeClient, 'action_routers')
    @patch.object(context, 'get_context_from_neutron_context')
    def test_set_gateway(self, mock_context, mock_action):
        plugin_path = 'tricircle.tests.unit.network.test_plugin.FakePlugin'
        cfg.CONF.set_override('core_plugin', plugin_path)

        self._basic_pod_route_setup()

        fake_plugin = FakePlugin()
        q_ctx = FakeNeutronContext()
        t_ctx = context.get_db_context()
        mock_context.return_value = t_ctx

        tenant_id = 'test_tenant_id'
        t_net_body = {
            'name': 'ext_net',
            'availability_zone_hints': ['pod_1'],
            'tenant_id': tenant_id,
            'router:external': True,
            'admin_state_up': True,
            'shared': False,
        }
        fake_plugin.create_network(q_ctx, {'network': t_net_body})
        t_net_id = TOP_NETS[0]['id']

        t_subnet_body = {
            'network_id': t_net_id,  # only one network created
            'name': 'ext_subnet',
            'ip_version': 4,
            'cidr': '100.64.0.0/24',
            'allocation_pools': [],
            'enable_dhcp': False,
            'gateway_ip': '100.64.0.1',
            'dns_nameservers': '',
            'host_routes': '',
            'tenant_id': tenant_id
        }
        fake_plugin.create_subnet(q_ctx, {'subnet': t_subnet_body})
        t_subnet_id = TOP_SUBNETS[0]['id']

        t_router_id = uuidutils.generate_uuid()
        t_router = {
            'id': t_router_id,
            'name': 'router',
            'distributed': False,
            'tenant_id': tenant_id,
            'attached_ports': []
        }

        TOP_ROUTERS.append(DotDict(t_router))
        fake_plugin.update_router(
            q_ctx, t_router_id,
            {'router': {'external_gateway_info': {
                'network_id': TOP_NETS[0]['id'],
                'enable_snat': False,
                'external_fixed_ips': [{'subnet_id': TOP_SUBNETS[0]['id'],
                                        'ip_address': '100.64.0.5'}]}}})

        b_router_id = BOTTOM1_ROUTERS[0]['id']
        b_net_id = db_api.get_bottom_id_by_top_id_pod_name(
            t_ctx, t_net_id, 'pod_1', constants.RT_NETWORK)
        b_subnet_id = db_api.get_bottom_id_by_top_id_pod_name(
            t_ctx, t_subnet_id, 'pod_1', constants.RT_SUBNET)

        for subnet in TOP_SUBNETS:
            if subnet['name'].startswith('ns_bridge_subnet'):
                t_ns_bridge_subnet_id = subnet['id']
        b_ns_bridge_subnet_id = db_api.get_bottom_id_by_top_id_pod_name(
            t_ctx, t_ns_bridge_subnet_id, 'pod_1', constants.RT_SUBNET)
        body = {'network_id': b_net_id,
                'enable_snat': False,
                'external_fixed_ips': [{'subnet_id': b_subnet_id,
                                        'ip_address': '100.64.0.5'}]}
        calls = [mock.call(t_ctx, 'add_gateway', b_router_id, body),
                 mock.call(t_ctx, 'add_interface', b_router_id,
                           {'subnet_id': b_ns_bridge_subnet_id})]
        mock_action.assert_has_calls(calls)

    def _prepare_associate_floatingip_test(self, t_ctx, q_ctx, fake_plugin):
        tenant_id = 'test_tenant_id'
        self._basic_pod_route_setup()
        t_net_id, t_subnet_id, t_router_id = self._prepare_router_test(
            tenant_id)

        net_body = {
            'name': 'ext_net',
            'admin_state_up': True,
            'shared': False,
            'tenant_id': tenant_id,
            'router:external': True,
            'availability_zone_hints': ['pod_2']
        }
        e_net = fake_plugin.create_network(q_ctx, {'network': net_body})
        subnet_body = {
            'network_id': e_net['id'],
            'name': 'ext_subnet',
            'ip_version': 4,
            'cidr': '100.64.0.0/24',
            'allocation_pools': [{'start': '100.64.0.2',
                                  'end': '100.64.0.254'}],
            'enable_dhcp': False,
            'gateway_ip': '100.64.0.1',
            'dns_nameservers': '',
            'host_routes': '',
            'tenant_id': tenant_id
        }
        e_subnet = fake_plugin.create_subnet(q_ctx, {'subnet': subnet_body})
        # set external gateway
        fake_plugin.update_router(
            q_ctx, t_router_id,
            {'router': {'external_gateway_info': {
                'network_id': e_net['id'],
                'enable_snat': False,
                'external_fixed_ips': [{'subnet_id': e_subnet['id'],
                                        'ip_address': '100.64.0.5'}]}}})
        # create floating ip
        fip_body = {'floating_network_id': e_net['id'],
                    'tenant_id': tenant_id}
        fip = fake_plugin.create_floatingip(q_ctx, {'floatingip': fip_body})
        # add router interface
        fake_plugin.add_router_interface(q_ctx, t_router_id,
                                         {'subnet_id': t_subnet_id})
        # create internal port
        t_port_id = uuidutils.generate_uuid()
        b_port_id = uuidutils.generate_uuid()
        t_port = {
            'id': t_port_id,
            'network_id': t_net_id,
            'mac_address': 'fa:16:3e:96:41:03',
            'fixed_ips': [{'subnet_id': t_subnet_id,
                           'ip_address': '10.0.0.3'}]
        }
        b_port = {
            'id': b_port_id,
            'name': t_port_id,
            'network_id': db_api.get_bottom_id_by_top_id_pod_name(
                t_ctx, t_net_id, 'pod_1', constants.RT_NETWORK),
            'mac_address': 'fa:16:3e:96:41:03',
            'fixed_ips': [
                {'subnet_id': db_api.get_bottom_id_by_top_id_pod_name(
                    t_ctx, t_subnet_id, 'pod_1', constants.RT_SUBNET),
                 'ip_address': '10.0.0.3'}]
        }
        TOP_PORTS.append(t_port)
        BOTTOM1_PORTS.append(b_port)
        route = {'top_id': t_port_id,
                 'pod_id': 'pod_id_1',
                 'bottom_id': b_port_id,
                 'resource_type': constants.RT_PORT}
        with t_ctx.session.begin():
            core.create_resource(t_ctx, models.ResourceRouting, route)

        return t_port_id, b_port_id, fip, e_net

    @patch.object(ipam_non_pluggable_backend.IpamNonPluggableBackend,
                  '_allocate_specific_ip', new=_allocate_specific_ip)
    @patch.object(ipam_non_pluggable_backend.IpamNonPluggableBackend,
                  '_generate_ip', new=fake_generate_ip)
    @patch.object(l3_db.L3_NAT_dbonly_mixin, '_make_router_dict',
                  new=fake_make_router_dict)
    @patch.object(db_base_plugin_common.DbBasePluginCommon,
                  '_make_subnet_dict', new=fake_make_subnet_dict)
    @patch.object(subnet_alloc.SubnetAllocator, '_lock_subnetpool',
                  new=mock.Mock)
    @patch.object(l3_db.L3_NAT_dbonly_mixin, 'update_floatingip',
                  new=mock.Mock)
    @patch.object(FakeClient, 'create_floatingips')
    @patch.object(context, 'get_context_from_neutron_context')
    def test_associate_floatingip(self, mock_context, mock_create):
        plugin_path = 'tricircle.tests.unit.network.test_plugin.FakePlugin'
        cfg.CONF.set_override('core_plugin', plugin_path)

        fake_plugin = FakePlugin()
        q_ctx = FakeNeutronContext()
        t_ctx = context.get_db_context()
        mock_context.return_value = t_ctx

        (t_port_id, b_port_id,
         fip, e_net) = self._prepare_associate_floatingip_test(t_ctx, q_ctx,
                                                               fake_plugin)

        # associate floating ip
        fip_body = {'port_id': t_port_id}
        fake_plugin.update_floatingip(q_ctx, fip['id'],
                                      {'floatingip': fip_body})

        b_ext_net_id = db_api.get_bottom_id_by_top_id_pod_name(
            t_ctx, e_net['id'], 'pod_2', constants.RT_NETWORK)
        for port in BOTTOM2_PORTS:
            if port['name'] == 'ns_bridge_port':
                ns_bridge_port = port
        for net in TOP_NETS:
            if net['name'].startswith('ns_bridge'):
                b_bridge_net_id = db_api.get_bottom_id_by_top_id_pod_name(
                    t_ctx, net['id'], 'pod_1', constants.RT_NETWORK)
        calls = [mock.call(t_ctx,
                           {'floatingip': {
                               'floating_network_id': b_ext_net_id,
                               'floating_ip_address': fip[
                                   'floating_ip_address'],
                               'port_id': ns_bridge_port['id']}}),
                 mock.call(t_ctx,
                           {'floatingip': {
                               'floating_network_id': b_bridge_net_id,
                               'floating_ip_address': '100.128.0.2',
                               'port_id': b_port_id}})]
        mock_create.assert_has_calls(calls)

    @patch.object(ipam_non_pluggable_backend.IpamNonPluggableBackend,
                  '_allocate_specific_ip', new=_allocate_specific_ip)
    @patch.object(ipam_non_pluggable_backend.IpamNonPluggableBackend,
                  '_generate_ip', new=fake_generate_ip)
    @patch.object(l3_db.L3_NAT_dbonly_mixin, '_make_router_dict',
                  new=fake_make_router_dict)
    @patch.object(db_base_plugin_common.DbBasePluginCommon,
                  '_make_subnet_dict', new=fake_make_subnet_dict)
    @patch.object(subnet_alloc.SubnetAllocator, '_lock_subnetpool',
                  new=mock.Mock)
    @patch.object(l3_db.L3_NAT_dbonly_mixin, 'update_floatingip',
                  new=mock.Mock)
    @patch.object(FakeClient, 'create_floatingips')
    @patch.object(context, 'get_context_from_neutron_context')
    def test_associate_floatingip_port_not_bound(self, mock_context,
                                                 mock_create):
        plugin_path = 'tricircle.tests.unit.network.test_plugin.FakePlugin'
        cfg.CONF.set_override('core_plugin', plugin_path)

        fake_plugin = FakePlugin()
        q_ctx = FakeNeutronContext()
        t_ctx = context.get_db_context()
        mock_context.return_value = t_ctx

        (t_port_id, b_port_id,
         fip, e_net) = self._prepare_associate_floatingip_test(t_ctx, q_ctx,
                                                               fake_plugin)
        # remove bottom port for this test case
        for port in BOTTOM1_PORTS:
            if port['id'] == b_port_id:
                BOTTOM1_PORTS.remove(port)
                break
        filters = [{'key': 'top_id', 'comparator': 'eq', 'value': t_port_id}]
        with t_ctx.session.begin():
            core.delete_resources(t_ctx, models.ResourceRouting, filters)

        # associate floating ip
        fip_body = {'port_id': t_port_id}
        fake_plugin.update_floatingip(q_ctx, fip['id'],
                                      {'floatingip': fip_body})

        b_ext_net_id = db_api.get_bottom_id_by_top_id_pod_name(
            t_ctx, e_net['id'], 'pod_2', constants.RT_NETWORK)
        b_port_id = db_api.get_bottom_id_by_top_id_pod_name(
            t_ctx, t_port_id, 'pod_1', constants.RT_PORT)
        for port in BOTTOM2_PORTS:
            if port['name'] == 'ns_bridge_port':
                ns_bridge_port = port
        for net in TOP_NETS:
            if net['name'].startswith('ns_bridge'):
                b_bridge_net_id = db_api.get_bottom_id_by_top_id_pod_name(
                    t_ctx, net['id'], 'pod_1', constants.RT_NETWORK)
        calls = [mock.call(t_ctx,
                           {'floatingip': {
                               'floating_network_id': b_ext_net_id,
                               'floating_ip_address': fip[
                                   'floating_ip_address'],
                               'port_id': ns_bridge_port['id']}}),
                 mock.call(t_ctx,
                           {'floatingip': {
                               'floating_network_id': b_bridge_net_id,
                               'floating_ip_address': '100.128.0.2',
                               'port_id': b_port_id}})]
        mock_create.assert_has_calls(calls)

    @patch.object(ipam_non_pluggable_backend.IpamNonPluggableBackend,
                  '_allocate_specific_ip', new=_allocate_specific_ip)
    @patch.object(ipam_non_pluggable_backend.IpamNonPluggableBackend,
                  '_generate_ip', new=fake_generate_ip)
    @patch.object(l3_db.L3_NAT_dbonly_mixin, '_make_router_dict',
                  new=fake_make_router_dict)
    @patch.object(db_base_plugin_common.DbBasePluginCommon,
                  '_make_subnet_dict', new=fake_make_subnet_dict)
    @patch.object(subnet_alloc.SubnetAllocator, '_lock_subnetpool',
                  new=mock.Mock)
    @patch.object(l3_db.L3_NAT_dbonly_mixin, 'update_floatingip',
                  new=mock.Mock)
    @patch.object(FakePlugin, '_disassociate_floatingip')
    @patch.object(FakeClient, 'create_floatingips')
    @patch.object(context, 'get_context_from_neutron_context')
    def test_associate_floatingip_port_exception(
            self, mock_context, mock_create, mock_disassociate):
        plugin_path = 'tricircle.tests.unit.network.test_plugin.FakePlugin'
        cfg.CONF.set_override('core_plugin', plugin_path)

        fake_plugin = FakePlugin()
        q_ctx = FakeNeutronContext()
        t_ctx = context.get_db_context()
        mock_context.return_value = t_ctx

        (t_port_id, b_port_id,
         fip, e_net) = self._prepare_associate_floatingip_test(t_ctx, q_ctx,
                                                               fake_plugin)

        # associate floating ip and exception occurs
        mock_create.side_effect = q_exceptions.ConnectionFailed
        fip_body = {'port_id': t_port_id}
        self.assertRaises(q_exceptions.ConnectionFailed,
                          fake_plugin.update_floatingip, q_ctx, fip['id'],
                          {'floatingip': fip_body})
        mock_disassociate.assert_called_once_with(q_ctx, fip['id'])
        # check the association information is cleared
        self.assertIsNone(TOP_FLOATINGIPS[0]['fixed_port_id'])
        self.assertIsNone(TOP_FLOATINGIPS[0]['fixed_ip_address'])
        self.assertIsNone(TOP_FLOATINGIPS[0]['router_id'])

    def tearDown(self):
        core.ModelBase.metadata.drop_all(core.get_engine())
        for res in RES_LIST:
            del res[:]
        cfg.CONF.unregister_opts(q_config.core_opts)

@ -123,12 +123,14 @@ class XManagerTest(unittest.TestCase):
        port = {
            'network_id': network['id'],
            'device_id': router['id'],
            'device_owner': 'network:router_interface',
            'fixed_ips': [{'subnet_id': subnet['id'],
                           'ip_address': subnet['gateway_ip']}]
        }
        bridge_port = {
            'network_id': bridge_network['id'],
            'device_id': router['id'],
            'device_owner': 'network:router_interface',
            'fixed_ips': [{'subnet_id': bridge_subnet['id'],
                           'ip_address': bridge_subnet['gateway_ip']}]
        }
@ -153,6 +155,7 @@ class XManagerTest(unittest.TestCase):
                                 'gateway_ip': '10.0.3.1'})
        BOTTOM1_PORT.append({'network_id': 'network_3_id',
                             'device_id': 'router_1_id',
                             'device_owner': 'network:router_interface',
                             'fixed_ips': [{'subnet_id': 'subnet_3_id',
                                            'ip_address': '10.0.3.1'}]})

@ -143,20 +143,28 @@ class XManager(PeriodicTasks):
             b_inferfaces = bottom_client.list_ports(
                 ctx, filters=[{'key': 'device_id',
                                'comparator': 'eq',
-                               'value': b_router_ids[i]}])
+                               'value': b_router_ids[i]},
+                              {'key': 'device_owner',
+                               'comparator': 'eq',
+                               'value': 'network:router_interface'}])
             cidrs = []
             for b_inferface in b_inferfaces:
                 ip = b_inferface['fixed_ips'][0]['ip_address']
-                bridge_cidr = '100.0.0.0/8'
-                if netaddr.IPAddress(ip) in netaddr.IPNetwork(bridge_cidr):
+                ew_bridge_cidr = '100.0.0.0/9'
+                ns_bridge_cidr = '100.128.0.0/9'
+                if netaddr.IPAddress(ip) in netaddr.IPNetwork(ew_bridge_cidr):
                     router_bridge_ip_map[b_router_ids[i]] = ip
                     continue
+                if netaddr.IPAddress(ip) in netaddr.IPNetwork(ns_bridge_cidr):
+                    continue
                 b_subnet = bottom_client.get_subnets(
                     ctx, b_inferface['fixed_ips'][0]['subnet_id'])
                 cidrs.append(b_subnet['cidr'])
             router_cidr_map[b_router_ids[i]] = cidrs

         for i, b_router_id in enumerate(b_router_ids):
             if b_router_id not in router_bridge_ip_map:
                 continue
             bottom_client = self._get_client(pod_name=b_pods[i]['pod_name'])
             extra_routes = []
             for router_id, cidrs in router_cidr_map.iteritems():
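The hunk above splits the old single bridge CIDR `100.0.0.0/8` into two halves and classifies each router interface IP: E-W bridge interfaces feed the router-to-bridge-IP map, N-S bridge interfaces are skipped, and anything else is a tenant subnet whose CIDR becomes an extra route. A sketch of the same membership check using the stdlib `ipaddress` module instead of `netaddr` (`classify_bridge_ip` is a hypothetical helper name, not from this commit):

```python
import ipaddress

# Bridge CIDR split assumed by this commit: 100.0.0.0/8 divided in two
EW_BRIDGE_CIDR = "100.0.0.0/9"    # east-west:  100.0.0.0 - 100.127.255.255
NS_BRIDGE_CIDR = "100.128.0.0/9"  # north-south: 100.128.0.0 - 100.255.255.255


def classify_bridge_ip(ip):
    # mirrors the netaddr membership checks in configure_extra_routes
    addr = ipaddress.ip_address(ip)
    if addr in ipaddress.ip_network(EW_BRIDGE_CIDR):
        return 'ew'  # record in router_bridge_ip_map
    if addr in ipaddress.ip_network(NS_BRIDGE_CIDR):
        return 'ns'  # skip, N-S bridge interfaces carry no tenant CIDR
    return None      # tenant subnet interface, contributes an extra route
```

For example, the N-S gateway IP `100.128.0.2` used throughout the tests lands in the north-south half, while E-W bridge IPs stay below `100.128.0.0`.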