Upgrade the rst convention of the Reference Guide

We upgrade the rst convention by following the Documentation Contributor
Guide [1].

[1] https://docs.openstack.org/doc-contrib-guide

Change-Id: Id480cd24f5eed810e81af0f12e84a4a6db49247d
Partially-Implements: blueprint optimize-the-documentation-format

parent b42b1361ee
commit 0002de177e

@@ -5,7 +5,7 @@ Manila in Kolla
===============

Overview
~~~~~~~~

Currently, Kolla can deploy the following Manila services:

* manila-api

@@ -19,7 +19,7 @@ management of share types as well as share snapshots if a driver supports
them.

Important
~~~~~~~~~

For simplicity, this guide describes configuring the Shared File Systems
service to use the ``generic`` back end with the driver handles share

@@ -32,21 +32,25 @@ Before you proceed, ensure that Compute, Networking and Block storage
services are properly working.

Preparation and Deployment
~~~~~~~~~~~~~~~~~~~~~~~~~~

Cinder and Ceph are required; enable them in ``/etc/kolla/globals.yml``:

.. code-block:: console

   enable_cinder: "yes"
   enable_ceph: "yes"

.. end

Enable Manila and the generic back end in ``/etc/kolla/globals.yml``:

.. code-block:: console

   enable_manila: "yes"
   enable_manila_backend_generic: "yes"

.. end

By default Manila uses instance flavor id 100 for its file systems. For Manila
to work, either create a new nova flavor with id 100 (use *nova flavor-create*)

@@ -59,27 +63,32 @@ contents:

.. code-block:: console

   [generic]
   service_instance_flavor_id = 2

.. end

Verify Operation
~~~~~~~~~~~~~~~~

Verify operation of the Shared File Systems service. List service components
to verify successful launch of each process:

.. code-block:: console

   # manila service-list

   +------------------+----------------+------+---------+-------+----------------------------+-----------------+
   | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
   +------------------+----------------+------+---------+-------+----------------------------+-----------------+
   | manila-scheduler | controller | nova | enabled | up | 2014-10-18T01:30:54.000000 | None |
   | manila-share | share1@generic | nova | enabled | up | 2014-10-18T01:30:57.000000 | None |
   +------------------+----------------+------+---------+-------+----------------------------+-----------------+

.. end

Launch an Instance
~~~~~~~~~~~~~~~~~~

Before being able to create a share, Manila with the generic driver and the
DHSS mode enabled requires the definition of at least an image, a network and a

@@ -88,205 +97,232 @@ configuration, the share server is an instance where NFS/CIFS shares are
served.

Determine the configuration of the share server
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Create a default share type before running manila-share service:

.. code-block:: console

   # manila type-create default_share_type True

   +--------------------------------------+--------------------+------------+------------+-------------------------------------+-------------------------+
   | ID | Name | Visibility | is_default | required_extra_specs | optional_extra_specs |
   +--------------------------------------+--------------------+------------+------------+-------------------------------------+-------------------------+
   | 8a35da28-0f74-490d-afff-23664ecd4f01 | default_share_type | public | - | driver_handles_share_servers : True | snapshot_support : True |
   +--------------------------------------+--------------------+------------+------------+-------------------------------------+-------------------------+

.. end

Add a Manila share server image to the Image service:

.. code-block:: console

   # wget http://tarballs.openstack.org/manila-image-elements/images/manila-service-image-master.qcow2
   # glance image-create --name "manila-service-image" \
     --file manila-service-image-master.qcow2 \
     --disk-format qcow2 --container-format bare \
     --visibility public --progress

   [=============================>] 100%
   +------------------+--------------------------------------+
   | Property | Value |
   +------------------+--------------------------------------+
   | checksum | 48a08e746cf0986e2bc32040a9183445 |
   | container_format | bare |
   | created_at | 2016-01-26T19:52:24Z |
   | disk_format | qcow2 |
   | id | 1fc7f29e-8fe6-44ef-9c3c-15217e83997c |
   | min_disk | 0 |
   | min_ram | 0 |
   | name | manila-service-image |
   | owner | e2c965830ecc4162a002bf16ddc91ab7 |
   | protected | False |
   | size | 306577408 |
   | status | active |
   | tags | [] |
   | updated_at | 2016-01-26T19:52:28Z |
   | virtual_size | None |
   | visibility | public |
   +------------------+--------------------------------------+

.. end

List available networks to get id and subnets of the private network:

.. code-block:: console

   +--------------------------------------+---------+----------------------------------------------------+
   | id | name | subnets |
   +--------------------------------------+---------+----------------------------------------------------+
   | 0e62efcd-8cee-46c7-b163-d8df05c3c5ad | public | 5cc70da8-4ee7-4565-be53-b9c011fca011 10.3.31.0/24 |
   | 7c6f9b37-76b4-463e-98d8-27e5686ed083 | private | 3482f524-8bff-4871-80d4-5774c2730728 172.16.1.0/24 |
   +--------------------------------------+---------+----------------------------------------------------+

.. end

Create a shared network:

.. code-block:: console

   # manila share-network-create --name demo-share-network1 \
     --neutron-net-id PRIVATE_NETWORK_ID \
     --neutron-subnet-id PRIVATE_NETWORK_SUBNET_ID

   +-------------------+--------------------------------------+
   | Property | Value |
   +-------------------+--------------------------------------+
   | name | demo-share-network1 |
   | segmentation_id | None |
   | created_at | 2016-01-26T20:03:41.877838 |
   | neutron_subnet_id | 3482f524-8bff-4871-80d4-5774c2730728 |
   | updated_at | None |
   | network_type | None |
   | neutron_net_id | 7c6f9b37-76b4-463e-98d8-27e5686ed083 |
   | ip_version | None |
   | nova_net_id | None |
   | cidr | None |
   | project_id | e2c965830ecc4162a002bf16ddc91ab7 |
   | id | 58b2f0e6-5509-4830-af9c-97f525a31b14 |
   | description | None |
   +-------------------+--------------------------------------+

.. end

Create a flavor (**Required** if you did not define *manila_instance_flavor_id*
in the ``/etc/kolla/config/manila-share.conf`` file):

.. code-block:: console

   # nova flavor-create manila-service-flavor 100 128 0 1

.. end

Create a share
~~~~~~~~~~~~~~

Create a NFS share using the share network:

.. code-block:: console

   # manila create NFS 1 --name demo-share1 --share-network demo-share-network1

   +-----------------------------+--------------------------------------+
   | Property | Value |
   +-----------------------------+--------------------------------------+
   | status | None |
   | share_type_name | None |
   | description | None |
   | availability_zone | None |
   | share_network_id | None |
   | export_locations | [] |
   | host | None |
   | snapshot_id | None |
   | is_public | False |
   | task_state | None |
   | snapshot_support | True |
   | id | 016ca18f-bdd5-48e1-88c0-782e4c1aa28c |
   | size | 1 |
   | name | demo-share1 |
   | share_type | None |
   | created_at | 2016-01-26T20:08:50.502877 |
   | export_location | None |
   | share_proto | NFS |
   | consistency_group_id | None |
   | source_cgsnapshot_member_id | None |
   | project_id | 48e8c35b2ac6495d86d4be61658975e7 |
   | metadata | {} |
   +-----------------------------+--------------------------------------+

.. end

After some time, the share status should change from ``creating``
to ``available``:

.. code-block:: console

   # manila list

   +--------------------------------------+-------------+------+-------------+-----------+-----------+--------------------------------------+-----------------------------+-------------------+
   | ID | Name | Size | Share Proto | Status | Is Public | Share Type Name | Host | Availability Zone |
   +--------------------------------------+-------------+------+-------------+-----------+-----------+--------------------------------------+-----------------------------+-------------------+
   | e1e06b14-ba17-48d4-9e0b-ca4d59823166 | demo-share1 | 1 | NFS | available | False | default_share_type | share1@generic#GENERIC | nova |
   +--------------------------------------+-------------+------+-------------+-----------+-----------+--------------------------------------+-----------------------------+-------------------+

.. end

Configure user access to the new share before attempting to mount it via the
network:

.. code-block:: console

   # manila access-allow demo-share1 ip INSTANCE_PRIVATE_NETWORK_IP

.. end

Mount the share from an instance
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Get the export location from the share:

.. code-block:: console

   # manila show demo-share1

   +-----------------------------+----------------------------------------------------------------------+
   | Property | Value |
   +-----------------------------+----------------------------------------------------------------------+
   | status | available |
   | share_type_name | default_share_type |
   | description | None |
   | availability_zone | nova |
   | share_network_id | fa07a8c3-598d-47b5-8ae2-120248ec837f |
   | export_locations | |
   | | path = 10.254.0.3:/shares/share-422dc546-8f37-472b-ac3c-d23fe410d1b6 |
   | | preferred = False |
   | | is_admin_only = False |
   | | id = 5894734d-8d9a-49e4-b53e-7154c9ce0882 |
   | | share_instance_id = 422dc546-8f37-472b-ac3c-d23fe410d1b6 |
   | share_server_id | 4782feef-61c8-4ffb-8d95-69fbcc380a52 |
   | host | share1@generic#GENERIC |
   | access_rules_status | active |
   | snapshot_id | None |
   | is_public | False |
   | task_state | None |
   | snapshot_support | True |
   | id | e1e06b14-ba17-48d4-9e0b-ca4d59823166 |
   | size | 1 |
   | name | demo-share1 |
   | share_type | 6e1e803f-1c37-4660-a65a-c1f2b54b6e17 |
   | has_replicas | False |
   | replication_type | None |
   | created_at | 2016-03-15T18:59:12.000000 |
   | share_proto | NFS |
   | consistency_group_id | None |
   | source_cgsnapshot_member_id | None |
   | project_id | 9dc02df0f2494286ba0252b3c81c01d0 |
   | metadata | {} |
   +-----------------------------+----------------------------------------------------------------------+

.. end

Create a folder where the mount will be placed:

.. code-block:: console

   # mkdir ~/test_folder

.. end

Mount the NFS share in the instance using the export location of the share:

.. code-block:: console

   # mount -v 10.254.0.3:/shares/share-422dc546-8f37-472b-ac3c-d23fe410d1b6 ~/test_folder

.. end
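
If the mount succeeds, the share behaves like any other NFS mount inside the
instance; a quick sanity check (using the ``~/test_folder`` mount point from the
previous step) could be:

.. code-block:: console

   # df -h ~/test_folder

.. end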

Share Migration
~~~~~~~~~~~~~~~

As administrator, you can migrate a share with its data from one location to
another in a manner that is transparent to users and workloads. You can use

@@ -297,25 +333,29 @@ provider network for ``data_node_access_ip``.

Modify the file ``/etc/kolla/config/manila.conf`` and add the contents:

.. path /etc/kolla/config/manila.conf
.. code-block:: ini

   [DEFAULT]
   data_node_access_ip = 10.10.10.199

.. end

.. note::

   Share migration requires having more than one back end configured.
   For details, see :ref:`hnas_configure_multiple_back_ends`.

Use the manila migration command, as shown in the following example:

.. code-block:: console

   # manila migration-start --preserve-metadata True|False \
     --writable True|False --force_host_assisted_migration True|False \
     --new_share_type share_type --new_share_network share_network \
     shareID destinationHost

.. end

- ``--force-host-copy``: Forces the generic host-based migration mechanism and
  bypasses any driver optimizations.

@@ -328,28 +368,31 @@ Use the manila migration command, as shown in the following example:

Checking share migration progress
---------------------------------

Use the :command:`manila migration-get-progress shareID` command to check progress.

.. code-block:: console

   # manila migration-get-progress demo-share1

   +----------------+-----------------------+
   | Property       | Value                 |
   +----------------+-----------------------+
   | task_state     | data_copying_starting |
   | total_progress | 0                     |
   +----------------+-----------------------+

   # manila migration-get-progress demo-share1

   +----------------+-------------------------+
   | Property       | Value                   |
   +----------------+-------------------------+
   | task_state     | data_copying_completing |
   | total_progress | 100                     |
   +----------------+-------------------------+

.. end

Use the :command:`manila migration-complete shareID` command to complete the
share migration process.
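
For example, to finish the migration of the demo share used earlier in this
guide (the share name below is only illustrative):

.. code-block:: console

   # manila migration-complete demo-share1

.. end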

For more information about how to manage shares, see the
`Manage shares

@@ -5,7 +5,7 @@ Hitachi NAS Platform File Services Driver for OpenStack
========================================================

Overview
~~~~~~~~
The Hitachi NAS Platform File Services Driver for OpenStack
provides NFS Shared File Systems to OpenStack.

@@ -54,12 +54,12 @@ The following operations are supported:


Preparation and Deployment
~~~~~~~~~~~~~~~~~~~~~~~~~~

.. note::

   The manila-share node only requires the HNAS EVS data interface if you
   plan to use share migration.

.. important ::

@@ -75,10 +75,12 @@ Configuration on Kolla deployment

Enable Shared File Systems service and HNAS driver in
``/etc/kolla/globals.yml``:

.. code-block:: yaml

   enable_manila: "yes"
   enable_manila_backend_hnas: "yes"

.. end

Configure the OpenStack networking so it can reach HNAS Management
interface and HNAS EVS Data interface.

@@ -88,31 +90,31 @@ ports eth1 and eth2 associated respectively:

In ``/etc/kolla/globals.yml`` set:

.. code-block:: yaml

   neutron_bridge_name: "br-ex,br-ex2"
   neutron_external_interface: "eth1,eth2"

.. end

.. note::

   ``eth1`` is used for the Neutron external interface and ``eth2`` is
   used for the HNAS EVS data interface.

HNAS back end configuration
---------------------------

In ``/etc/kolla/globals.yml`` uncomment and set:

.. code-block:: yaml

   hnas_ip: "172.24.44.15"
   hnas_user: "supervisor"
   hnas_password: "supervisor"
   hnas_evs_id: "1"
   hnas_evs_ip: "10.0.1.20"
   hnas_file_system_name: "FS-Manila"

Configuration on HNAS
---------------------

@@ -123,7 +125,9 @@ List the available tenants:

.. code-block:: console

   $ openstack project list

.. end

Create a network to the given tenant (service), providing the tenant ID,
a name for the network, the name of the physical network over which the

@@ -132,14 +136,18 @@ which the virtual network is implemented:

.. code-block:: console

   $ neutron net-create --tenant-id <SERVICE_ID> hnas_network \
     --provider:physical_network=physnet2 --provider:network_type=flat

.. end

*Optional* - List available networks:

.. code-block:: console

   $ neutron net-list

.. end

Create a subnet to the same tenant (service), the gateway IP of this subnet,
a name for the subnet, the network ID created before, and the CIDR of

@@ -147,28 +155,34 @@ subnet:

.. code-block:: console

   $ neutron subnet-create --tenant-id <SERVICE_ID> --gateway <GATEWAY> \
     --name hnas_subnet <NETWORK_ID> <SUBNET_CIDR>

.. end

*Optional* - List available subnets:

.. code-block:: console

   $ neutron subnet-list

.. end

Add the subnet interface to a router, providing the router ID and subnet
ID created before:

.. code-block:: console

   $ neutron router-interface-add <ROUTER_ID> <SUBNET_ID>

.. end
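
*Optional* - List available routers to find the router ID (an extra convenience
step, not part of the original procedure):

.. code-block:: console

   $ neutron router-list

.. end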

Create a file system on HNAS. See the `Hitachi HNAS reference <http://www.hds.com/assets/pdf/hus-file-module-file-services-administration-guide.pdf>`_.

.. important ::

   Make sure that the filesystem is not created as a replication target.
   Refer to the official HNAS administration guide.

Prepare the HNAS EVS network.

@@ -176,99 +190,110 @@ Create a route in HNAS to the tenant network:

.. code-block:: console

   $ console-context --evs <EVS_ID_IN_USE> route-net-add --gateway <FLAT_NETWORK_GATEWAY> \
     <TENANT_PRIVATE_NETWORK>

.. end

.. important ::

   Make sure multi-tenancy is enabled and routes are configured per EVS.

.. code-block:: console

   $ console-context --evs 3 route-net-add --gateway 192.168.1.1 \
     10.0.0.0/24

.. end

Create a share
~~~~~~~~~~~~~~

Create a default share type before running manila-share service:

.. code-block:: console

   $ manila type-create default_share_hitachi False

   +--------------------------------------+-----------------------+------------+------------+--------------------------------------+-------------------------+
   | ID | Name | visibility | is_default | required_extra_specs | optional_extra_specs |
   +--------------------------------------+-----------------------+------------+------------+--------------------------------------+-------------------------+
   | 3e54c8a2-1e50-455e-89a0-96bb52876c35 | default_share_hitachi | public | - | driver_handles_share_servers : False | snapshot_support : True |
   +--------------------------------------+-----------------------+------------+------------+--------------------------------------+-------------------------+

.. end

Create a NFS share using the HNAS back end:

.. code-block:: console

   $ manila create NFS 1 \
     --name mysharehnas \
     --description "My Manila share" \
     --share-type default_share_hitachi

.. end

Verify Operation:

.. code-block:: console

   $ manila list

   +--------------------------------------+----------------+------+-------------+-----------+-----------+-----------------------+-------------------------+-------------------+
   | ID | Name | Size | Share Proto | Status | Is Public | Share Type Name | Host | Availability Zone |
   +--------------------------------------+----------------+------+-------------+-----------+-----------+-----------------------+-------------------------+-------------------+
   | 721c0a6d-eea6-41af-8c10-72cd98985203 | mysharehnas | 1 | NFS | available | False | default_share_hitachi | control@hnas1#HNAS1 | nova |
   +--------------------------------------+----------------+------+-------------+-----------+-----------+-----------------------+-------------------------+-------------------+

.. end

.. code-block:: console

   $ manila show mysharehnas

   +-----------------------------+-----------------------------------------------------------------+
   | Property | Value |
   +-----------------------------+-----------------------------------------------------------------+
   | status | available |
   | share_type_name | default_share_hitachi |
   | description | My Manila share |
   | availability_zone | nova |
   | share_network_id | None |
   | export_locations | |
   | | path = 172.24.53.1:/shares/45ed6670-688b-4cf0-bfe7-34956648fb84 |
   | | preferred = False |
   | | is_admin_only = False |
   | | id = e81e716f-f1bd-47b2-8a56-2c2f9e33a98e |
   | | share_instance_id = 45ed6670-688b-4cf0-bfe7-34956648fb84 |
   | share_server_id | None |
   | host | control@hnas1#HNAS1 |
   | access_rules_status | active |
   | snapshot_id | None |
   | is_public | False |
   | task_state | None |
   | snapshot_support | True |
   | id | 721c0a6d-eea6-41af-8c10-72cd98985203 |
   | size | 1 |
   | user_id | ba7f6d543713488786b4b8cb093e7873 |
   | name | mysharehnas |
   | share_type | 3e54c8a2-1e50-455e-89a0-96bb52876c35 |
   | has_replicas | False |
   | replication_type | None |
   | created_at | 2016-10-14T14:50:47.000000 |
   | share_proto | NFS |
   | consistency_group_id | None |
   | source_cgsnapshot_member_id | None |
   | project_id | c3810d8bcc3346d0bdc8100b09abbbf1 |
   | metadata | {} |
   +-----------------------------+-----------------------------------------------------------------+

.. end

.. _hnas_configure_multiple_back_ends:

Configure multiple back ends
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

An administrator can configure an instance of Manila to provision shares from
one or more back ends. Each back end leverages an instance of a vendor-specific

@@ -283,45 +308,51 @@ the default share backends before deployment.

Modify the file ``/etc/kolla/config/manila.conf`` and add the contents:

.. path /etc/kolla/config/manila.conf
.. code-block:: ini

   [DEFAULT]
   enabled_share_backends = generic,hnas1,hnas2

.. end

Modify the file ``/etc/kolla/config/manila-share.conf`` and add the contents:

.. path /etc/kolla/config/manila-share.conf
.. code-block:: ini

   [generic]
   share_driver = manila.share.drivers.generic.GenericShareDriver
   interface_driver = manila.network.linux.interface.OVSInterfaceDriver
   driver_handles_share_servers = True
   service_instance_password = manila
   service_instance_user = manila
   service_image_name = manila-service-image
   share_backend_name = GENERIC

   [hnas1]
   share_backend_name = HNAS1
   share_driver = manila.share.drivers.hitachi.hnas.driver.HitachiHNASDriver
   driver_handles_share_servers = False
   hitachi_hnas_ip = <hnas_ip>
   hitachi_hnas_user = <user>
   hitachi_hnas_password = <password>
   hitachi_hnas_evs_id = <evs_id>
   hitachi_hnas_evs_ip = <evs_ip>
   hitachi_hnas_file_system_name = FS-Manila1

   [hnas2]
   share_backend_name = HNAS2
   share_driver = manila.share.drivers.hitachi.hnas.driver.HitachiHNASDriver
   driver_handles_share_servers = False
   hitachi_hnas_ip = <hnas_ip>
   hitachi_hnas_user = <user>
   hitachi_hnas_password = <password>
   hitachi_hnas_evs_id = <evs_id>
   hitachi_hnas_evs_ip = <evs_ip>
   hitachi_hnas_file_system_name = FS-Manila2

.. end
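
To apply the multiple back end configuration to an already running deployment,
re-run the deployment tooling, for example (the inventory path is
environment-specific):

.. code-block:: console

   # kolla-ansible -i <inventory> reconfigure

.. end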

For more information about how to manage shares, see the
`Manage shares

@@ -1,8 +1,16 @@
.. _networking-guide:

===================
Networking in Kolla
===================

Kolla deploys Neutron by default as the OpenStack networking component. This
section describes configuring and running Neutron extensions like LBaaS,
Networking-SFC, QoS, and so on.

Enabling Provider Networks
==========================

Provider networks allow connecting compute instances directly to physical
networks, avoiding tunnels. This is necessary, for example, for some
performance-critical applications. Only administrators of OpenStack can create such

@@ -12,54 +20,52 @@ DVR mode networking. Normal tenant non-DVR networking does not need external
bridge on compute hosts and therefore operators don't need an additional
dedicated network interface.

To enable provider networks, modify the ``/etc/kolla/globals.yml`` file
as the following example shows:

.. code-block:: yaml

   enable_neutron_provider_networks: "yes"

.. end

Enabling Neutron Extensions
===========================

Networking-SFC
~~~~~~~~~~~~~~

Preparation and deployment
--------------------------

Modify the ``/etc/kolla/globals.yml`` file as the following example shows:

.. code-block:: yaml

   enable_neutron_sfc: "yes"

.. end

Verification
------------

For setting up a testbed environment and creating a port chain, please refer
to `networking-sfc documentation
<https://docs.openstack.org/networking-sfc/latest/contributor/system_design_and_workflow.html>`__.

Neutron VPNaaS (VPN-as-a-Service)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Preparation and deployment
--------------------------

Modify the ``/etc/kolla/globals.yml`` file as the following example shows:

.. code-block:: yaml

   enable_neutron_vpnaas: "yes"

.. end

Verification
------------

@@ -70,58 +76,60 @@ simple smoke test to verify the service is up and running.
On the network node(s), the ``neutron_vpnaas_agent`` should be up (image naming
and versioning may differ depending on deploy configuration):

.. code-block:: console

   # docker ps --filter name=neutron_vpnaas_agent

   CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
   97d25657d55e operator:5000/kolla/oraclelinux-source-neutron-vpnaas-agent:4.0.0 "kolla_start" 44 minutes ago Up 44 minutes neutron_vpnaas_agent

.. end

Kolla-Ansible includes a small script that can be used in tandem with
``tools/init-runonce`` to verify the VPN using two routers and two Nova VMs:

.. code-block:: console

   tools/init-runonce
   tools/init-vpn

.. end

Verify both VPN services are active:

.. code-block:: console

   # neutron vpn-service-list

   +--------------------------------------+----------+--------------------------------------+--------+
   | id | name | router_id | status |
   +--------------------------------------+----------+--------------------------------------+--------+
   | ad941ec4-5f3d-4a30-aae2-1ab3f4347eb1 | vpn_west | 051f7ce3-4301-43cc-bfbd-7ffd59af539e | ACTIVE |
   | edce15db-696f-46d8-9bad-03d087f1f682 | vpn_east | 058842e0-1d01-4230-af8d-0ba6d0da8b1f | ACTIVE |
   +--------------------------------------+----------+--------------------------------------+--------+

.. end

Two VMs can now be booted, one on vpn_east, the other on vpn_west, and
encrypted ping packets observed being sent from one to the other.
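
One way to confirm that the traffic really is encrypted is to capture ESP
packets on the external interface of a network node while pinging between the
two VMs (the interface name below is illustrative):

.. code-block:: console

   # tcpdump -ni eth0 esp

.. end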

For more information on this and VPNaaS in Neutron refer to the
`Neutron VPNaaS Testing <https://docs.openstack.org/neutron-vpnaas/latest/contributor/index.html#testing>`__
and the `OpenStack wiki <https://wiki.openstack.org/wiki/Neutron/VPNaaS>`_.

Networking-ODL
~~~~~~~~~~~~~~

Preparation and deployment
--------------------------

Modify the ``/etc/kolla/globals.yml`` file as the following example shows:

.. code-block:: yaml

   enable_opendaylight: "yes"

.. end

Networking-ODL is an additional Neutron plugin that allows the OpenDaylight
SDN Controller to utilize its networking virtualization features.

@@ -130,26 +138,33 @@ the ``neutron-server`` container. In this case, one could use the
neutron-server-opendaylight container and the opendaylight container by
pulling from Kolla dockerhub or by building them locally.

OpenDaylight ``globals.yml`` configurable options with their defaults include:

.. code-block:: yaml

   opendaylight_release: "0.6.1-Carbon"
   opendaylight_mechanism_driver: "opendaylight_v2"
   opendaylight_l3_service_plugin: "odl-router_v2"
   opendaylight_acl_impl: "learn"
   enable_opendaylight_qos: "no"
   enable_opendaylight_l3: "yes"
   enable_opendaylight_legacy_netvirt_conntrack: "no"
   opendaylight_port_binding_type: "pseudo-agentdb-binding"
   opendaylight_features: "odl-mdsal-apidocs,odl-netvirt-openstack"
   opendaylight_allowed_network_types: '"flat", "vlan", "vxlan"'

.. end

Clustered OpenDaylight Deploy
-----------------------------

High availability clustered OpenDaylight requires modifying the inventory file
and placing three or more hosts in the OpenDaylight or Networking groups.

.. note::

   The OpenDaylight role allows deployment of one host, or three or more hosts,
   for the OpenDaylight/Networking role.
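
A minimal inventory sketch for such a clustered layout (host and group names
are illustrative and must match your own inventory) could look like:

.. code-block:: ini

   [network]
   network01
   network02
   network03

.. end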

Verification
------------

@@ -159,12 +174,10 @@ deployment will bring up an Opendaylight container in the list of running
containers on network/opendaylight node.
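
For example, on the network/opendaylight node (image names and versions will
vary with your deployment):

.. code-block:: console

   # docker ps --filter name=opendaylight

.. end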

For the source code, please refer to the following link:

https://github.com/openstack/networking-odl

OVS with DPDK
~~~~~~~~~~~~~

Introduction
------------

@@ -205,36 +218,41 @@ it is advised to allocate them via the kernel command line instead to prevent
memory fragmentation. This can be achieved by adding the following to the grub
config and regenerating your grub file.

.. code-block:: none

   default_hugepagesz=2M hugepagesz=2M hugepages=25000

.. end
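
How the grub file is regenerated is distro-specific; on RHEL/CentOS family
hosts, for example, it is typically done as shown below, followed by a reboot
so the hugepages are actually allocated:

.. code-block:: console

   # grub2-mkconfig -o /boot/grub2/grub.cfg

.. end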

As DPDK is a userspace networking library, it requires userspace-compatible
drivers to be able to control the physical interfaces on the platform.
DPDK technically supports 3 kernel drivers: ``igb_uio``, ``uio_pci_generic``, and
``vfio_pci``.
While it is technically possible to use all 3, only ``uio_pci_generic`` and
``vfio_pci`` are recommended for use with Kolla. ``igb_uio`` is BSD licensed
and distributed as part of the DPDK library. While it has some advantages over
``uio_pci_generic``, loading the ``igb_uio`` module will taint the kernel and
possibly invalidate distro support. To successfully deploy ``ovs-dpdk``, the
``vfio_pci`` or ``uio_pci_generic`` kernel module must be present on the platform.
Most distros include ``vfio_pci`` or ``uio_pci_generic`` as part of the default
kernel, though on some distros you may need to install ``kernel-modules-extra``
or the distro equivalent prior to running :command:`kolla-ansible deploy`.
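
If the chosen module is not loaded by default on your hosts, it can be loaded
manually before deployment, for example with ``vfio-pci`` (making this
persistent across reboots is distro-specific):

.. code-block:: console

   # modprobe vfio-pci

.. end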

Installation
------------

To enable ovs-dpdk, add the following configuration to the
``/etc/kolla/globals.yml`` file:

.. code-block:: yaml

   ovs_datapath: "netdev"
   enable_ovs_dpdk: yes
   enable_openvswitch: yes
   tunnel_interface: "dpdk_bridge"
   neutron_bridge_name: "dpdk_bridge"

.. end

Unlike standard Open vSwitch deployments, the interface specified by
neutron_external_interface should have an IP address assigned.
|
||||
@ -272,161 +290,192 @@ prior to upgrading.
|
||||
On ubuntu network manager is required for tunnel networking.
|
||||
This requirement will be removed in the future.
|
||||
|
||||
|
||||
Neutron SRIOV
|
||||
=============
|
||||
~~~~~~~~~~~~~
|
||||
|
||||
Preparation and deployment
|
||||
--------------------------
|
||||
|
||||
SRIOV requires specific NIC and BIOS configuration and is not supported on all
|
||||
platforms. Consult NIC and platform specific documentation for instructions
|
||||
platforms. Consult NIC and platform specific documentation for instructions
|
||||
on enablement.
|
||||
|
||||
Modify the configuration file ``/etc/kolla/globals.yml``:
|
||||
Modify the ``/etc/kolla/globals.yml`` file as the following example shows:
|
||||
|
||||
::
|
||||
.. code-block:: yaml
|
||||
|
||||
enable_neutron_sriov: "yes"
|
||||
enable_neutron_sriov: "yes"
|
||||
|
||||
Modify the file ``/etc/kolla/config/neutron/ml2_conf.ini``. Add ``sriovnicswitch``
|
||||
to the mechanism drivers and add the provider networks for use by SRIOV. Both
|
||||
flat and VLAN are configured with the same physical network name in this example:
|
||||
.. end
|
||||
|
||||
::
|
||||
Modify the ``/etc/kolla/config/neutron/ml2_conf.ini`` file and add ``sriovnicswitch``
|
||||
to the ``mechanism_drivers``. Also, the provider networks used by SRIOV should be configured.
|
||||
Both flat and VLAN are configured with the same physical network name in this example:
|
||||
|
||||
[ml2]
|
||||
mechanism_drivers = openvswitch,l2population,sriovnicswitch
|
||||
.. path /etc/kolla/config/neutron/ml2_conf.ini
|
||||
.. code-block:: ini
|
||||
|
||||
[ml2_type_vlan]
|
||||
network_vlan_ranges = sriovtenant1:1000:1009
|
||||
[ml2]
|
||||
mechanism_drivers = openvswitch,l2population,sriovnicswitch
|
||||
|
||||
[ml2_type_flat]
|
||||
flat_networks = sriovtenant1
|
||||
[ml2_type_vlan]
|
||||
network_vlan_ranges = sriovtenant1:1000:1009
|
||||
|
||||
[ml2_type_flat]
|
||||
flat_networks = sriovtenant1
|
||||
|
||||
Modify the file ``/etc/kolla/config/nova.conf``. The Nova Scheduler service
on the control node requires the ``PciPassthroughFilter`` to be added to the
list of filters and the Nova Compute service(s) on the compute node(s) need
PCI device whitelisting:
.. end

::
Add ``PciPassthroughFilter`` to scheduler_default_filters

[DEFAULT]
scheduler_default_filters = <existing filters>, PciPassthroughFilter
scheduler_available_filters = nova.scheduler.filters.all_filters
The ``PciPassthroughFilter``, which is required by Nova Scheduler service
on the Controller, should be added to ``scheduler_default_filters``

[pci]
passthrough_whitelist = [{"devname": "ens785f0", "physical_network": "sriovtenant1"}]
Modify the ``/etc/kolla/config/nova.conf`` file and add ``PciPassthroughFilter``
to ``scheduler_default_filters``. This filter is required by the Nova Scheduler
service on the controller node.

.. path /etc/kolla/config/nova.conf
.. code-block:: ini

Modify the file ``/etc/kolla/config/neutron/sriov_agent.ini``. Add physical
network to interface mapping. Specific VFs can also be excluded here. Leave
blank to enable all VFs for the interface:
[DEFAULT]
scheduler_default_filters = <existing filters>, PciPassthroughFilter
scheduler_available_filters = nova.scheduler.filters.all_filters

::
.. end

[sriov_nic]
physical_device_mappings = sriovtenant1:ens785f0
exclude_devices =
Edit the ``/etc/kolla/config/nova.conf`` file and add PCI device whitelisting.
This is needed by the OpenStack Compute service(s) on the compute node(s).

.. path /etc/kolla/config/nova.conf
.. code-block:: ini

[pci]
passthrough_whitelist = [{"devname": "ens785f0", "physical_network": "sriovtenant1"}]

.. end
Modify the ``/etc/kolla/config/neutron/sriov_agent.ini`` file. Add physical
network to interface mapping. Specific VFs can also be excluded here. Leaving
blank means to enable all VFs for the interface:

.. path /etc/kolla/config/neutron/sriov_agent.ini
.. code-block:: ini

[sriov_nic]
physical_device_mappings = sriovtenant1:ens785f0
exclude_devices =

.. end

Run deployment.
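For example, with a standard Kolla Ansible setup the deployment step is typically run through the ``kolla-ansible`` wrapper; the inventory path below is a placeholder:

.. code-block:: console

   # kolla-ansible -i /path/to/multinode deploy

.. end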
Verification
------------

Check that VFs were created on the compute node(s). VFs will appear in the
Check that VFs were created on the compute node(s). VFs will appear in the
output of both ``lspci`` and ``ip link show``. For example:

::
.. code-block:: console

lspci | grep net
05:10.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
# lspci | grep net
05:10.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)

::

ip -d link show ens785f0
4: ens785f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP mode DEFAULT qlen 1000
link/ether 90:e2:ba:ba:fb:20 brd ff:ff:ff:ff:ff:ff promiscuity 1
openvswitch_slave addrgenmode eui64
vf 0 MAC 52:54:00:36:57:e0, spoof checking on, link-state auto, trust off
vf 1 MAC 52:54:00:00:62:db, spoof checking on, link-state auto, trust off
vf 2 MAC fa:16:3e:92:cf:12, spoof checking on, link-state auto, trust off
vf 3 MAC fa:16:3e:00:a3:01, vlan 1000, spoof checking on, link-state auto, trust off
# ip -d link show ens785f0
4: ens785f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP mode DEFAULT qlen 1000
link/ether 90:e2:ba:ba:fb:20 brd ff:ff:ff:ff:ff:ff promiscuity 1
openvswitch_slave addrgenmode eui64
vf 0 MAC 52:54:00:36:57:e0, spoof checking on, link-state auto, trust off
vf 1 MAC 52:54:00:00:62:db, spoof checking on, link-state auto, trust off
vf 2 MAC fa:16:3e:92:cf:12, spoof checking on, link-state auto, trust off
vf 3 MAC fa:16:3e:00:a3:01, vlan 1000, spoof checking on, link-state auto, trust off

.. end
Verify the SRIOV Agent container is running on the compute node(s):

::
.. code-block:: console

docker ps --filter name=neutron_sriov_agent
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b03a8f4c0b80 10.10.10.10:4000/registry/centos-source-neutron-sriov-agent:17.04.0 "kolla_start" 18 minutes ago Up 18 minutes neutron_sriov_agent
# docker ps --filter name=neutron_sriov_agent
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b03a8f4c0b80 10.10.10.10:4000/registry/centos-source-neutron-sriov-agent:17.04.0 "kolla_start" 18 minutes ago Up 18 minutes neutron_sriov_agent

.. end

Verify the SRIOV Agent service is present and UP:

::
.. code-block:: console

openstack network agent list
+--------------------------------------+--------------------+-------------+-------------------+-------+-------+---------------------------+
| ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
+--------------------------------------+--------------------+-------------+-------------------+-------+-------+---------------------------+
| 7c06bda9-7b87-487e-a645-cc6c289d9082 | NIC Switch agent | av09-18-wcp | None | :-) | UP | neutron-sriov-nic-agent |
+--------------------------------------+--------------------+-------------+-------------------+-------+-------+---------------------------+
# openstack network agent list

+--------------------------------------+--------------------+-------------+-------------------+-------+-------+---------------------------+
| ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
+--------------------------------------+--------------------+-------------+-------------------+-------+-------+---------------------------+
| 7c06bda9-7b87-487e-a645-cc6c289d9082 | NIC Switch agent | av09-18-wcp | None | :-) | UP | neutron-sriov-nic-agent |
+--------------------------------------+--------------------+-------------+-------------------+-------+-------+---------------------------+

.. end
Create a new provider network. Set ``provider-physical-network`` to the
physical network name that was configured in ``/etc/kolla/config/nova.conf``.
Set ``provider-network-type`` to the desired type. If using VLAN, ensure
``provider-segment`` is set to the correct VLAN ID. Type VLAN is used in this example:
Set ``provider-network-type`` to the desired type. If using VLAN, ensure
``provider-segment`` is set to the correct VLAN ID. This example uses ``VLAN``
network type:

::
.. code-block:: console

openstack network create --project=admin \
--provider-network-type=vlan \
--provider-physical-network=sriovtenant1 \
--provider-segment=1000 \
sriovnet1
# openstack network create --project=admin \
--provider-network-type=vlan \
--provider-physical-network=sriovtenant1 \
--provider-segment=1000 \
sriovnet1
Create a subnet with a DHCP range for the provider network:

::
.. code-block:: console

openstack subnet create --network=sriovnet1 \
--subnet-range=11.0.0.0/24 \
--allocation-pool start=11.0.0.5,end=11.0.0.100 \
sriovnet1_sub1
# openstack subnet create --network=sriovnet1 \
--subnet-range=11.0.0.0/24 \
--allocation-pool start=11.0.0.5,end=11.0.0.100 \
sriovnet1_sub1

Create a port on the provider network with vnic_type set to direct:
.. end

::
Create a port on the provider network with ``vnic_type`` set to ``direct``:

openstack port create --network sriovnet1 --vnic-type=direct sriovnet1-port1
.. code-block:: console

# openstack port create --network sriovnet1 --vnic-type=direct sriovnet1-port1

.. end
Start a new instance with the SRIOV port assigned:

::
.. code-block:: console

openstack server create --flavor flavor1 \
--image fc-26 \
--nic port-id=`openstack port list | grep sriovnet1-port1 | awk '{print $2}'` \
vm1
# openstack server create --flavor flavor1 \
--image fc-26 \
--nic port-id=`openstack port list | grep sriovnet1-port1 | awk '{print $2}'` \
vm1

Verify the instance boots with the SRIOV port. Verify VF assignment by running
Verify the instance boots with the SRIOV port. Verify VF assignment by running
dmesg on the compute node where the instance was placed.

::
.. code-block:: console

dmesg
[ 2896.849970] ixgbe 0000:05:00.0: setting MAC fa:16:3e:00:a3:01 on VF 3
[ 2896.850028] ixgbe 0000:05:00.0: Setting VLAN 1000, QOS 0x0 on VF 3
[ 2897.403367] vfio-pci 0000:05:10.4: enabling device (0000 -> 0002)
# dmesg
[ 2896.849970] ixgbe 0000:05:00.0: setting MAC fa:16:3e:00:a3:01 on VF 3
[ 2896.850028] ixgbe 0000:05:00.0: Setting VLAN 1000, QOS 0x0 on VF 3
[ 2897.403367] vfio-pci 0000:05:10.4: enabling device (0000 -> 0002)

.. end

For more information see `OpenStack SRIOV documentation <https://docs.openstack.org/neutron/pike/admin/config-sriov.html>`_.

Nova SRIOV
==========
~~~~~~~~~~

Preparation and deployment
--------------------------

@ -447,15 +496,18 @@ Compute service on the compute node also require the ``alias`` option under the
``[pci]`` section. The alias can be configured as 'type-VF' to pass VFs or 'type-PF'
to pass the PF. Type-VF is shown in this example:

::
.. path /etc/kolla/config/nova.conf
.. code-block:: ini

[DEFAULT]
scheduler_default_filters = <existing filters>, PciPassthroughFilter
scheduler_available_filters = nova.scheduler.filters.all_filters
[DEFAULT]
scheduler_default_filters = <existing filters>, PciPassthroughFilter
scheduler_available_filters = nova.scheduler.filters.all_filters

[pci]
passthrough_whitelist = [{"vendor_id": "8086", "product_id": "10fb"}]
alias = [{"vendor_id":"8086", "product_id":"10ed", "device_type":"type-VF", "name":"vf1"}]
[pci]
passthrough_whitelist = [{"vendor_id": "8086", "product_id": "10fb"}]
alias = [{"vendor_id":"8086", "product_id":"10ed", "device_type":"type-VF", "name":"vf1"}]

.. end
Run deployment.

@ -465,16 +517,19 @@ Verification

Create (or use an existing) flavor, and then configure it to request one PCI device
from the PCI alias:

::
.. code-block:: console

openstack flavor set sriov-flavor --property "pci_passthrough:alias"="vf1:1"
# openstack flavor set sriov-flavor --property "pci_passthrough:alias"="vf1:1"

.. end
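The command above assumes the ``sriov-flavor`` flavor already exists; if it does not, a basic flavor could be created first. The sizing values here are purely illustrative:

.. code-block:: console

   # openstack flavor create --ram 2048 --vcpus 2 --disk 20 sriov-flavor

.. end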
Start a new instance using the flavor:

::
.. code-block:: console

openstack server create --flavor sriov-flavor --image fc-26 vm2
# openstack server create --flavor sriov-flavor --image fc-26 vm2

.. end

Verify VF devices were created and the instance starts successfully as in
the Neutron SRIOV case.

@ -5,33 +5,35 @@ Nova Fake Driver
================

One common question from OpenStack operators is: "how does the control
plane (e.g., database, messaging queue, nova-scheduler ) scales?". To answer
plane (for example, database, messaging queue, nova-scheduler) scale?". To answer
this question, operators set up Rally to drive workload to the OpenStack cloud.
However, without a large number of nova-compute nodes, it becomes difficult to
exercise the control plane performance.

Given the built-in feature of Docker container, Kolla enables standing up many
of nova-compute nodes with nova fake driver on a single host. For example,
of Compute nodes with nova fake driver on a single host. For example,
we can create 100 nova-compute containers on a real host to simulate the
100-hypervisor workload to the nova-conductor and the messaging queue.
100-hypervisor workload to the ``nova-conductor`` and the messaging queue.

Use nova-fake driver
====================
~~~~~~~~~~~~~~~~~~~~

Nova fake driver can not work with all-in-one deployment. This is because the
fake neutron-openvswitch-agent for the fake nova-compute container conflicts
with neutron-openvswitch-agent on the compute nodes. Therefore, in the
inventory the network node must be different than the compute node.
fake ``neutron-openvswitch-agent`` for the fake ``nova-compute`` container conflicts
with ``neutron-openvswitch-agent`` on the Compute nodes. Therefore, in the
inventory the network node must be different than the Compute node.

By default, Kolla uses libvirt driver on the compute node. To use nova-fake
By default, Kolla uses libvirt driver on the Compute node. To use nova-fake
driver, edit the following parameters in ``/etc/kolla/globals.yml`` or in
the command line options.

::
.. code-block:: yaml

enable_nova_fake: "yes"
num_nova_fake_per_node: 5
enable_nova_fake: "yes"
num_nova_fake_per_node: 5

Each compute node will run 5 nova-compute containers and 5
neutron-plugin-agent containers. When booting instance, there will be no real
instances created. But *nova list* shows the fake instances.
.. end

Each Compute node will run 5 ``nova-compute`` containers and 5
``neutron-plugin-agent`` containers. When booting an instance, there will be no real
instances created. But :command:`nova list` shows the fake instances.
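When the values are supplied on the command line rather than in ``globals.yml``, they are typically forwarded to Ansible as extra variables; a rough sketch, assuming the ``kolla-ansible`` wrapper's ``-e`` passthrough and a placeholder inventory path:

.. code-block:: console

   # kolla-ansible -i /path/to/multinode -e enable_nova_fake=yes -e num_nova_fake_per_node=5 deploy

.. end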
@ -5,7 +5,8 @@ OSprofiler in Kolla
===================

Overview
========
~~~~~~~~

OSProfiler provides a tiny but powerful library that is used by most
(soon to be all) OpenStack projects and their corresponding python clients
as well as the OpenStack client.

@ -17,12 +18,14 @@ to build a tree of calls which can be quite handy for a variety of reasons

Configuration on Kolla deployment
---------------------------------

Enable OSprofiler in ``/etc/kolla/globals.yml``
Enable ``OSprofiler`` in the ``/etc/kolla/globals.yml`` file:

.. code-block:: console
.. code-block:: yaml

enable_osprofiler: "yes"
enable_elasticsearch: "yes"
enable_osprofiler: "yes"
enable_elasticsearch: "yes"

.. end

Verify operation
----------------

@ -32,25 +35,24 @@ Retrieve ``osprofiler_secret`` key present at ``/etc/kolla/passwords.yml``.
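For instance, the secret can be read directly from the passwords file on the deployment host (a simple sketch; any YAML-aware tool works equally well):

.. code-block:: console

   # grep osprofiler_secret /etc/kolla/passwords.yml

.. end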
Profiler UUIDs can be created by executing OpenStack clients (Nova, Glance,
Cinder, Heat, Keystone) with the ``--profile`` option or using the official
OpenStack client with ``--os-profile``. For example, to get the OSprofiler trace
UUID for ``openstack server create``.
UUID for :command:`openstack server create` command.

.. code-block:: console

$ openstack --os-profile <OSPROFILER_SECRET> \
server create \
--image cirros \
--flavor m1.tiny \
--key-name mykey \
--nic net-id=${NETWORK_ID} \
demo
$ openstack --os-profile <OSPROFILER_SECRET> server create \
--image cirros --flavor m1.tiny --key-name mykey \
--nic net-id=${NETWORK_ID} demo

.. end

The previous command will output the command to retrieve the OSprofiler trace.

.. code-block:: console

$ osprofiler trace show --html <TRACE_ID> --connection-string \
elasticsearch://<api_interface_address>:9200
$ osprofiler trace show --html <TRACE_ID> --connection-string \
elasticsearch://<api_interface_address>:9200

.. end

For more information about how OSprofiler works, see
`OSProfiler – Cross-project profiling library