Upgrade the rst convention of the Reference Guide [3]

We upgrade the rst convention by following Documentation Contributor
Guide[1].

[1] https://docs.openstack.org/doc-contrib-guide

Change-Id: Id480cd24f5eed810e81af0f12e84a4a6db49247d
Partially-Implements: blueprint optimize-the-documentation-format


@ -5,7 +5,7 @@ Manila in Kolla
===============
Overview
~~~~~~~~
Currently, Kolla can deploy the following manila services:
* manila-api
@ -19,7 +19,7 @@ management of share types as well as share snapshots if a driver supports
them.
Important
~~~~~~~~~
For simplicity, this guide describes configuring the Shared File Systems
service to use the ``generic`` back end with the driver handles share
@ -32,7 +32,7 @@ Before you proceed, ensure that Compute, Networking and Block storage
services are properly working.
Preparation and Deployment
~~~~~~~~~~~~~~~~~~~~~~~~~~
Cinder and Ceph are required; enable them in ``/etc/kolla/globals.yml``:
@ -41,6 +41,8 @@ Cinder and Ceph are required, enable it in ``/etc/kolla/globals.yml``:
enable_cinder: "yes"
enable_ceph: "yes"
.. end
Enable Manila and generic back end in ``/etc/kolla/globals.yml``:
.. code-block:: console
@ -48,6 +50,8 @@ Enable Manila and generic back end in ``/etc/kolla/globals.yml``:
enable_manila: "yes"
enable_manila_backend_generic: "yes"
.. end
By default Manila uses instance flavor id 100 for its file systems. For Manila
to work, either create a new nova flavor with id 100 (use *nova flavor-create*)
or change *service_instance_flavor_id* to use one of the default nova flavor
@ -62,8 +66,10 @@ contents:
[generic]
service_instance_flavor_id = 2
.. end
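If you choose to create the flavor instead, a minimal sketch follows; the
flavor name and specs simply mirror the :command:`nova flavor-create` example
used later in this guide:
.. code-block:: console
# nova flavor-create manila-service-flavor 100 128 0 1
.. end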
Verify Operation
~~~~~~~~~~~~~~~~
Verify operation of the Shared File Systems service. List service components
to verify successful launch of each process:
@ -71,6 +77,7 @@ to verify successful launch of each process:
.. code-block:: console
# manila service-list
+------------------+----------------+------+---------+-------+----------------------------+-----------------+
| Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+------------------+----------------+------+---------+-------+----------------------------+-----------------+
@ -78,8 +85,10 @@ to verify successful launch of each process:
| manila-share | share1@generic | nova | enabled | up | 2014-10-18T01:30:57.000000 | None |
+------------------+----------------+------+---------+-------+----------------------------+-----------------+
.. end
Launch an Instance
~~~~~~~~~~~~~~~~~~
Before being able to create a share, manila with the generic driver and the
DHSS mode enabled requires the definition of at least an image, a network and a
@ -88,19 +97,22 @@ configuration, the share server is an instance where NFS/CIFS shares are
served.
Determine the configuration of the share server
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Create a default share type before running the manila-share service:
.. code-block:: console
# manila type-create default_share_type True
+--------------------------------------+--------------------+------------+------------+-------------------------------------+-------------------------+
| ID | Name | Visibility | is_default | required_extra_specs | optional_extra_specs |
+--------------------------------------+--------------------+------------+------------+-------------------------------------+-------------------------+
| 8a35da28-0f74-490d-afff-23664ecd4f01 | default_share_type | public | - | driver_handles_share_servers : True | snapshot_support : True |
+--------------------------------------+--------------------+------------+------------+-------------------------------------+-------------------------+
.. end
Upload a manila share server image to the Image service:
.. code-block:: console
@ -110,6 +122,7 @@ Create a manila share server image to the Image service:
--file manila-service-image-master.qcow2 \
--disk-format qcow2 --container-format bare \
--visibility public --progress
[=============================>] 100%
+------------------+--------------------------------------+
| Property | Value |
@ -132,6 +145,8 @@ Create a manila share server image to the Image service:
| visibility | public |
+------------------+--------------------------------------+
.. end
List available networks to get the ID and subnets of the private network:
.. code-block:: console
@ -143,6 +158,8 @@ List available networks to get id and subnets of the private network:
| 7c6f9b37-76b4-463e-98d8-27e5686ed083 | private | 3482f524-8bff-4871-80d4-5774c2730728 172.16.1.0/24 |
+--------------------------------------+---------+----------------------------------------------------+
.. end
Create a share network:
.. code-block:: console
@ -150,6 +167,7 @@ Create a shared network
# manila share-network-create --name demo-share-network1 \
--neutron-net-id PRIVATE_NETWORK_ID \
--neutron-subnet-id PRIVATE_NETWORK_SUBNET_ID
+-------------------+--------------------------------------+
| Property | Value |
+-------------------+--------------------------------------+
@ -168,6 +186,8 @@ Create a shared network
| description | None |
+-------------------+--------------------------------------+
.. end
Create a flavor (**required** if you did not define *manila_instance_flavor_id*
in the ``/etc/kolla/config/manila-share.conf`` file):
@ -175,14 +195,17 @@ Create a flavor (**Required** if you not defined *manila_instance_flavor_id* in
# nova flavor-create manila-service-flavor 100 128 0 1
.. end
Create a share
~~~~~~~~~~~~~~
Create an NFS share using the share network:
.. code-block:: console
# manila create NFS 1 --name demo-share1 --share-network demo-share-network1
+-----------------------------+--------------------------------------+
| Property | Value |
+-----------------------------+--------------------------------------+
@ -210,18 +233,23 @@ Create a NFS share using the share network:
| metadata | {} |
+-----------------------------+--------------------------------------+
.. end
After some time, the share status should change from ``creating``
to ``available``:
.. code-block:: console
# manila list
+--------------------------------------+-------------+------+-------------+-----------+-----------+--------------------------------------+-----------------------------+-------------------+
| ID | Name | Size | Share Proto | Status | Is Public | Share Type Name | Host | Availability Zone |
+--------------------------------------+-------------+------+-------------+-----------+-----------+--------------------------------------+-----------------------------+-------------------+
| e1e06b14-ba17-48d4-9e0b-ca4d59823166 | demo-share1 | 1 | NFS | available | False | default_share_type | share1@generic#GENERIC | nova |
+--------------------------------------+-------------+------+-------------+-----------+-----------+--------------------------------------+-----------------------------+-------------------+
.. end
Configure user access to the new share before attempting to mount it via the
network:
@ -229,14 +257,17 @@ network:
# manila access-allow demo-share1 ip INSTANCE_PRIVATE_NETWORK_IP
.. end
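Optionally, confirm the access rule was applied with the :command:`manila access-list` command (a quick sketch using the share created above):
.. code-block:: console
# manila access-list demo-share1
.. end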
Mount the share from an instance
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Get the export location of the share:
.. code-block:: console
# manila show demo-share1
+-----------------------------+----------------------------------------------------------------------+
| Property | Value |
+-----------------------------+----------------------------------------------------------------------+
@ -272,6 +303,7 @@ Get export location from share
| metadata | {} |
+-----------------------------+----------------------------------------------------------------------+
.. end
Create a folder where the mount will be placed:
@ -279,14 +311,18 @@ Create a folder where the mount will be placed:
# mkdir ~/test_folder
.. end
Mount the NFS share in the instance using the export location of the share:
.. code-block:: console
# mount -v 10.254.0.3:/shares/share-422dc546-8f37-472b-ac3c-d23fe410d1b6 ~/test_folder
.. end
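To confirm the share is mounted, a simple check such as the following sketch
can be used:
.. code-block:: console
# df -h ~/test_folder
.. end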
Share Migration
~~~~~~~~~~~~~~~
As the administrator, you can migrate a share with its data from one location to
another in a manner that is transparent to users and workloads. You can use
@ -297,11 +333,13 @@ provider network for ``data_node_access_ip``.
Modify the file ``/etc/kolla/config/manila.conf`` and add the contents:
.. path /etc/kolla/config/manila.conf
.. code-block:: ini
[DEFAULT]
data_node_access_ip = 10.10.10.199
.. end
.. note::
@ -312,11 +350,13 @@ Use the manila migration command, as shown in the following example:
.. code-block:: console
# manila migration-start --preserve-metadata True|False \
--writable True|False --force_host_assisted_migration True|False \
--new_share_type share_type --new_share_network share_network \
shareID destinationHost
.. end
- ``--force-host-copy``: Forces the generic host-based migration mechanism and
bypasses any driver optimizations.
- ``destinationHost``: Is in this format ``host#pool`` which includes
@ -328,11 +368,12 @@ Use the manila migration command, as shown in the following example:
Checking share migration progress
---------------------------------
Use the :command:`manila migration-get-progress shareID` command to check progress.
.. code-block:: console
# manila migration-get-progress demo-share1
+----------------+-----------------------+
| Property | Value |
+----------------+-----------------------+
@ -340,7 +381,7 @@ Use the ``manila migration-get-progress shareID`` command to check progress.
| total_progress | 0 |
+----------------+-----------------------+
# manila migration-get-progress demo-share1
+----------------+-------------------------+
| Property | Value |
+----------------+-------------------------+
@ -348,8 +389,10 @@ Use the ``manila migration-get-progress shareID`` command to check progress.
| total_progress | 100 |
+----------------+-------------------------+
.. end
Use the :command:`manila migration-complete shareID` command to complete the
share migration process.
For more information about how to manage shares, see the
`Manage shares
<https://docs.openstack.org/manila/latest/user/create-and-manage-shares.html>`__.


@ -5,7 +5,7 @@ Hitachi NAS Platform File Services Driver for OpenStack
========================================================
Overview
~~~~~~~~
The Hitachi NAS Platform File Services Driver for OpenStack
provides NFS Shared File Systems to OpenStack.
@ -54,7 +54,7 @@ The following operations are supported:
Preparation and Deployment
~~~~~~~~~~~~~~~~~~~~~~~~~~
.. note::
@ -75,11 +75,13 @@ Configuration on Kolla deployment
Enable Shared File Systems service and HNAS driver in
``/etc/kolla/globals.yml``:
.. code-block:: yaml
enable_manila: "yes"
enable_manila_backend_hnas: "yes"
.. end
Configure the OpenStack networking so it can reach the HNAS Management
interface and the HNAS EVS Data interface.
@ -88,23 +90,24 @@ ports eth1 and eth2 associated respectively:
In ``/etc/kolla/globals.yml`` set:
.. code-block:: console
.. code-block:: yaml
neutron_bridge_name: "br-ex,br-ex2"
neutron_external_interface: "eth1,eth2"
.. end
.. note::
``eth1`` is used as the Neutron external interface and ``eth2`` is
used as the HNAS EVS data interface.
HNAS back end configuration
---------------------------
In ``/etc/kolla/globals.yml`` uncomment and set:
.. code-block:: yaml
hnas_ip: "172.24.44.15"
hnas_user: "supervisor"
@ -113,7 +116,6 @@ In ``/etc/kolla/globals.yml`` uncomment and set:
hnas_evs_ip: "10.0.1.20"
hnas_file_system_name: "FS-Manila"
Configuration on HNAS
---------------------
@ -125,6 +127,8 @@ List the available tenants:
$ openstack project list
.. end
Create a network for the given tenant (service), providing the tenant ID,
a name for the network, the name of the physical network over which the
virtual network is implemented, and the type of the physical mechanism by
@ -135,12 +139,16 @@ which the virtual network is implemented:
$ neutron net-create --tenant-id <SERVICE_ID> hnas_network \
--provider:physical_network=physnet2 --provider:network_type=flat
.. end
*Optional* - List available networks:
.. code-block:: console
$ neutron net-list
.. end
Create a subnet for the same tenant (service), providing the gateway IP of this subnet,
a name for the subnet, the network ID created before, and the CIDR of
subnet:
@ -150,12 +158,16 @@ subnet:
$ neutron subnet-create --tenant-id <SERVICE_ID> --gateway <GATEWAY> \
--name hnas_subnet <NETWORK_ID> <SUBNET_CIDR>
.. end
*Optional* - List available subnets:
.. code-block:: console
$ neutron subnet-list
.. end
Add the subnet interface to a router, providing the router ID and subnet
ID created before:
@ -163,6 +175,8 @@ ID created before:
$ neutron router-interface-add <ROUTER_ID> <SUBNET_ID>
.. end
Create a file system on HNAS. See the `Hitachi HNAS reference <http://www.hds.com/assets/pdf/hus-file-module-file-services-administration-guide.pdf>`_.
.. important::
@ -179,6 +193,8 @@ Create a route in HNAS to the tenant network:
$ console-context --evs <EVS_ID_IN_USE> route-net-add --gateway <FLAT_NETWORK_GATEWAY> \
<TENANT_PRIVATE_NETWORK>
.. end
.. important::
Make sure multi-tenancy is enabled and routes are configured per EVS.
@ -188,9 +204,10 @@ Create a route in HNAS to the tenant network:
$ console-context --evs 3 route-net-add --gateway 192.168.1.1 \
10.0.0.0/24
.. end
Create a share
~~~~~~~~~~~~~~
Create a default share type before running the manila-share service:
@ -204,16 +221,20 @@ Create a default share type before running manila-share service:
| 3e54c8a2-1e50-455e-89a0-96bb52876c35 | default_share_hitachi | public | - | driver_handles_share_servers : False | snapshot_support : True |
+--------------------------------------+-----------------------+------------+------------+--------------------------------------+-------------------------+
.. end
Create an NFS share using the HNAS back end:
.. code-block:: console
$ manila create NFS 1 \
--name mysharehnas \
--description "My Manila share" \
--share-type default_share_hitachi
.. end
Verify Operation:
.. code-block:: console
@ -225,6 +246,8 @@ Verify Operation
| 721c0a6d-eea6-41af-8c10-72cd98985203 | mysharehnas | 1 | NFS | available | False | default_share_hitachi | control@hnas1#HNAS1 | nova |
+--------------------------------------+----------------+------+-------------+-----------+-----------+-----------------------+-------------------------+-------------------+
.. end
.. code-block:: console
$ manila show mysharehnas
@ -265,10 +288,12 @@ Verify Operation
| metadata | {} |
+-----------------------------+-----------------------------------------------------------------+
.. end
.. _hnas_configure_multiple_back_ends:
Configure multiple back ends
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
An administrator can configure an instance of Manila to provision shares from
one or more back ends. Each back end leverages an instance of a vendor-specific
@ -283,14 +308,18 @@ the default share backends before deployment.
Modify the file ``/etc/kolla/config/manila.conf`` and add the contents:
.. path /etc/kolla/config/manila.conf
.. code-block:: ini
[DEFAULT]
enabled_share_backends = generic,hnas1,hnas2
.. end
Modify the file ``/etc/kolla/config/manila-share.conf`` and add the contents:
.. path /etc/kolla/config/manila-share.conf
.. code-block:: ini
[generic]
share_driver = manila.share.drivers.generic.GenericShareDriver
@ -323,6 +352,8 @@ Modify the file ``/etc/kolla/config/manila-share.conf`` and add the contents:
hitachi_hnas_evs_ip = <evs_ip>
hitachi_hnas_file_system_name = FS-Manila2
.. end
For more information about how to manage shares, see the
`Manage shares
<https://docs.openstack.org/manila/latest/user/create-and-manage-shares.html>`__.


@ -1,8 +1,16 @@
.. _networking-guide:
===================
Networking in Kolla
===================
Kolla deploys Neutron by default as the OpenStack networking component. This section
describes configuring and running Neutron extensions like LBaaS, Networking-SFC,
QoS, and so on.
Enabling Provider Networks
==========================
Provider networks allow connecting compute instances directly to physical
networks, avoiding tunnels. This is necessary, for example, for some
performance-critical applications. Only administrators of OpenStack can create
such
@ -12,55 +20,53 @@ DVR mode networking. Normal tenant non-DVR networking does not need external
bridge on compute hosts and therefore operators don't need an additional
dedicated network interface.
To enable provider networks, modify the ``/etc/kolla/globals.yml`` file
as the following example shows:
.. code-block:: yaml
enable_neutron_provider_networks: "yes"
.. end
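As an illustration, an administrator could then create a provider network as
in the following sketch; the ``physnet1`` physical network name and the
network name are assumptions for this example:
.. code-block:: console
# openstack network create --external \
--provider-network-type flat \
--provider-physical-network physnet1 provider-net
.. end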
Enabling Neutron Extensions
===========================
Networking-SFC
~~~~~~~~~~~~~~
Preparation and deployment
--------------------------
Modify the ``/etc/kolla/globals.yml`` file as the following example shows:
.. code-block:: yaml
enable_neutron_sfc: "yes"
.. end
Verification
------------
For setting up a testbed environment and creating a port chain, please refer
to `networking-sfc documentation
<https://docs.openstack.org/networking-sfc/latest/contributor/system_design_and_workflow.html>`__.
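As a quick sanity check, a sketch of listing the configured port chains,
assuming the networking-sfc CLI extension is installed:
.. code-block:: console
# neutron port-chain-list
.. end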
Neutron VPNaaS (VPN-as-a-Service)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Preparation and deployment
--------------------------
Modify the ``/etc/kolla/globals.yml`` file as the following example shows:
.. code-block:: yaml
enable_neutron_vpnaas: "yes"
.. end
Verification
------------
@ -70,30 +76,31 @@ simple smoke test to verify the service is up and running.
On the network node(s), the ``neutron_vpnaas_agent`` should be up (image naming
and versioning may differ depending on deploy configuration):
.. code-block:: console
# docker ps --filter name=neutron_vpnaas_agent
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
97d25657d55e operator:5000/kolla/oraclelinux-source-neutron-vpnaas-agent:4.0.0 "kolla_start" 44 minutes ago Up 44 minutes neutron_vpnaas_agent
.. end
Kolla-Ansible includes a small script that can be used in tandem with
``tools/init-runonce`` to verify the VPN using two routers and two Nova VMs:
.. code-block:: console
tools/init-runonce
tools/init-vpn
.. end
Verify both VPN services are active:
.. code-block:: console
# neutron vpn-service-list
+--------------------------------------+----------+--------------------------------------+--------+
| id | name | router_id | status |
+--------------------------------------+----------+--------------------------------------+--------+
@ -101,28 +108,29 @@ Verify both VPN services are active:
| edce15db-696f-46d8-9bad-03d087f1f682 | vpn_east | 058842e0-1d01-4230-af8d-0ba6d0da8b1f | ACTIVE |
+--------------------------------------+----------+--------------------------------------+--------+
.. end
Two VMs can now be booted, one on vpn_east, the other on vpn_west, and
encrypted ping packets observed being sent from one to the other.
For more information on this and VPNaaS in Neutron refer to the
`Neutron VPNaaS Testing <https://docs.openstack.org/neutron-vpnaas/latest/contributor/index.html#testing>`__
and the `OpenStack wiki <https://wiki.openstack.org/wiki/Neutron/VPNaaS>`_.
Networking-ODL
~~~~~~~~~~~~~~
Preparation and deployment
--------------------------
Modify the ``/etc/kolla/globals.yml`` file as the following example shows:
.. code-block:: yaml
enable_opendaylight: "yes"
.. end
Networking-ODL is an additional Neutron plugin that allows the OpenDaylight
SDN Controller to utilize its networking virtualization features.
For OpenDaylight to work, the Networking-ODL plugin has to be installed in
@ -130,8 +138,9 @@ the ``neutron-server`` container. In this case, one could use the
neutron-server-opendaylight container and the opendaylight container by
pulling from Kolla dockerhub or by building them locally.
OpenDaylight ``globals.yml`` configurable options with their defaults include:
.. code-block:: yaml
opendaylight_release: "0.6.1-Carbon"
opendaylight_mechanism_driver: "opendaylight_v2"
@ -144,11 +153,17 @@ OpenDaylight globals.yml configurable options with their defaults include:
opendaylight_features: "odl-mdsal-apidocs,odl-netvirt-openstack"
opendaylight_allowed_network_types: '"flat", "vlan", "vxlan"'
.. end
Clustered OpenDaylight Deploy
-----------------------------
High availability clustered OpenDaylight requires modifying the inventory file
and placing three or more hosts in the OpenDaylight or Networking groups.
.. note::
The OpenDaylight role supports deploying either one host or three or more
hosts for the OpenDaylight/Networking role.
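As an illustrative sketch only, assuming the stock ``multinode`` inventory
layout (where the ``opendaylight`` group maps onto the network hosts) and
hypothetical hostnames, the relevant part of the inventory could look like:
.. code-block:: ini
[opendaylight:children]
network

[network]
network01
network02
network03
.. end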
Verification
@ -159,12 +174,10 @@ deployment will bring up an Opendaylight container in the list of running
containers on network/opendaylight node.
For the source code, please refer to the following link:
https://github.com/openstack/networking-odl
OVS with DPDK
~~~~~~~~~~~~~
Introduction
------------
@ -205,30 +218,33 @@ it is advised to allocate them via the kernel command line instead to prevent
memory fragmentation. This can be achieved by adding the following to the grub
config and regenerating your grub file.
.. code-block:: none
default_hugepagesz=2M hugepagesz=2M hugepages=25000
.. end
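After a reboot, the hugepage allocation can be verified with a quick check
such as the following sketch:
.. code-block:: console
# grep Huge /proc/meminfo
.. end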
As dpdk is a userspace networking library, it requires userspace-compatible
drivers to be able to control the physical interfaces on the platform.
DPDK technically supports 3 kernel drivers: ``igb_uio``, ``uio_pci_generic``,
and ``vfio_pci``.
While it is technically possible to use all 3, only ``uio_pci_generic`` and
``vfio_pci`` are recommended for use with kolla. ``igb_uio`` is BSD licensed
and distributed as part of the dpdk library. While it has some advantages over
``uio_pci_generic``, loading the ``igb_uio`` module will taint the kernel and
possibly invalidate distro support. To successfully deploy ``ovs-dpdk``, the
``vfio_pci`` or ``uio_pci_generic`` kernel module must be present on the
platform. Most distros include ``vfio_pci`` or ``uio_pci_generic`` as part of
the default kernel, though on some distros you may need to install
``kernel-modules-extra`` or the distro equivalent prior to running
:command:`kolla-ansible deploy`.
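For instance, a sketch of loading and verifying one of the recommended
modules (``uio_pci_generic``) on a typical distro:
.. code-block:: console
# modprobe uio_pci_generic
# lsmod | grep uio_pci_generic
.. end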
Installation
------------
To enable ovs-dpdk, add the following configuration to the
``/etc/kolla/globals.yml`` file:
.. code-block:: yaml
ovs_datapath: "netdev"
enable_ovs_dpdk: yes
@ -236,6 +252,8 @@ To enable ovs-dpdk add the following to /etc/kolla/globals.yml
tunnel_interface: "dpdk_bridge"
neutron_bridge_name: "dpdk_bridge"
.. end
Unlike standard Open vSwitch deployments, the interface specified by
``neutron_external_interface`` should have an IP address assigned.
The IP address assigned to ``neutron_external_interface`` will be moved to
@ -272,9 +290,8 @@ prior to upgrading.
On Ubuntu, Network Manager is required for tunnel networking.
This requirement will be removed in the future.
Neutron SRIOV
~~~~~~~~~~~~~
Preparation and deployment
--------------------------
@ -283,17 +300,20 @@ SRIOV requires specific NIC and BIOS configuration and is not supported on all
platforms. Consult NIC and platform specific documentation for instructions
on enablement.
Modify the ``/etc/kolla/globals.yml`` file as the following example shows:
.. code-block:: yaml
enable_neutron_sriov: "yes"
.. end
Modify the ``/etc/kolla/config/neutron/ml2_conf.ini`` file and add
``sriovnicswitch`` to the ``mechanism_drivers``. The provider networks used by
SRIOV should also be configured. Both flat and VLAN are configured with the
same physical network name in this example:
.. path /etc/kolla/config/neutron/ml2_conf.ini
.. code-block:: ini
[ml2]
mechanism_drivers = openvswitch,l2population,sriovnicswitch
@ -304,32 +324,50 @@ flat and VLAN are configured with the same physical network name in this example
[ml2_type_flat]
flat_networks = sriovtenant1
.. end
Modify the ``/etc/kolla/config/nova.conf`` file and add ``PciPassthroughFilter``
to ``scheduler_default_filters``. This filter is required by the Nova Scheduler
service on the controller node.
.. path /etc/kolla/config/nova.conf
.. code-block:: ini
[DEFAULT]
scheduler_default_filters = <existing filters>, PciPassthroughFilter
scheduler_available_filters = nova.scheduler.filters.all_filters
.. end
Edit the ``/etc/kolla/config/nova.conf`` file and add PCI device whitelisting;
this is needed by the OpenStack Compute service(s) on the compute node(s).
.. path /etc/kolla/config/nova.conf
.. code-block:: ini
[pci]
passthrough_whitelist = [{"devname": "ens785f0", "physical_network": "sriovtenant1"}]
.. end
Modify the ``/etc/kolla/config/neutron/sriov_agent.ini`` file. Add physical
network to interface mapping. Specific VFs can also be excluded here. Leaving
it blank enables all VFs for the interface:
.. path /etc/kolla/config/neutron/sriov_agent.ini
.. code-block:: ini
[sriov_nic]
physical_device_mappings = sriovtenant1:ens785f0
exclude_devices =
.. end
Run deployment.
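For example, a minimal sketch (the inventory path is illustrative):
.. code-block:: console
# kolla-ansible -i <inventory> deploy
.. end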
Verification
@ -338,14 +376,13 @@ Verification
Check that VFs were created on the compute node(s). VFs will appear in the
output of both ``lspci`` and ``ip link show``. For example:
.. code-block:: console
# lspci | grep net
05:10.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
# ip -d link show ens785f0
4: ens785f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP mode DEFAULT qlen 1000
link/ether 90:e2:ba:ba:fb:20 brd ff:ff:ff:ff:ff:ff promiscuity 1
openvswitch_slave addrgenmode eui64
@ -354,35 +391,42 @@ output of both ``lspci`` and ``ip link show``. For example:
vf 2 MAC fa:16:3e:92:cf:12, spoof checking on, link-state auto, trust off
vf 3 MAC fa:16:3e:00:a3:01, vlan 1000, spoof checking on, link-state auto, trust off
.. end
Verify the SRIOV Agent container is running on the compute node(s):
.. code-block:: console
# docker ps --filter name=neutron_sriov_agent
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b03a8f4c0b80 10.10.10.10:4000/registry/centos-source-neutron-sriov-agent:17.04.0 "kolla_start" 18 minutes ago Up 18 minutes neutron_sriov_agent
.. end
Verify the SRIOV Agent service is present and UP:
.. code-block:: console
# openstack network agent list
+--------------------------------------+--------------------+-------------+-------------------+-------+-------+---------------------------+
| ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
+--------------------------------------+--------------------+-------------+-------------------+-------+-------+---------------------------+
| 7c06bda9-7b87-487e-a645-cc6c289d9082 | NIC Switch agent | av09-18-wcp | None | :-) | UP | neutron-sriov-nic-agent |
+--------------------------------------+--------------------+-------------+-------------------+-------+-------+---------------------------+
.. end
Create a new provider network. Set ``provider-physical-network`` to the
physical network name that was configured in ``/etc/kolla/config/nova.conf``.
Set ``provider-network-type`` to the desired type. If using VLAN, ensure
``provider-segment`` is set to the correct VLAN ID. This example uses the
VLAN network type:
.. code-block:: console
# openstack network create --project=admin \
--provider-network-type=vlan \
--provider-physical-network=sriovtenant1 \
--provider-segment=1000 \
@ -390,24 +434,28 @@ Set ``provider-network-type`` to the desired type. If using VLAN, ensure
Create a subnet with a DHCP range for the provider network:
.. code-block:: console
# openstack subnet create --network=sriovnet1 \
--subnet-range=11.0.0.0/24 \
--allocation-pool start=11.0.0.5,end=11.0.0.100 \
sriovnet1_sub1
.. end
Create a port on the provider network with ``vnic_type`` set to ``direct``:
.. code-block:: console
# openstack port create --network sriovnet1 --vnic-type=direct sriovnet1-port1
.. end
Start a new instance with the SRIOV port assigned:
.. code-block:: console
# openstack server create --flavor flavor1 \
--image fc-26 \
--nic port-id=`openstack port list | grep sriovnet1-port1 | awk '{print $2}'` \
vm1
@ -415,18 +463,19 @@ Start a new instance with the SRIOV port assigned:
Verify the instance boots with the SRIOV port. Verify VF assignment by running
``dmesg`` on the compute node where the instance was placed.
.. code-block:: console
# dmesg
[ 2896.849970] ixgbe 0000:05:00.0: setting MAC fa:16:3e:00:a3:01 on VF 3
[ 2896.850028] ixgbe 0000:05:00.0: Setting VLAN 1000, QOS 0x0 on VF 3
[ 2897.403367] vfio-pci 0000:05:10.4: enabling device (0000 -> 0002)
.. end
For more information see `OpenStack SRIOV documentation <https://docs.openstack.org/neutron/pike/admin/config-sriov.html>`_.
Nova SRIOV
~~~~~~~~~~
Preparation and deployment
--------------------------
Compute service on the compute node also requires the ``alias`` option under
the ``[pci]`` section. The alias can be configured as ``type-VF`` to pass VFs
or ``type-PF`` to pass the PF. Type-VF is shown in this example:
.. path /etc/kolla/config/nova.conf
.. code-block:: ini
[DEFAULT]
scheduler_default_filters = <existing filters>, PciPassthroughFilter
@ -457,6 +507,8 @@ to pass the PF. Type-VF is shown in this example:
passthrough_whitelist = [{"vendor_id": "8086", "product_id": "10fb"}]
alias = [{"vendor_id":"8086", "product_id":"10ed", "device_type":"type-VF", "name":"vf1"}]
.. end
Run deployment.
Verification
@ -465,16 +517,19 @@ Verification
Create (or use an existing) flavor, and then configure it to request one PCI device
from the PCI alias:
.. code-block:: console
# openstack flavor set sriov-flavor --property "pci_passthrough:alias"="vf1:1"
.. end
Start a new instance using the flavor:
.. code-block:: console
# openstack server create --flavor sriov-flavor --image fc-26 vm2
.. end
Verify VF devices were created and the instance starts successfully as in
the Neutron SRIOV case.


@ -5,33 +5,35 @@ Nova Fake Driver
================
One common question from OpenStack operators is: "How does the control
plane (for example, database, messaging queue, nova-scheduler) scale?" To answer
this question, operators set up Rally to drive workload to the OpenStack cloud.
However, without a large number of nova-compute nodes, it becomes difficult to
exercise the control performance.
Given the built-in features of Docker containers, Kolla enables standing up many
Compute nodes with nova fake driver on a single host. For example,
we can create 100 nova-compute containers on a real host to simulate a
100-hypervisor workload to the ``nova-conductor`` and the messaging queue.
Use nova-fake driver
~~~~~~~~~~~~~~~~~~~~
Nova fake driver cannot work with an all-in-one deployment. This is because the
fake ``neutron-openvswitch-agent`` for the fake ``nova-compute`` container conflicts
with ``neutron-openvswitch-agent`` on the Compute nodes. Therefore, in the
inventory the network node must be different than the Compute node.
By default, Kolla uses the libvirt driver on the Compute node. To use nova-fake
driver, edit the following parameters in ``/etc/kolla/globals.yml`` or in
the command line options.
.. code-block:: yaml
enable_nova_fake: "yes"
num_nova_fake_per_node: 5
.. end
Each Compute node will run 5 ``nova-compute`` containers and 5
``neutron-plugin-agent`` containers. When booting an instance, no real
instances are created, but :command:`nova list` shows the fake instances.
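As a quick sketch of verifying the result (container naming may differ
depending on deploy configuration):
.. code-block:: console
# docker ps --filter name=nova_compute
.. end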


@ -5,7 +5,8 @@ OSprofiler in Kolla
===================
Overview
~~~~~~~~
OSProfiler provides a tiny but powerful library that is used by most
(soon to be all) OpenStack projects and their corresponding Python clients
as well as the OpenStack client.
@ -17,13 +18,15 @@ to build a tree of calls which can be quite handy for a variety of reasons
Configuration on Kolla deployment
---------------------------------
Enable OSprofiler in the ``/etc/kolla/globals.yml`` file:
.. code-block:: yaml
enable_osprofiler: "yes"
enable_elasticsearch: "yes"
.. end
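Applying this change to an existing deployment is typically done with a
reconfigure; a minimal sketch (the inventory path is illustrative):
.. code-block:: console
# kolla-ansible -i <inventory> reconfigure
.. end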
Verify operation
----------------
@ -32,18 +35,15 @@ Retrieve ``osprofiler_secret`` key present at ``/etc/kolla/passwords.yml``.
Profiler UUIDs can be created executing OpenStack clients (Nova, Glance,
Cinder, Heat, Keystone) with ``--profile`` option or using the official
OpenStack client with ``--os-profile``. For example, to get the OSprofiler trace
UUID for the :command:`openstack server create` command:
.. code-block:: console
$ openstack --os-profile <OSPROFILER_SECRET> server create \
--image cirros --flavor m1.tiny --key-name mykey \
--nic net-id=${NETWORK_ID} demo
.. end
The previous command will output the command to retrieve the OSprofiler trace.
@ -52,6 +52,8 @@ The previous command will output the command to retrieve OSprofiler trace.
$ osprofiler trace show --html <TRACE_ID> --connection-string \
elasticsearch://<api_interface_address>:9200
.. end
For more information about how OSprofiler works, see
`OSProfiler Cross-project profiling library
<https://docs.openstack.org/osprofiler/latest/>`__.