Add job to test plans against the template

* template slightly reworked for a cleaner layout
* tests written
* test environment py27 created

Change-Id: I9ad9483609ff63e44d46900004a3266620dc0078
Author: Dina Belova 2016-01-15 20:08:09 +03:00
parent b1932a9245
commit c72c57f040
11 changed files with 523 additions and 271 deletions

.testr.conf (new file)

@@ -0,0 +1,4 @@
[DEFAULT]
test_command=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_TEST_TIMEOUT=60 ${PYTHON:-python} -m subunit.run discover -t ./ . $LISTOPT $IDOPTION
test_id_option=--load-list $IDFILE
test_list_option=--list
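For orientation, the discovery half of this ``test_command`` can be approximated with stdlib ``unittest`` alone. This is only a sketch (no subunit result stream, no ``OS_*`` capture variables) to sanity-check that the ``tests`` package is discoverable:

.. code-block:: python

    # Rough stand-in for `python -m subunit.run discover -t ./ .`:
    # plain unittest discovery from the repository root.
    import unittest

    suite = unittest.defaultTestLoader.discover(start_dir=".", top_level_dir="./")
    print("discovered %d test(s)" % suite.countTestCases())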


@@ -9,5 +9,5 @@ Test Plans
 .. toctree::
    :maxdepth: 2

-   mq/index
-   provisioning/main
+   mq/plan
+   provisioning/plan


@@ -1,8 +0,0 @@
Message Queue Test Plan
=======================

.. toctree::
   :maxdepth: 2

   setup
   test_cases


@@ -1,5 +1,20 @@
-Test Setup
-----------
+=======================
+Message Queue Test Plan
+=======================
+
+:status: ready
+:version: 0
+:Abstract:
+  This document describes a test plan for quantifying the performance of
+  message queues usually used as a message bus between OpenStack services.
+
+Test Plan
+=========
+
+Test Environment
+----------------

 This section describes the setup for message queue testing. It can be either
 a single (all-in-one) or a multi-node installation.
@@ -15,14 +30,17 @@ A basic multi-node setup with RabbitMQ or ActiveMQ comprises 5 physical nodes:
   is typical for OpenStack control plane services.
 * Three nodes are allocated for the MQ cluster.

-When using ZeroMQ, the basic multi-node setup can be reduced to two physical nodes.
+When using ZeroMQ, the basic multi-node setup can be reduced to two physical
+nodes.

 * One node for a compute node as above.
-* One node for a controller node. This node also acts as a Redis host
-  for match making purposes.
+* One node for a controller node. This node also acts as a Redis host for
+  match making purposes.

-RabbitMQ Installation and Configuration
----------------------------------------
+Preparation
+^^^^^^^^^^^
+
+**RabbitMQ Installation and Configuration**

 * Install RabbitMQ server package:

   ``sudo apt-get install rabbitmq-server``
@@ -51,8 +69,7 @@ RabbitMQ Installation and Configuration
   ``sudo rabbitmqctl set_permissions stackrabbit ".*" ".*" ".*"``
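A quick way to confirm the ``stackrabbit`` account works is a one-shot connection attempt. This is a hedged sketch: it assumes the ``kombu`` client library is available (it is not installed by this change), and the password and host are placeholders:

.. code-block:: python

    # Hypothetical smoke test for the stackrabbit account created above.
    from kombu import Connection

    with Connection("amqp://stackrabbit:secret@10.0.0.1:5672//") as conn:
        conn.connect()  # raises on bad credentials or unreachable broker
        print("RabbitMQ reachable: %s" % conn.connected)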
-ActiveMQ Installation and Configuration
----------------------------------------
+**ActiveMQ Installation and Configuration**

 This section describes installation and configuration steps for an ActiveMQ
 message queue implementation. ActiveMQ is based on Java technologies so it
@@ -79,8 +96,8 @@ performed for an ActiveMQ installation:
 .. note::

    Here 10.4.1.x are the IP addresses of the ZooKeeper nodes where ZK is
-   installed. ZK will be run in cluster mode with majority voting, so at least 3 nodes
-   are required.
+   installed. ZK will be run in cluster mode with majority voting, so at least
+   3 nodes are required.

 .. code-block:: none
@@ -96,8 +113,8 @@ performed for an ActiveMQ installation:
   * create dataDir and dataLogDir directories
   * for each MQ node create a myid file in dataDir with the id of the
-    server and nothing else. For node-1 the file will contain one line with 1,
-    node-2 with 2, and node-3 with 3.
+    server and nothing else. For node-1 the file will contain one line
+    with 1, node-2 with 2, and node-3 with 3.
   * start ZooKeeper (on each node): \textbf{./zkServer.sh start}
   * check ZK status with: \textbf{./zkServer.sh status}
 * Configure ActiveMQ (apache-activemq-5.12.0/conf/activemq.xml file - set
@@ -134,27 +151,28 @@ After ActiveMQ is installed and configured it can be started with the command:
 :command:./activemq start or ``./activemq console`` for a foreground process.

-Oslo.messaging ActiveMQ Driver
-------------------------------
+**Oslo.messaging ActiveMQ Driver**
-All OpenStack changes (in the oslo.messaging library) to support ActiveMQ are already
-merged to the upstream repository. The relevant changes can be found in the
-amqp10-driver-implementation topic.
+All OpenStack changes (in the oslo.messaging library) to support ActiveMQ are
+already merged to the upstream repository. The relevant changes can be found in
+the amqp10-driver-implementation topic.
 To run ActiveMQ even on the most basic all-in-one topology deployment the
 following requirements need to be satisfied:

-* Java JRE must be installed in the system. The Java version can be checked with the
-  command ``java -version``. If java is not installed an error message will
-  appear. Java can be installed with the following command:
+* Java JRE must be installed in the system. The Java version can be checked
+  with the command ``java -version``. If java is not installed an error
+  message will appear. Java can be installed with the following command:

   ``sudo apt-get install default-jre``

 * ActiveMQ binaries should be installed in the system. See
-  http://activemq.apache.org/getting-started.html for installation instructions.
-  The latest stable version is currently
+  http://activemq.apache.org/getting-started.html for installation
+  instructions. The latest stable version is currently
   http://apache-mirror.rbc.ru/pub/apache/activemq/5.12.0/apache-activemq-5.12.0-bin.tar.gz.

-* To use the OpenStack oslo.messaging amqp 1.0 driver, the following Python libraries
-  need to be installed:
+* To use the OpenStack oslo.messaging amqp 1.0 driver, the following Python
+  libraries need to be installed:

   ``pip install "pyngus>=1.0.0,<2.0.0"``

   ``pip install python-qpid-proton``
@@ -162,19 +180,20 @@ following requirements need to be satisfied:
 ``rpc_backend = rabbit`` need to be modified to replace this line with
 ``rpc_backend = amqp``, and then all the services need to be restarted.
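As a minimal functional check of the driver switch, loading a transport with an ``amqp://`` URL exercises the AMQP 1.0 driver without touching any service. A sketch only: it assumes oslo.messaging and oslo.config are importable, a broker is listening, and the URL host and credentials are placeholders, not values from this change:

.. code-block:: python

    # Hedged sketch: obtain an oslo.messaging transport via the amqp driver.
    from oslo_config import cfg
    import oslo_messaging

    transport = oslo_messaging.get_transport(
        cfg.CONF, url="amqp://stackrabbit:secret@127.0.0.1:5672//")
    print("transport loaded: %r" % transport)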
-ZeroMQ Installation
--------------------
+**ZeroMQ Installation**

 This section describes installation steps for ZeroMQ. ZeroMQ (also ZMQ or 0MQ)
 is an embeddable networking library but acts like a concurrency framework.

-Unlike other AMQP-based drivers, such as RabbitMQ, ZeroMQ doesn't have any central brokers in
-oslo.messaging. Instead, each host (running OpenStack services) is both a ZeroMQ client and
-a server. As a result, each host needs to listen to a certain TCP port for incoming connections
-and directly connect to other hosts simultaneously.
+Unlike other AMQP-based drivers, such as RabbitMQ, ZeroMQ doesn't have any
+central brokers in oslo.messaging. Instead, each host (running OpenStack
+services) is both a ZeroMQ client and a server. As a result, each host needs to
+listen to a certain TCP port for incoming connections and directly connect to
+other hosts simultaneously.
 To set up ZeroMQ, only one step needs to be performed.

-* Install python bindings for ZeroMQ. All necessary packages will be installed as dependencies:
+* Install python bindings for ZeroMQ. All necessary packages will be
+  installed as dependencies:

   ``sudo apt-get install python-zmq``

 .. note::
@@ -191,11 +210,12 @@ To set up ZeroMQ, only one step needs to be performed.
    Depends: libc6
    Depends: libzmq3
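Once installed, a two-line check (assuming only that ``python-zmq`` installed cleanly) confirms both the binding and the underlying libzmq version:

.. code-block:: python

    # Report the libzmq version the binding links against, and pyzmq itself.
    import zmq

    print("libzmq: %s" % zmq.zmq_version())
    print("pyzmq:  %s" % zmq.pyzmq_version())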
-Oslo.messaging ZeroMQ Driver
-----------------------------
+**Oslo.messaging ZeroMQ Driver**

-All OpenStack changes (in the oslo.messaging library) to support ZeroMQ are already
-merged to the upstream repository. You can find the relevant changes in the
-zmq-patterns-usage topic.
+All OpenStack changes (in the oslo.messaging library) to support ZeroMQ are
+already merged to the upstream repository. You can find the relevant changes
+in the zmq-patterns-usage topic.

 To run ZeroMQ on the most basic all-in-one topology deployment the
 following requirements need to be satisfied:
@@ -206,11 +226,12 @@ following requirements need to be satisfied:
 .. note::

-   The following changes need to be applied to all OpenStack project configuration files.
+   The following changes need to be applied to all OpenStack project
+   configuration files.

-* To enable the driver, in the section [DEFAULT] of each configuration file, the rpc_backend
-  flag must be set to zmq and the rpc_zmq_host flag must be set to the hostname
-  of the node.
+* To enable the driver, in the section [DEFAULT] of each configuration file,
+  the rpc_backend flag must be set to zmq and the rpc_zmq_host flag
+  must be set to the hostname of the node.

 .. code-block:: none
@@ -231,19 +252,20 @@ following requirements need to be satisfied:
    port = 6379
    password = None
-Running ZeroMQ on a multi-node setup
-------------------------------------
+**Running ZeroMQ on a multi-node setup**

-The process of setting up oslo.messaging with ZeroMQ on a multi-node environment is very similar
-to the all-in-one installation.
+The process of setting up oslo.messaging with ZeroMQ on a multi-node
+environment is very similar to the all-in-one installation.

 * On each node ``rpc_zmq_host`` should be set to its FQDN.
-* Redis-server should be up and running on a controller node or a separate host.
-  Redis can be used with master-slave replication enabled, but currently the oslo.messaging ZeroMQ driver
-  does not support Redis Sentinel, so it is not yet possible to achieve high availability, automatic failover,
+* Redis-server should be up and running on a controller node or a separate
+  host. Redis can be used with master-slave replication enabled, but
+  currently the oslo.messaging ZeroMQ driver does not support Redis Sentinel,
+  so it is not yet possible to achieve high availability, automatic failover,
   and fault tolerance.

-  The ``host`` parameter in section ``[matchmaker_redis]`` should be set to the IP address of a host which runs
-  a master Redis instance, e.g.
+  The ``host`` parameter in section ``[matchmaker_redis]`` should be set to
+  the IP address of a host which runs a master Redis instance, e.g.

 .. code-block:: none

@@ -251,3 +273,119 @@ to the all-in-one installation.
    host = 10.0.0.3
    port = 6379
    password = None
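Reachability of the matchmaker backend can be verified from any node with a one-line ping. A hedged sketch, assuming the ``redis`` Python client is available (it is not pulled in by this change):

.. code-block:: python

    # Ping the master Redis instance named in [matchmaker_redis].
    import redis

    client = redis.StrictRedis(host="10.0.0.3", port=6379, password=None)
    print("matchmaker Redis alive: %s" % client.ping())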
+Environment description
+^^^^^^^^^^^^^^^^^^^^^^^
+
+Test results must include a description of the environment used. This includes:
+
+* Hardware used (servers, switches, storage, etc.)
+* Network scheme
+* Messaging bus specification and OpenStack version deployed (if any).
+
+Test Case 1: Message Queue Throughput Test
+------------------------------------------
+
+Description
+^^^^^^^^^^^
+
+This test measures the aggregate throughput of a MQ layer by using the
+oslo.messaging simulator tool. Either RabbitMQ, ActiveMQ, or ZeroMQ can be used
+as the MQ layer. Throughput is calculated as the sum over the MQ clients of the
+throughput for each client. For each test the number of clients/threads is
+configured to one of the specific values defined in the test case parameters
+section. The full set of tests will cover all the "Threads count" values shown,
+plus additional values as needed to quantify the dependence of MQ throughput on
+load, and to find the maximum throughput.
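The aggregation rule is deliberately simple; as a sketch (the per-client rates below are placeholders, not simulator output), the reported number is just the sum of per-client rates:

.. code-block:: python

    # Aggregate MQ throughput = sum of per-client throughput, in msg/sec.
    per_client_throughput = [812.4, 790.1, 805.7]  # placeholder measurements

    print("aggregate throughput: %.1f msg/sec" % sum(per_client_throughput))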
+Parameters
+^^^^^^^^^^
+
+======================= ===========
+Parameter name          Value
+======================= ===========
+oslo.messaging version  2.5.0
+simulator.py version    1.0
+Threads count           50, 70, 100
+======================= ===========
+
+List of performance metrics
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+======== ========== ================= ===================================
+Priority Value      Measurement Units Description
+======== ========== ================= ===================================
+1        Throughput msg/sec           Directly measured by simulator tool
+======== ========== ================= ===================================
+
+Result Type
+^^^^^^^^^^^
+
+================ ======================= =========================
+Result type      Measurement Units       Description
+================ ======================= =========================
+Throughput Value msg/sec                 Table of numerical values
+Throughput Graph msg/sec vs # of threads Graph
+================ ======================= =========================
+
+Additional Measurements
+^^^^^^^^^^^^^^^^^^^^^^^
+
+=========== ======= =============================
+Measurement Units   Description
+=========== ======= =============================
+Variance    msg/sec Throughput variance over time
+=========== ======= =============================
+
+Test Case 2: OMGBenchmark Rally test
+------------------------------------
+
+Description
+^^^^^^^^^^^
+
+OMGBenchmark is a Rally plugin for benchmarking oslo.messaging.
+The plugin and installation instructions are available on GitHub:
+https://github.com/Yulya/omgbenchmark
+
+Parameters
+^^^^^^^^^^
+
+================================= =============== ===============
+Parameter name                    Rally name      Value
+================================= =============== ===============
+oslo.messaging version                            2.5.0
+Number of iterations              times           50, 100, 500
+Threads count                     concurrency     40, 70, 100
+Number of RPC servers             num_servers     10
+Number of RPC clients             num_clients     10
+Number of topics                  num_topics      5
+Number of messages per iteration  num_messages    100
+Message size                      msg_length_file 900-12000 bytes
+================================= =============== ===============
+
+List of performance metrics
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+======= ================= ==========================================
+Name    Measurement Units Description
+======= ================= ==========================================
+min     sec               Minimal execution time of one iteration
+median  sec               Median execution time
+90%ile  sec               90th percentile execution time
+95%ile  sec               95th percentile execution time
+max     sec               Maximal execution time of one iteration
+avg     sec               Average execution time
+success none              Number of successfully finished iterations
+count   none              Number of executed iterations
+======= ================= ==========================================
+
+Result Type
+^^^^^^^^^^^
+
+================= ======================= =========================
+Result type       Measurement Units       Description
+================= ======================= =========================
+Throughput Graph  msg size vs median      Graph
+Concurrency Graph concurrency vs median   Graph
+================= ======================= =========================
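For the "Variance" measurement in Test Case 1's Additional Measurements table above, a plain population variance over per-interval throughput samples suffices. A sketch with placeholder samples:

.. code-block:: python

    # Variance of throughput samples collected over time (msg/sec).
    samples = [802.0, 797.5, 810.3, 805.1]  # placeholder per-interval rates

    mean = sum(samples) / len(samples)
    variance = sum((s - mean) ** 2 for s in samples) / len(samples)
    print("throughput variance: %.2f" % variance)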


@@ -1,99 +0,0 @@
Test Cases
==========
Test Case 1: Message Queue Throughput Test
------------------------------------------
**Description**
This test measures the aggregate throughput of a MQ layer by using the oslo.messaging
simulator tool. Either RabbitMQ, ActiveMQ, or ZeroMQ can be used as the MQ layer.
Throughput is calculated as the sum
over the MQ clients of the throughput for each client. For each test the number of
clients/threads is configured to one of the specific values defined in the test case
parameters section. The full set of tests will cover all the "Threads count" values shown,
plus additional values as needed to quantify the dependence of MQ throughput on load, and
to find the maximum throughput.
**Parameters**
======================= =====
Parameter name Value
======================= =====
oslo.messaging version 2.5.0
simulator.py version 1.0
Threads count 50, 70, 100
======================= =====
**Measurements**
========== ================ ===========
Value Measurment Units Description
========== ================ ===========
Throughput msg/sec Directly measured by simulator tool
========== ================ ===========
**Result Type**
================ ======================= =========================
Result type Measurement Units Description
================ ======================= =========================
Throughput Value msg/sec Table of numerical values
Throughput Graph msg/sec vs # of threads Graph
================ ======================= =========================
**Additional Measurements**
=========== ======= =============================
Measurement Units Description
=========== ======= =============================
Variance msg/sec Throughput variance over time
=========== ======= =============================
Test Case 2: OMGBenchmark Rally test
------------------------------------
**Description**
OMGBenchmark is a rally plugin for benchmarking oslo.messaging.
The plugin and installation instructions are available on github:
https://github.com/Yulya/omgbenchmark
**Parameters**
================================= =============== =====
Parameter name Rally name Value
================================= =============== =====
oslo.messaging version 2.5.0
Number of iterations times 50, 100, 500
Threads count concurrency 40, 70, 100
Number of RPC servers num_servers 10
Number of RPC clients num_clients 10
Number of topics num_topics 5
Number of messages per iteration num_messages 100
Message size msg_length_file 900-12000 bytes
================================= =============== =====
**Measurements**
======= ================= ==========================================
Name Measurement Units Description
======= ================= ==========================================
min sec Minimal execution time of one iteration
median sec Median execution time
90%ile sec 90th percentile execution time
95%ile sec 95th percentile execution time
max sec Maximal execution time of one iteration
avg sec Average execution time
success none Number of successfully finished iterations
count none Number of executed iterations
======= ================= ==========================================
**Result Type**
================= ======================= =========================
Result type Measurement Units Description
================= ======================= =========================
Throughput Graph msg size vs median Graph
Concurrency Graph concurrency vs median Graph
================= ======================= =========================


@@ -10,20 +10,20 @@ Measuring performance of provisioning systems
 :Abstract:
   This document describes a test plan for quantifying the performance of
-  provisioning systems as a function of the number of nodes to be provisioned. The
-  plan includes the collection of several resource utilization metrics, which will
-  be used to analyze and understand the overall performance of each system. In
-  particular, resource bottlenecks will either be fixed, or best practices
-  developed for system configuration and hardware requirements.
+  provisioning systems as a function of the number of nodes to be provisioned.
+  The plan includes the collection of several resource utilization metrics,
+  which will be used to analyze and understand the overall performance of each
+  system. In particular, resource bottlenecks will either be fixed, or best
+  practices developed for system configuration and hardware requirements.

 :Conventions:

   - **Provisioning:** is the entire process of installing and configuring an
     operating system.

-  - **Provisioning system:** is a service or a set of services which enables the
-    installation of an operating system and performs basic operations such as
-    configuring network interfaces and partitioning disks. A preliminary
+  - **Provisioning system:** is a service or a set of services which enables
+    the installation of an operating system and performs basic operations such
+    as configuring network interfaces and partitioning disks. A preliminary
     `list of provisioning systems`_ can be found below in `Applications`_.
     The provisioning system
     can include configuration management systems like Puppet or Chef, but
@@ -37,70 +37,38 @@ Measuring performance of provisioning systems
   - **Nodes:** are servers which will be provisioned.

-List of performance metrics
----------------------------
-
-The table below shows the list of test metrics to be collected. The priority
-is the relative ranking of the importance of each metric in evaluating the
-performance of the system.
-
-.. table:: List of performance metrics
-
-  +--------+------------------------+--------------------------------------------+
-  |Priority| Parameter              | Description                                |
-  +========+========================+============================================+
-  |        |                        | | The elapsed time to provision all        |
-  | 1      |PROVISIONING_TIME(NODES)| | nodes, as a function of the numbers of   |
-  |        |                        | | nodes                                    |
-  +--------+------------------------+--------------------------------------------+
-  |        |                        | | Incoming network bandwidth usage as a    |
-  | 2      |INGRESS_NET(NODES)      | | function of the number of nodes.         |
-  |        |                        | | Average during provisioning on the host  |
-  |        |                        | | where the provisioning system is         |
-  |        |                        | | installed.                               |
-  +--------+------------------------+--------------------------------------------+
-  |        |                        | | Outgoing network bandwidth usage as a    |
-  | 2      | EGRESS_NET(NODES)      | | function of the number of nodes.         |
-  |        |                        | | Average during provisioning on the host  |
-  |        |                        | | where the provisioning system is         |
-  |        |                        | | installed.                               |
-  +--------+------------------------+--------------------------------------------+
-  |        |                        | | CPU utilization as a function of the     |
-  | 3      | CPU(NODES)             | | number of nodes. Average during          |
-  |        |                        | | provisioning on the host where the       |
-  |        |                        | | provisioning system is installed.        |
-  +--------+------------------------+--------------------------------------------+
-  |        |                        | | Active memory usage as a function of     |
-  | 3      | RAM(NODES)             | | the number of nodes. Average during      |
-  |        |                        | | provisioning on the host where the       |
-  |        |                        | | provisioning system is installed.        |
-  +--------+------------------------+--------------------------------------------+
-  |        |                        | | Storage read IO bandwidth as a           |
-  | 3      | WRITE_IO(NODES)        | | function of the number of nodes.         |
-  |        |                        | | Average during provisioning on the host  |
-  |        |                        | | where the provisioning system is         |
-  |        |                        | | installed.                               |
-  +--------+------------------------+--------------------------------------------+
-  |        |                        | | Storage write IO bandwidth as a          |
-  | 3      | READ_IO(NODES)         | | function of the number of nodes.         |
-  |        |                        | | Average during provisioning on the host  |
-  |        |                        | | where the provisioning system is         |
-  |        |                        | | installed.                               |
-  +--------+------------------------+--------------------------------------------+
 Test Plan
----------
+=========

-The above performance metrics will be measured for various number
-of provisioned nodes. The result will be a table that shows the
-dependence of these metrics on the number of nodes.
+This test plan aims to identify the best provisioning solution for cloud
+deployment, using the specified list of performance measurements and tools.
+Test Environment
+----------------
+
+Preparation
+^^^^^^^^^^^
+
+1.
+  The following package needs to be installed on the provisioning system
+  servers to collect performance metrics.
+
+  .. table:: Software to be installed
+
+    +--------------+---------+-----------------------------------+
+    | package name | version | source                            |
+    +==============+=========+===================================+
+    | `dstat`_     | 0.7.2   | Ubuntu trusty universe repository |
+    +--------------+---------+-----------------------------------+
 Environment description
 ^^^^^^^^^^^^^^^^^^^^^^^

-Test results MUST include a description of the environment used. The following items
-should be included:
+Test results MUST include a description of the environment used. The following
+items should be included:

-- **Hardware configuration of each server.** If virtual machines are used then both
-  physical and virtual hardware should be fully documented.
+- **Hardware configuration of each server.** If virtual machines are used then
+  both physical and virtual hardware should be fully documented.

 An example format is given below:

 .. table:: Description of server hardware
@@ -141,10 +109,10 @@ should be included:
       |       |size            |       |       |
       +-------+----------------+-------+-------+

-- **Configuration of hardware network switches.** The configuration file from the
-  switch can be downloaded and attached.
+- **Configuration of hardware network switches.** The configuration file from
+  the switch can be downloaded and attached.

-- **Configuration of virtual machines and virtual networks (if they are used).**
+- **Configuration of virtual machines and virtual networks (if used).**
   The configuration files can be attached, along with the mapping of virtual
   machines to host machines.
@@ -166,22 +134,78 @@ should be included:
   affect the amount of work to be performed by the provisioning system
   and thus its performance.

-Preparation
-^^^^^^^^^^^
-
-1.
-  The following package needs to be installed on the provisioning system
-  servers to collect performance metrics.
-
-  .. table:: Software to be installed
-
-    +--------------+---------+-----------------------------------+
-    | package name | version | source                            |
-    +==============+=========+===================================+
-    | `dstat`_     | 0.7.2   | Ubuntu trusty universe repository |
-    +--------------+---------+-----------------------------------+
+Test Case
+---------
+
+Description
+^^^^^^^^^^^
+
+This specific test plan contains only one test case, which needs to be run
+step by step on environments differing in the parameters listed below.
+
+Parameters
+^^^^^^^^^^
+
+=============== =========================================
+Parameter name  Value
+=============== =========================================
+number of nodes 10, 20, 40, 80, 160, 320, 640, 1280, 2000
+=============== =========================================
+List of performance metrics
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The table below shows the list of test metrics to be collected. The priority
+is the relative ranking of the importance of each metric in evaluating the
+performance of the system.
+
+.. table:: List of performance metrics
+
+  +--------+-------------------+-------------------+--------------------------------------------+
+  |Priority| Value             | Measurement Units | Description                                |
+  +========+===================+===================+============================================+
+  |        |                   |                   || The elapsed time to provision all         |
+  | 1      | PROVISIONING_TIME | seconds           || nodes, as a function of the numbers of    |
+  |        |                   |                   || nodes                                     |
+  +--------+-------------------+-------------------+--------------------------------------------+
+  |        |                   |                   || Incoming network bandwidth usage as a     |
+  | 2      | INGRESS_NET       | Gbit/s            || function of the number of nodes.          |
+  |        |                   |                   || Average during provisioning on the host   |
+  |        |                   |                   || where the provisioning system is          |
+  |        |                   |                   || installed.                                |
+  +--------+-------------------+-------------------+--------------------------------------------+
+  |        |                   |                   || Outgoing network bandwidth usage as a     |
+  | 2      | EGRESS_NET        | Gbit/s            || function of the number of nodes.          |
+  |        |                   |                   || Average during provisioning on the host   |
+  |        |                   |                   || where the provisioning system is          |
+  |        |                   |                   || installed.                                |
+  +--------+-------------------+-------------------+--------------------------------------------+
+  |        |                   |                   || CPU utilization as a function of the      |
+  | 3      | CPU               | percentage        || number of nodes. Average during           |
+  |        |                   |                   || provisioning on the host where the        |
+  |        |                   |                   || provisioning system is installed.         |
+  +--------+-------------------+-------------------+--------------------------------------------+
+  |        |                   |                   || Active memory usage as a function of      |
+  | 3      | RAM               | GB                || the number of nodes. Average during       |
+  |        |                   |                   || provisioning on the host where the        |
+  |        |                   |                   || provisioning system is installed.         |
+  +--------+-------------------+-------------------+--------------------------------------------+
+  |        |                   |                   || Storage write IO bandwidth as a           |
+  | 3      | WRITE_IO          | operations/second || function of the number of nodes.          |
+  |        |                   |                   || Average during provisioning on the host   |
+  |        |                   |                   || where the provisioning system is          |
+  |        |                   |                   || installed.                                |
+  +--------+-------------------+-------------------+--------------------------------------------+
+  |        |                   |                   || Storage read IO bandwidth as a            |
+  | 3      | READ_IO           | operations/second || function of the number of nodes.          |
+  |        |                   |                   || Average during provisioning on the host   |
+  |        |                   |                   || where the provisioning system is          |
+  |        |                   |                   || installed.                                |
+  +--------+-------------------+-------------------+--------------------------------------------+
 Measuring performance values
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^

 The script
 `Full script for collecting performance metrics`_
 can be used for the first five of the following steps.
@@ -197,9 +221,8 @@ can be used for the first five of the following steps.
 2.
   Start the provisioning process for the first node and record the wall time.
 3.
-  Wait until the provisioning process has finished (when all nodes are reachable
-  via ssh)
-  and record the wall time.
+  Wait until the provisioning process has finished (when all nodes are
+  reachable via ssh) and record the wall time.
 4.
   Stop the dstat program.
 5.
@@ -233,32 +256,20 @@ can be used for the first five of the following steps.
   These values will be graphed and maximum values reported.

-6.
-  Repeat steps 1-5 for provisioning at the same time the following number of
-  nodes:
-
-  * 10 nodes
-  * 20 nodes
-  * 40 nodes
-  * 80 nodes
-  * 160 nodes
-  * 320 nodes
-  * 640 nodes
-  * 1280 nodes
-  * 2000 nodes
 Additional tests will be performed if some anomalous behaviour is found.
 These may require the collection of additional performance metrics.

-7.
+6.
   The result of this part of test will be:

   * to provide the following graphs, one for each number of provisioned nodes:

     #) Three dependencies on one graph.

-        * INGRESS_NET(TIME) Dependence on time of incoming network bandwidth usage.
-        * EGRESS_NET(TIME) Dependence on time of outgoing network bandwidth usage.
+        * INGRESS_NET(TIME) Dependence on time of incoming network bandwidth
+          usage.
+        * EGRESS_NET(TIME) Dependence on time of outgoing network bandwidth
+          usage.
         * ALL_NET(TIME) Dependence on time of total network bandwidth usage.

     #) One dependence on one graph.
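The per-interval averages behind these graphs can be computed directly from dstat's CSV output. A hedged sketch: it assumes dstat was started with ``--output dstat.csv``, and both the number of header rows and the column index vary between dstat versions, so treat them as placeholders to adjust:

.. code-block:: python

    # Average one numeric column of a dstat CSV capture.
    import csv

    def average_column(path, column_index, skip_rows=7):
        """dstat prepends several title/header rows; skip them."""
        values = []
        with open(path) as f:
            for i, row in enumerate(csv.reader(f)):
                if i < skip_rows or not row:
                    continue
                values.append(float(row[column_index]))
        return sum(values) / len(values)

    # e.g. average bytes/s received, converted to Gbit/s (column assumed)
    print("avg ingress: %.3f Gbit/s" % (average_column("dstat.csv", 2) * 8 / 1e9))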
@@ -313,10 +324,10 @@ nodes.
    +-------+--------------+---------+---------+---------+---------+

 Applications
-------------
+============

-list of provisioning systems
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+List of provisioning systems
+----------------------------

 .. table:: list of provisioning systems
@@ -333,7 +344,7 @@ list of provisioning systems
    +-----------------------------+---------+

 Full script for collecting performance metrics
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+==============================================

 .. literalinclude:: measure.sh
    :language: bash


@@ -2,8 +2,6 @@
 Example Test Plan - The title of your plan
 ==========================================

-Please include the following information to this primary section:
-
 :status: test plan status - either **draft** or **ready**
 :version: test plan version
@@ -33,13 +31,15 @@ using sections, similar to the written below.
 Test Environment
 ----------------

-**Preparation**
+Preparation
+^^^^^^^^^^^

 Please specify here what needs to be done with the environment to run
 this test plan. This can include specific tools installation,
 specific OpenStack deployment, etc.

-**Environment description**
+Environment description
+^^^^^^^^^^^^^^^^^^^^^^^

 Please define here used environment. You can use the scheme below for this
 purpose or modify it due to your needs:
@@ -54,17 +54,20 @@ purpose or modify it due to your needs:
 Test Case 1: Something very interesting #1
 ------------------------------------------

-**Description**
+Description
+^^^^^^^^^^^

 Define test case #1. Every test case can contain at least the sections, defined
 below.

-**Parameters**
+Parameters
+^^^^^^^^^^

 Optional section. Can be used if there are multiple test cases differing in
 some input parameters - if so, these parameters need to be listed here.

-**List of performance metrics**
+List of performance metrics
+^^^^^^^^^^^^^^^^^^^^^^^^^^^

 Mandatory section. Defines what measurements are in fact done during the test.
 To be a good citizen in case of multiple metrics collection, it will be nice to
@@ -78,7 +81,8 @@ Priority Value Measurement Units Description
 3 - not that much important What's measured <units>           <description>
 =========================== =============== ================= =============

-**Some additional section**
+Some additional section
+^^^^^^^^^^^^^^^^^^^^^^^

 Depending on the test case nature, something else may need to be defined.
 If so, additional sections with free form titles should be added.


@@ -2,3 +2,5 @@ rst2pdf
 sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2
 sphinxcontrib-httpdomain
 sphinx_rtd_theme
+testrepository>=0.0.18
+testtools>=0.9.34

tests/__init__.py (new, empty file)

tests/test_titles.py (new file)

@@ -0,0 +1,198 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import glob
import re

import docutils.core
import testtools


OPTIONAL_SECTIONS = ("Upper level additional section",)
OPTIONAL_SUBSECTIONS = ("Some additional section",)
OPTIONAL_SUBSUBSECTIONS = ("Parameters", "Some additional section",)
OPTIONAL_FIELDS = ("Conventions",)


class TestTitles(testtools.TestCase):

    def _get_title(self, section_tree, depth=1):
        section = {
            "subtitles": [],
        }
        for node in section_tree:
            if node.tagname == "title":
                section["name"] = node.rawsource
            elif node.tagname == "section":
                subsection = self._get_title(node, depth + 1)
                if depth < 2:
                    if subsection["subtitles"]:
                        section["subtitles"].append(subsection)
                    else:
                        section["subtitles"].append(subsection["name"])
                elif depth == 2:
                    section["subtitles"].append(subsection["name"])
        return section

    def _get_titles(self, test_plan):
        titles = {}
        for node in test_plan:
            if node.tagname == "section":
                section = self._get_title(node)
                titles[section["name"]] = section["subtitles"]
        return titles

    @staticmethod
    def _get_docinfo(test_plan):
        fields = []
        for node in test_plan:
            if node.tagname == "field_list":
                for field in node:
                    for f_opt in field:
                        if f_opt.tagname == "field_name":
                            fields.append(f_opt.rawsource)
            if node.tagname == "docinfo":
                for info in node:
                    fields.append(info.tagname)
            if node.tagname == "topic":
                fields.append("abstract")
        return fields

    def _check_fields(self, tmpl, test_plan):
        tmpl_fields = self._get_docinfo(tmpl)
        test_plan_fields = self._get_docinfo(test_plan)
        missing_fields = [f for f in tmpl_fields
                          if f not in test_plan_fields and
                          f not in OPTIONAL_FIELDS]

        if len(missing_fields) > 0:
            self.fail("While checking '%s':\n %s"
                      % (test_plan[0].rawsource,
                         "Missing fields: %s" % missing_fields))

    def _check_titles(self, filename, expect, actual):
        missing_sections = [x for x in expect.keys() if (
            x not in actual.keys()) and (x not in OPTIONAL_SECTIONS)]

        msgs = []
        if len(missing_sections) > 0:
            msgs.append("Missing sections: %s" % missing_sections)

        for section in expect.keys():
            missing_subsections = [x for x in expect[section]
                                   if x not in actual.get(section, {}) and
                                   (x not in OPTIONAL_SUBSECTIONS)]
            extra_subsections = [x for x in actual.get(section, {})
                                 if x not in expect[section]]

            for ex_s in extra_subsections:
                s_name = (ex_s if type(ex_s) is str or
                          type(ex_s) is unicode else ex_s["name"])
                if s_name.startswith("Test Case"):
                    new_missing_subsections = []
                    for m_s in missing_subsections:
                        m_s_name = (m_s if type(m_s) is str or
                                    type(m_s) is unicode
                                    else m_s["name"])
                        if not m_s_name.startswith("Test Case"):
                            new_missing_subsections.append(m_s)
                    missing_subsections = new_missing_subsections
                    break

            if len(missing_subsections) > 0:
                msgs.append("Section '%s' is missing subsections: %s"
                            % (section, missing_subsections))

            for subsection in expect[section]:
                if type(subsection) is dict:
                    missing_subsubsections = []
                    actual_section = actual.get(section, {})
                    matching_actual_subsections = [
                        s for s in actual_section
                        if type(s) is dict and (
                            s["name"] == subsection["name"] or
                            (s["name"].startswith("Test Case") and
                             subsection["name"].startswith("Test Case")))
                    ]
                    for actual_subsection in matching_actual_subsections:
                        for x in subsection["subtitles"]:
                            if (x not in actual_subsection["subtitles"] and
                                    x not in OPTIONAL_SUBSUBSECTIONS):
                                missing_subsubsections.append(x)
                    if len(missing_subsubsections) > 0:
                        msgs.append("Subsection '%s' is missing "
                                    "subsubsections: %s"
                                    % (actual_subsection,
                                       missing_subsubsections))

        if len(msgs) > 0:
            self.fail("While checking '%s':\n %s"
                      % (filename, "\n ".join(msgs)))

    def _check_lines_wrapping(self, tpl, raw):
        code_block = False
        for i, line in enumerate(raw.split("\n")):
            # NOTE(ndipanov): Allow code block lines to be longer than 79 ch
            if code_block:
                if not line or line.startswith(" "):
                    continue
                else:
                    code_block = False
            if "::" in line:
                code_block = True
            if "http://" in line or "https://" in line:
                continue
            # Allow lines which do not contain any whitespace
            if re.match(r"\s*[^\s]+$", line):
                continue
            self.assertTrue(
                len(line) < 80,
                msg="%s:%d: Line limited to a maximum of 79 characters." %
                (tpl, i + 1))

    def _check_no_cr(self, tpl, raw):
        matches = re.findall("\r", raw)
        self.assertEqual(
            len(matches), 0,
            "Found %s literal carriage returns in file %s" %
            (len(matches), tpl))

    def _check_trailing_spaces(self, tpl, raw):
        for i, line in enumerate(raw.split("\n")):
            trailing_spaces = re.findall(r"\s+$", line)
            self.assertEqual(
                len(trailing_spaces), 0,
                "Found trailing spaces on line %s of %s" % (i + 1, tpl))

    def test_template(self):
        with open("doc/source/test_plans/template.rst") as f:
            template = f.read()
        test_plan_tmpl = docutils.core.publish_doctree(template)
        template_titles = self._get_titles(test_plan_tmpl)

        files = glob.glob("doc/source/test_plans/*/*.rst")
        for filename in files:
            with open(filename) as f:
                data = f.read()

            test_plan = docutils.core.publish_doctree(data)
            self._check_titles(filename,
                               template_titles,
                               self._get_titles(test_plan))
            self._check_fields(test_plan_tmpl, test_plan)
            self._check_lines_wrapping(filename, data)
            self._check_no_cr(filename, data)
            self._check_trailing_spaces(filename, data)
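The traversal above relies only on stable docutils behaviour; a standalone sketch of the same idea, runnable on its own:

.. code-block:: python

    # publish_doctree parses reST source into a node tree whose section
    # titles can be walked the same way _get_titles does.
    import docutils.core

    doctree = docutils.core.publish_doctree("Title\n=====\n\nSub\n---\n")
    for node in doctree.traverse():
        if node.tagname == "title":
            print(node.rawsource)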


@ -1,5 +1,5 @@
[tox] [tox]
envlist = docs envlist = docs,py27
minversion = 1.6 minversion = 1.6
skipsdist = True skipsdist = True
@@ -11,6 +11,8 @@ setenv = VIRTUAL_ENV={envdir}
          LANGUAGE=en_US:en
          LC_ALL=C
 deps = -r{toxinidir}/requirements.txt
+commands =
+  python setup.py test --slowest --testr-args='{posargs}'

 [testenv:venv]
 commands = {posargs}
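With these pieces in place, the new checks should be runnable locally via ``tox -e py27``, which, through the ``commands`` entry above, ends up invoking the same subunit discovery command defined in ``.testr.conf`` at the top of this change.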