Added MQ performance test plans to docs
Change-Id: I3ab269c7e707d62918f29f03f0f272f9cdedae34
parent 250caed419
commit d08f0f02c1
@@ -6,6 +6,7 @@ Performance Documentation
   :maxdepth: 3

   methodology/hyper-scale.rst
   test_plans/index.rst

.. raw:: pdf

doc/source/test_plans/index.rst (new file, 12 lines)
@@ -0,0 +1,12 @@
.. raw:: pdf

   PageBreak oneColumn

=======================
Test Plans
=======================

.. toctree::
   :maxdepth: 2

   mq/index

doc/source/test_plans/mq/index.rst (new file, 8 lines)
@@ -0,0 +1,8 @@
Message Queue Test Plan
=======================

.. toctree::
   :maxdepth: 2

   setup
   test_cases

doc/source/test_plans/mq/rabbitmq.config (new file, 32 lines)
@@ -0,0 +1,32 @@
[
 {rabbit, [
    {cluster_partition_handling, autoheal},
    {default_permissions, [<<".*">>, <<".*">>, <<".*">>]},
    {default_vhost, <<"/">>},
    {log_levels, [{connection,info}]},
    {mnesia_table_loading_timeout, 10000},
    {tcp_listen_options, [
        binary,
        {packet, raw},
        {reuseaddr, true},
        {backlog, 128},
        {nodelay, true},
        {exit_on_close, false},
        {keepalive, true}
    ]},
    {default_user, <<"stackrabbit">>},
    {default_pass, <<"password">>}
 ]},
 {kernel, [
    {inet_default_connect_options, [{nodelay,true}]},
    {inet_dist_listen_max, 41055},
    {inet_dist_listen_min, 41055},
    {net_ticktime, 10}
 ]},
 {rabbitmq_management, [
    {listener, [
        {port, 15672}
    ]}
 ]}
].

doc/source/test_plans/mq/setup.rst (new file, 253 lines)
@@ -0,0 +1,253 @@
Test Setup
----------

This section describes the setup for message queue testing. It can be either
a single-node (all-in-one) or a multi-node installation.

A single-node setup requires just one node to be up and running. It has
both compute and controller roles and all OpenStack services run on this node.
This setup does not support hardware scaling or workload distribution tests.

A basic multi-node setup with RabbitMQ or ActiveMQ comprises 5 physical nodes:

* One node for a compute node. This node simulates activity which is
  typical for OpenStack compute components.
* One node for a controller node. This node simulates activity which
  is typical for OpenStack control plane services.
* Three nodes are allocated for the MQ cluster.

When using ZeroMQ, the basic multi-node setup can be reduced to two physical nodes:

* One node for a compute node, as above.
* One node for a controller node. This node also acts as a Redis host
  for matchmaking purposes.

RabbitMQ Installation and Configuration
---------------------------------------

* Install the RabbitMQ server package:
  ``sudo apt-get install rabbitmq-server``
* Configure RabbitMQ on each node in ``/etc/rabbitmq/rabbitmq.config``:

  .. literalinclude:: rabbitmq.config
     :language: erlang

* Stop RabbitMQ on nodes 2 and 3:
  ``sudo service rabbitmq-server stop``
* Make the Erlang cookies on nodes 2 and 3 the same as on node 1:
  ``/var/lib/rabbitmq/.erlang.cookie``
* Start the RabbitMQ server on nodes 2 and 3:
  ``sudo service rabbitmq-server start``
* Stop the RabbitMQ application, but leave the Erlang runtime running:
  ``sudo rabbitmqctl stop_app``
* Join nodes 2 and 3 to node 1:
  ``rabbitmqctl join_cluster rabbit@node-1``
* Start the application on nodes 2 and 3:
  ``sudo rabbitmqctl start_app``
* Add the required user and set its permissions:

  ``sudo rabbitmqctl add_user stackrabbit password``

  ``sudo rabbitmqctl set_permissions stackrabbit ".*" ".*" ".*"``
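
For convenience, the clustering steps above can be collapsed into a single
session. The sketch below is illustrative only: it assumes the node names
``node-1``, ``node-2`` and ``node-3`` used in this plan and that the Erlang
cookie has already been copied from node 1; ``rabbitmqctl cluster_status`` at
the end confirms that all three nodes have joined the cluster.

.. code-block:: none

   # On node-2 and node-3, after /var/lib/rabbitmq/.erlang.cookie matches node-1:
   sudo service rabbitmq-server start
   sudo rabbitmqctl stop_app
   sudo rabbitmqctl join_cluster rabbit@node-1
   sudo rabbitmqctl start_app

   # On any node: create the test user and verify cluster membership
   sudo rabbitmqctl add_user stackrabbit password
   sudo rabbitmqctl set_permissions stackrabbit ".*" ".*" ".*"
   sudo rabbitmqctl cluster_status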

ActiveMQ Installation and Configuration
---------------------------------------

This section describes installation and configuration steps for an ActiveMQ
message queue implementation. ActiveMQ is based on Java technologies, so it
requires a Java runtime. Actual performance will depend on the Java version
as well as on the hardware specification. The following steps should be
performed for an ActiveMQ installation:

* Install Java on node-1, node-2 and node-3:
  ``sudo apt-get install default-jre``
* Download the latest ActiveMQ binary:
  ``wget http://www.eu.apache.org/dist/activemq/5.12.0/apache-activemq-5.12.0-bin.tar.gz``
* Unpack the archive:
  ``tar zxvf apache-activemq-5.12.0-bin.tar.gz``
* Install everything needed for ZooKeeper:

  * download the ZooKeeper binaries: ``wget http://www.eu.apache.org/dist/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz``
  * unpack the archive: ``tar zxvf zookeeper-3.4.6.tar.gz``
  * create the ``/home/ubuntu/zookeeper-3.4.6/conf/zoo.cfg`` file:

    .. literalinclude:: zoo.cfg
       :language: ini

    .. note::

       Here 10.4.1.x are the IP addresses of the nodes where ZooKeeper is
       installed. ZooKeeper will be run in cluster mode with majority voting,
       so at least 3 nodes are required.

  * create the dataDir and dataLogDir directories
  * for each MQ node create a ``myid`` file in dataDir containing the id of
    the server and nothing else: for node-1 the file will contain one line
    with 1, for node-2 with 2, and for node-3 with 3
  * start ZooKeeper (on each node): ``./zkServer.sh start``
  * check the ZooKeeper status with: ``./zkServer.sh status``

* Configure ActiveMQ in the ``apache-activemq-5.12.0/conf/activemq.xml`` file
  (set the ``hostname`` parameter to the node's address):

  .. code-block:: none

     <broker brokerName="broker" ... >
       ...
       <persistenceAdapter>
          <replicatedLevelDB
            directory="activemq-data"
            replicas="3"
            bind="tcp://0.0.0.0:0"
            zkAddress="10.4.1.107:2181,10.4.1.111:2181,10.4.1.119:2181"
            zkPassword="password"
            zkPath="/activemq/leveldb-stores"
            hostname="10.4.1.107"
            />
       </persistenceAdapter>

       <plugins>
         <simpleAuthenticationPlugin>
             <users>
                 <authenticationUser username="stackrabbit" password="password"
                     groups="users,guests,admins"/>
             </users>
         </simpleAuthenticationPlugin>
       </plugins>
       ...
     </broker>

After ActiveMQ is installed and configured, it can be started with
``./activemq start``, or with ``./activemq console`` to run it as a foreground
process.
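
As an illustration, the whole ZooKeeper/ActiveMQ bring-up described above can
be sketched as the following session. The paths, node ids and IP addresses are
the ones assumed elsewhere in this plan (ZooKeeper and ActiveMQ unpacked under
``/home/ubuntu``); adjust them to your environment.

.. code-block:: none

   # On every MQ node: create the ZooKeeper data directories and the myid file
   mkdir -p /home/ubuntu/zookeeper-3.4.6/data /home/ubuntu/zookeeper-3.4.6/logs
   echo 1 > /home/ubuntu/zookeeper-3.4.6/data/myid   # use 2 on node-2, 3 on node-3

   # Start ZooKeeper on each node and check that one leader and two followers exist
   /home/ubuntu/zookeeper-3.4.6/bin/zkServer.sh start
   /home/ubuntu/zookeeper-3.4.6/bin/zkServer.sh status

   # Start ActiveMQ on each node
   /home/ubuntu/apache-activemq-5.12.0/bin/activemq start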

Oslo.messaging ActiveMQ Driver
------------------------------

All OpenStack changes (in the oslo.messaging library) needed to support
ActiveMQ are already merged into the upstream repository. The relevant changes
can be found in the amqp10-driver-implementation topic.
To run ActiveMQ even on the most basic all-in-one topology deployment, the
following requirements need to be satisfied:

* A Java JRE must be installed in the system. The Java version can be checked
  with the command ``java -version``. If Java is not installed, an error
  message will appear. Java can be installed with the following command:
  ``sudo apt-get install default-jre``

* The ActiveMQ binaries should be installed in the system. See
  http://activemq.apache.org/getting-started.html for installation
  instructions. The latest stable version is currently
  http://apache-mirror.rbc.ru/pub/apache/activemq/5.12.0/apache-activemq-5.12.0-bin.tar.gz.

* To use the OpenStack oslo.messaging AMQP 1.0 driver, the following Python
  libraries need to be installed:

  ``pip install "pyngus>=1.0.0,<2.0.0"``

  ``pip install python-qpid-proton``

* All OpenStack projects' configuration files containing the line
  ``rpc_backend = rabbit`` need to be modified to replace this line with
  ``rpc_backend = amqp``, and then all the services need to be restarted.
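
For example, the relevant fragment of a service configuration file (such as
``nova.conf``; the exact file depends on the service) would change to:

.. code-block:: none

   [DEFAULT]
   rpc_backend = amqp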

ZeroMQ Installation
-------------------

This section describes installation steps for ZeroMQ. ZeroMQ (also written as
ZMQ or 0MQ) is an embeddable networking library that acts like a concurrency
framework. Unlike broker-based drivers such as the RabbitMQ one, the ZeroMQ
driver in oslo.messaging does not use any central broker. Instead, each host
(running OpenStack services) is both a ZeroMQ client and a server. As a
result, each host needs to listen on a certain TCP port for incoming
connections and directly connect to other hosts simultaneously.

To set up ZeroMQ, only one step needs to be performed:

* Install the Python bindings for ZeroMQ. All necessary packages will be
  installed as dependencies:
  ``sudo apt-get install python-zmq``

.. note::

   The python-zmq version should be at least 14.0.1. The package has the
   following dependencies:

   .. code-block:: none

      python-zmq
        Depends: <python:any>
          python
        Depends: python
        Depends: python
        Depends: libc6
        Depends: libzmq3
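
A quick, optional way to check which pyzmq version is actually installed is to
query it from Python directly:

.. code-block:: none

   python -c "import zmq; print(zmq.pyzmq_version())"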

Oslo.messaging ZeroMQ Driver
----------------------------

All OpenStack changes (in the oslo.messaging library) needed to support ZeroMQ
are already merged into the upstream repository. You can find the relevant
changes in the zmq-patterns-usage topic.
To run ZeroMQ on the most basic all-in-one topology deployment, the following
requirements need to be satisfied:

* The Python ZeroMQ bindings must be installed in the system.

* The Redis binaries should be installed in the system. See
  http://redis.io/download for instructions and details.

.. note::

   The following changes need to be applied to all OpenStack project
   configuration files.

* To enable the driver, in the ``[DEFAULT]`` section of each configuration
  file, the ``rpc_backend`` flag must be set to ``zmq`` and the
  ``rpc_zmq_host`` flag must be set to the hostname of the node.

  .. code-block:: none

     [DEFAULT]
     rpc_backend = zmq
     rpc_zmq_host = myopenstackserver.example.com

* Set Redis as the matchmaking service:

  .. code-block:: none

     [DEFAULT]
     rpc_zmq_matchmaker = redis

     [matchmaker_redis]
     host = 127.0.0.1
     port = 6379
     password = None

Running ZeroMQ on a multi-node setup
------------------------------------

The process of setting up oslo.messaging with ZeroMQ on a multi-node
environment is very similar to the all-in-one installation:

* On each node ``rpc_zmq_host`` should be set to its FQDN.
* The Redis server should be up and running on a controller node or on a
  separate host. Redis can be used with master-slave replication enabled, but
  the oslo.messaging ZeroMQ driver currently does not support Redis Sentinel,
  so it is not yet possible to achieve high availability, automatic failover,
  and fault tolerance.

The ``host`` parameter in the ``[matchmaker_redis]`` section should be set to
the IP address of the host which runs the master Redis instance, e.g.:

.. code-block:: none

   [matchmaker_redis]
   host = 10.0.0.3
   port = 6379
   password = None
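
Before running the tests, it is worth confirming that every node can actually
reach the Redis master. A quick, optional check (assuming the ``redis-tools``
package, which provides ``redis-cli``, is installed) is:

.. code-block:: none

   redis-cli -h 10.0.0.3 -p 6379 ping
   # expected reply: PONG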

doc/source/test_plans/mq/test_cases.rst (new file, 99 lines)
@@ -0,0 +1,99 @@
Test Cases
==========

Test Case 1: Message Queue Throughput Test
------------------------------------------

**Description**

This test measures the aggregate throughput of an MQ layer by using the
oslo.messaging simulator tool. Either RabbitMQ, ActiveMQ, or ZeroMQ can be
used as the MQ layer. Throughput is calculated as the sum over all MQ clients
of the throughput measured for each client. For each test the number of
clients/threads is configured to one of the specific values defined in the
test case parameters section. The full set of tests will cover all the
"Threads count" values shown, plus additional values as needed to quantify the
dependence of MQ throughput on load and to find the maximum throughput.
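
As an illustration of how such a run can be driven, the sketch below starts
one simulator RPC server and one RPC client against the RabbitMQ cluster from
the setup section. The exact option names may differ between oslo.messaging
releases, so treat the flags (``-p`` for client threads, ``-m`` for the number
of messages) as assumptions to be checked against ``simulator.py --help``:

.. code-block:: none

   # terminal 1: RPC server
   python simulator.py --url rabbit://stackrabbit:password@node-1:5672/ rpc-server

   # terminal 2: RPC client with 50 threads sending 100 messages each
   python simulator.py --url rabbit://stackrabbit:password@node-1:5672/ rpc-client -p 50 -m 100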

**Parameters**

======================= =====
Parameter name          Value
======================= =====
oslo.messaging version  2.5.0
simulator.py version    1.0
Threads count           50, 70, 100
======================= =====

**Measurements**

========== ================= ===========
Value      Measurement Units Description
========== ================= ===========
Throughput msg/sec           Directly measured by the simulator tool
========== ================= ===========

**Result Type**

================ ======================= =========================
Result type      Measurement Units       Description
================ ======================= =========================
Throughput Value msg/sec                 Table of numerical values
Throughput Graph msg/sec vs # of threads Graph
================ ======================= =========================

**Additional Measurements**

=========== ======= =============================
Measurement Units   Description
=========== ======= =============================
Variance    msg/sec Throughput variance over time
=========== ======= =============================

Test Case 2: OMGBenchmark Rally Test
------------------------------------

**Description**

OMGBenchmark is a Rally plugin for benchmarking oslo.messaging.
The plugin and installation instructions are available on GitHub:
https://github.com/Yulya/omgbenchmark
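
Once the plugin is installed, a run follows the usual Rally workflow; the task
file name below is only a placeholder for a task definition built from the
parameters in the table that follows:

.. code-block:: none

   rally task start omgbenchmark_task.json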

**Parameters**

================================= =============== =====
Parameter name                    Rally name      Value
================================= =============== =====
oslo.messaging version                            2.5.0
Number of iterations              times           50, 100, 500
Threads count                     concurrency     40, 70, 100
Number of RPC servers             num_servers     10
Number of RPC clients             num_clients     10
Number of topics                  num_topics      5
Number of messages per iteration  num_messages    100
Message size                      msg_length_file 900-12000 bytes
================================= =============== =====

**Measurements**

======= ================= ==========================================
Name    Measurement Units Description
======= ================= ==========================================
min     sec               Minimum execution time of one iteration
median  sec               Median execution time
90%ile  sec               90th percentile execution time
95%ile  sec               95th percentile execution time
max     sec               Maximum execution time of one iteration
avg     sec               Average execution time
success none              Number of successfully finished iterations
count   none              Number of executed iterations
======= ================= ==========================================

**Result Type**

================= ======================= =========================
Result type       Measurement Units       Description
================= ======================= =========================
Throughput Graph  msg size vs median      Graph
Concurrency Graph concurrency vs median   Graph
================= ======================= =========================

doc/source/test_plans/mq/zoo.cfg (new file, 9 lines)
@@ -0,0 +1,9 @@
tickTime=2000
dataDir=/home/ubuntu/zookeeper-3.4.6/data
dataLogDir=/home/ubuntu/zookeeper-3.4.6/logs
clientPort=2181
initLimit=10
syncLimit=5
server.1=10.4.1.107:2888:3888
server.2=10.4.1.119:2888:3888
server.3=10.4.1.111:2888:3888