Refactor Shaker

Refactoring lib/Shaker to improve compatibility with a wide range of
shaker scenarios. The set_scenario() method has been fixed to remove
hard-coded expectations about the order of the accommodation list, and
the send_to_elastic() method has been fixed accordingly for improved
compatibility when indexing a large number of shaker scenarios.

Why? The current model of overwriting the shaker scenario file with
values supplied from the browbeat-config works well for most shaker
scenario files. The problem lies in how we overwrite the "accommodation"
key in the shaker scenario. Since the value of "accommodation" is a
list, we need to access the list items to modify them[2]. Most scenario
files, such as [1], have 4 items in the list, but some, such as [3],
have only 3, so when going by list index we cannot be sure the item we
are accessing is the one we want.
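A minimal illustration of the ordering problem (the list shapes mirror [1] and [3]; the actual values are illustrative):

```python
# Two accommodation lists as they might appear in shaker scenario files:
# a four-item list (as in dense_l3_north_south.yaml) and a three-item
# list that omits the compute_nodes entry.
four_item = ['pair', 'single_room', {'density': 8}, {'compute_nodes': 2}]
three_item = ['pair', 'single_room', {'density': 8}]

# Index-based access works for the longer list...
assert four_item[3] == {'compute_nodes': 2}

# ...but the same index does not exist in the shorter list.
try:
    three_item[3]
except IndexError:
    print("index 3 is out of range for the three-item list")
```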

How? Added two methods, accommodation_to_dict() and
accommodation_to_list().

accommodation_to_dict() grabs the accommodation data from the shaker
scenario file[1] and converts the list to a dictionary. Since it is now
a dictionary, we overwrite shaker scenario values for the
"accommodation" key by checking which keys exist in the dictionary.

accommodation_to_list() converts the dictionary we created by
overwriting arguments in the shaker scenario with those in the browbeat
scenario back to a list, so that it can be written back to the shaker
scenario file. Shaker eventually consumes this file, which has been
overwritten with the options in the browbeat config.
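A sketch of the round trip (Python 3 spelling with items(); the committed code targets Python 2 and uses iteritems()):

```python
def accommodation_to_dict(accommodation):
    # Bare flags like 'pair' become {'pair': True}; dict items are merged in.
    accommodation_dict = {}
    for item in accommodation:
        if isinstance(item, dict):
            accommodation_dict.update(item)
        else:
            accommodation_dict[item] = True
    return accommodation_dict

def accommodation_to_list(accommodation):
    # True flags go back to bare strings; other values become one-key dicts.
    accommodation_list = []
    for key, value in accommodation.items():
        if value is True:
            accommodation_list.append(key)
        else:
            accommodation_list.append({key: value})
    return accommodation_list

acc = accommodation_to_dict(['pair', 'single_room', {'density': 8}])
acc['density'] = 1            # overwrite by key, regardless of list order
print(accommodation_to_list(acc))
# → ['pair', 'single_room', {'density': 1}]
```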

+ Adding external_host parameter
+ Adding validation
+ Adding usage docs
+ RST Formatting

[1] - https://github.com/openstack/shaker/blob/master/shaker/scenarios/openstack/dense_l3_north_south.yaml#L11
[2] - https://github.com/openstack/browbeat/blob/master/lib/Shaker.py#L201
[3] - https://github.com/openstack/shaker/blob/master/shaker/scenarios/openstack/external/dense_l3_north_south_with_fip.yaml#L11

Change-Id: Icf7208f230cbe727d525b6cb090e82c4f19d6985
Sai Sindhur Malleni 2016-09-14 15:26:32 -04:00
parent bd943baa07
commit 2c1980e023
7 changed files with 224 additions and 37 deletions


@@ -278,6 +278,7 @@ shaker:
   sleep_after: 5
   venv: /home/stack/shaker-venv
   shaker_region: regionOne
+  external_host: 2.2.2.2
   scenarios:
     - name: l2-4-1
       enabled: true


@@ -66,6 +66,7 @@ shaker:
   sleep_after: 5
   venv: /home/stack/shaker-venv
   shaker_region: regionOne
+  external_host: 2.2.2.2
   scenarios:
     - name: l2
       enabled: true


@@ -43,9 +43,14 @@ From your local machine
 $ vi install/group_vars/all.yml # Make sure to edit the dns_server to the correct ip address
 $ ansible-playbook -i hosts install/browbeat.yml
 $ vi install/group_vars/all.yml # Edit Browbeat network settings
-$ ansible-playbook -i hosts install/browbeat_network.yml
+$ ansible-playbook -i hosts install/browbeat_network.yml # For external access (required to build Shaker image)
 $ ansible-playbook -i hosts install/shaker_build.yml
+.. note:: ``browbeat_network.yml`` will more than likely not work for you
+   depending on your underlay/overlay network setup. In such cases, the user
+   needs to create appropriate networks for instances to allow them to reach
+   the internet.
 (Optional) Install collectd
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~


@@ -68,6 +68,82 @@ browbeat-config.yaml:
(browbeat-venv)[stack@ospd browbeat]$ ./browbeat.py perfkit -s browbeat-config.yaml
Running Shaker
==============
Running Shaker requires the shaker image to be built, which in turn requires
instances to be able to access the internet. The playbooks for this installation
have been described in the installation documentation but for the sake of
convenience they are being mentioned here as well.
::
$ ansible-playbook -i hosts install/browbeat_network.yml
$ ansible-playbook -i hosts install/shaker_build.yml
.. note:: The playbook to setup networking is provided as an example only and
might not work for you based on your underlay/overlay network setup. In such
cases, the exercise of setting up networking for instances to be able to access
the internet is left to the user.
Once the shaker image is built, you can run Shaker via Browbeat by filling in a
few configuration options in the configuration file. The meaning of each option is
summarized below:
**shaker:**
:enabled: Boolean ``true`` or ``false``, enable shaker or not
:server: IP address of the shaker-server for agents to talk to (undercloud IP
by default)
:port: Port to connect to the shaker-server (undercloud port 5555 by default)
:flavor: OpenStack instance flavor you want to use
:join_timeout: Timeout in seconds for agents to join
:sleep_before: Time in seconds to sleep before executing a scenario
:sleep_after: Time in seconds to sleep after executing a scenario
:venv: venv to execute shaker commands in, ``/home/stack/shaker-venv`` by
default
:shaker_region: OpenStack region you want to use
:external_host: IP of a server for external tests (should have
``browbeat/util/shaker-external.sh`` executed on it previously and have
iptables/firewalld/selinux allowing connections on the ports used by network
testing tools netperf and iperf)
**scenarios:** List of scenarios you want to run
:\- name: Name for the scenario. It is used to create directories/files
accordingly
:enabled: Boolean ``true`` or ``false`` depending on whether or not you
want to execute the scenario
:density: Number of instances
:compute: Number of compute nodes across which to spawn instances
:placement: ``single_room`` would mean one instance per compute node and
``double_room`` would give you two instances per compute node
:progression: ``null`` means all agents are involved, ``linear`` means
execution starts with one agent and increases linearly, ``quadratic``
would result in quadratic growth in number of agents participating
in the test concurrently
:time: Time in seconds you want each test in the scenario
file to run
:file: The base shaker scenario file to use to override
options (this would depend on whether you want to run L2, L3 E-W or L3
N-S tests and also on the class of tool you want to use such as flent or
iperf3)
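Putting the options above together, a shaker section of the browbeat config might look like the following sketch (the numeric values and the ``file:`` path are illustrative, not authoritative):

```yaml
shaker:
  enabled: true
  server: 1.1.1.1           # undercloud IP
  port: 5555
  flavor: m1.small
  join_timeout: 600
  sleep_before: 5
  sleep_after: 5
  venv: /home/stack/shaker-venv
  shaker_region: regionOne
  external_host: 2.2.2.2
  scenarios:
    - name: l2-4-1
      enabled: true
      density: 4
      compute: 1
      placement: single_room
      progression: linear
      time: 60
      file: path/to/dense_l2.yaml   # hypothetical scenario file path
```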
To analyze results sent to Elasticsearch (you must have Elasticsearch enabled
and the IP of the Elasticsearch host provided in the browbeat configuration
file), you can use the following playbook to setup some prebuilt dashboards for
you:
::
$ ansible-playbook -i hosts install/kibana-visuals.yml
Alternatively you can create your own visualizations of specific shaker runs
using some simple searches such as:
::
shaker_uuid: 97092334-34e8-446c-87d6-6a0f361b9aa8 AND record.concurrency: 1 AND result.result_type: bandwidth
shaker_uuid: c918a263-3b0b-409b-8cf8-22dfaeeaf33e AND record.concurrency:1 AND record.test:Bi-Directional
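For scripted access, the same Lucene-style filters can be assembled in Python and handed to the Elasticsearch client (a sketch; the index name and client setup are assumptions):

```python
# Build a Lucene query string matching the searches shown above.
def shaker_query(shaker_uuid, concurrency, result_type):
    return ('shaker_uuid: {} AND record.concurrency: {} '
            'AND result.result_type: {}').format(shaker_uuid, concurrency,
                                                 result_type)

q = shaker_query('97092334-34e8-446c-87d6-6a0f361b9aa8', 1, 'bandwidth')
print(q)

# With a live cluster this could be passed to elasticsearch-py, e.g.:
#   from elasticsearch import Elasticsearch
#   es = Elasticsearch(['elk-host:9200'])      # hypothetical host
#   hits = es.search(index='browbeat-*', q=q)  # index name is an assumption
```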
Working with Multiple Clouds
============================


@@ -56,6 +56,26 @@ class Shaker(WorkloadBase.WorkloadBase):
             "Current number of Shaker tests failed: {}".format(
                 self.error_count))

+    def accommodation_to_dict(self, accommodation):
+        accommodation_dict = {}
+        for item in accommodation:
+            if isinstance(item, dict):
+                accommodation_dict.update(item)
+            else:
+                accommodation_dict[item] = True
+        return accommodation_dict
+
+    def accommodation_to_list(self, accommodation):
+        accommodation_list = []
+        for key, value in accommodation.iteritems():
+            if value is True:
+                accommodation_list.append(key)
+            else:
+                temp_dict = {}
+                temp_dict[key] = value
+                accommodation_list.append(temp_dict)
+        return accommodation_list
+
     def final_stats(self, total):
         self.logger.info(
             "Total Shaker scenarios enabled by user: {}".format(total))
@@ -120,19 +140,29 @@ class Shaker(WorkloadBase.WorkloadBase):
                 'shaker_test_info']['execution']:
             shaker_test_meta['shaker_test_info'][
                 'execution']['progression'] = "all"
-        var = data['scenarios'][scenario][
-            'deployment'].pop('accommodation')
+        accommodation = self.accommodation_to_dict(data['scenarios'][scenario][
+            'deployment'].pop('accommodation'))
         if 'deployment' not in shaker_test_meta:
             shaker_test_meta['deployment'] = {}
             shaker_test_meta['deployment']['accommodation'] = {}
+        if 'single' in accommodation:
             shaker_test_meta['deployment'][
-                'accommodation']['distribution'] = var[0]
+                'accommodation']['distribution'] = 'single'
+        elif 'pair' in accommodation:
             shaker_test_meta['deployment'][
-                'accommodation']['placement'] = var[1]
+                'accommodation']['distribution'] = 'pair'
+        if 'single_room' in accommodation:
+            shaker_test_meta['deployment'][
+                'accommodation']['placement'] = 'single_room'
+        elif 'double_room' in accommodation:
+            shaker_test_meta['deployment'][
+                'accommodation']['placement'] = 'double_room'
+        if 'density' in accommodation:
             shaker_test_meta['deployment']['accommodation'][
-                'density'] = var[2]['density']
+                'density'] = accommodation['density']
+        if 'compute_nodes' in accommodation:
             shaker_test_meta['deployment']['accommodation'][
-                'compute_nodes'] = var[3]['compute_nodes']
+                'compute_nodes'] = accommodation['compute_nodes']
         shaker_test_meta['deployment']['template'] = data[
             'scenarios'][scenario]['deployment']['template']
         # Iterating through each record to get result values
@@ -213,25 +243,32 @@ class Shaker(WorkloadBase.WorkloadBase):
         stream = open(fname, 'r')
         data = yaml.load(stream)
         stream.close()
-        default_placement = "double_room"
         default_density = 1
         default_compute = 1
         default_progression = "linear"
-        if "placement" in scenario:
-            data['deployment']['accommodation'][1] = scenario['placement']
+        accommodation = self.accommodation_to_dict(data['deployment']['accommodation'])
+        if 'placement' in scenario and any(k in accommodation for k in ('single_room',
+                                                                        'double_room')):
+            if 'single_room' in accommodation and scenario['placement'] == 'double_room':
+                accommodation.pop('single_room', None)
+                accommodation['double_room'] = True
+            elif 'double_room' in accommodation and scenario['placement'] == 'single_room':
+                accommodation['single_room'] = True
+                accommodation.pop('double_room', None)
         else:
-            data['deployment']['accommodation'][1] = default_placement
-        if "density" in scenario:
-            data['deployment']['accommodation'][
-                2]['density'] = scenario['density']
-        else:
-            data['deployment']['accommodation'][2]['density'] = default_density
-        if "compute" in scenario:
-            data['deployment']['accommodation'][3][
-                'compute_nodes'] = scenario['compute']
-        else:
-            data['deployment']['accommodation'][3][
-                'compute_nodes'] = default_compute
+            accommodation['double_room'] = True
+            accommodation.pop('single_room', None)
+        if 'density' in scenario and 'density' in accommodation:
+            accommodation['density'] = scenario['density']
+        elif 'density' in accommodation:
+            accommodation['density'] = default_density
+        if "compute" in scenario and 'compute_nodes' in accommodation:
+            accommodation['compute_nodes'] = scenario['compute']
+        elif 'compute_nodes' in accommodation:
+            accommodation['compute_nodes'] = default_compute
+        accommodation = self.accommodation_to_list(accommodation)
+        self.logger.debug("Using accommodation {}".format(accommodation))
+        data['deployment']['accommodation'] = accommodation
         if "progression" in scenario:
             if scenario['progression'] is None:
                 data['execution'].pop('progression', None)
@@ -245,6 +282,7 @@ class Shaker(WorkloadBase.WorkloadBase):
         else:
             for test in data['execution']['tests']:
                 test['time'] = default_time
+        self.logger.debug("Execution time of each test set to {}".format(test['time']))
         with open(fname, 'w') as yaml_file:
             yaml_file.write(yaml.dump(data, default_flow_style=False))
@@ -315,16 +353,29 @@ class Shaker(WorkloadBase.WorkloadBase):
         timeout = self.config['shaker']['join_timeout']
         self.logger.info(
             "The uuid for this shaker scenario is {}".format(shaker_uuid))
-        cmd_1 = (
+        cmd_env = (
             "source {}/bin/activate; source /home/stack/overcloudrc").format(venv)
-        cmd_2 = (
-            "shaker --server-endpoint {0}:{1} --flavor-name {2} --scenario {3}"
-            " --os-region-name {7} --agent-join-timeout {6}"
-            " --report {4}/{5}.html --output {4}/{5}.json"
-            " --book {4}/{5} --debug > {4}/{5}.log 2>&1").format(
-                server_endpoint, port_no, flavor, filename,
-                result_dir, test_name, timeout, shaker_region)
-        cmd = ("{}; {}").format(cmd_1, cmd_2)
+        if 'external' in filename and 'external_host' in self.config['shaker']:
+            external_host = self.config['shaker']['external_host']
+            cmd_shaker = (
+                'shaker --server-endpoint {0}:{1} --flavor-name {2} --scenario {3}'
+                ' --os-region-name {7} --agent-join-timeout {6}'
+                ' --report {4}/{5}.html --output {4}/{5}.json'
+                ' --book {4}/{5} --matrix "{{host: {8}}}" --debug'
+                ' > {4}/{5}.log 2>&1').format(server_endpoint,
+                                              port_no, flavor, filename, result_dir,
+                                              test_name, timeout, shaker_region,
+                                              external_host)
+        else:
+            cmd_shaker = (
+                'shaker --server-endpoint {0}:{1} --flavor-name {2} --scenario {3}'
+                ' --os-region-name {7} --agent-join-timeout {6}'
+                ' --report {4}/{5}.html --output {4}/{5}.json'
+                ' --book {4}/{5} --debug'
+                ' > {4}/{5}.log 2>&1').format(server_endpoint, port_no, flavor,
+                                              filename, result_dir, test_name,
+                                              timeout, shaker_region)
+        cmd = ("{}; {}").format(cmd_env, cmd_shaker)
         from_ts = int(time.time() * 1000)
         if 'sleep_before' in self.config['shaker']:
             time.sleep(self.config['shaker']['sleep_before'])


@@ -206,6 +206,10 @@ mapping:
       shaker_region:
         type: str
         required: true
+      external_host:
+        type: str
+        required: False
+        pattern: ^([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\-]{0,61}[a-zA-Z0-9])(\.([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\-]{0,61}[a-zA-Z0-9]))*$
       scenarios:
         type: seq
         sequence:
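The pattern accepts hostnames and dotted IPv4-style strings (alphanumeric labels up to 63 characters, hyphens allowed only in the middle). A quick check of what it admits, assuming Python's re with the pattern as written:

```python
import re

# The external_host validation pattern from the schema above.
pattern = re.compile(
    r'^([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\-]{0,61}[a-zA-Z0-9])'
    r'(\.([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\-]{0,61}[a-zA-Z0-9]))*$')

# Dotted IPv4-style addresses and FQDNs pass...
assert pattern.match('2.2.2.2')
assert pattern.match('shaker-server.example.com')
# ...but a label may not begin or end with a hyphen.
assert not pattern.match('-bad.example.com')
```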

utils/shaker-external.sh Executable file

@@ -0,0 +1,49 @@
#!/bin/bash
# Run as root to setup a shaker-server to run external network tests with
yum install -y epel-release
yum install -y wget iperf iperf3 gcc gcc-c++ python-devel screen zeromq zeromq-devel
wget ftp://ftp.netperf.org/netperf/netperf-2.7.0.tar.gz
tar xvzf netperf-2.7.0.tar.gz
pushd netperf-2.7.0
./configure --enable-demo=yes
make
make install
popd
easy_install pip
pip install pbr flent pyshaker-agent
cat<<'EOF' >> /etc/systemd/system/iperf.service
[Unit]
Description=iperf Service
After=network.target
[Service]
Type=simple
ExecStart=/usr/bin/iperf -s
[Install]
WantedBy=multi-user.target
EOF
cat<<'EOF' >> /etc/systemd/system/iperf3.service
[Unit]
Description=iperf3 Service
After=network.target
[Service]
Type=simple
ExecStart=/usr/bin/iperf3 -s
[Install]
WantedBy=multi-user.target
EOF
cat<<'EOF' >> /etc/systemd/system/netperf.service
[Unit]
Description="Netperf netserver daemon"
After=network.target
[Service]
ExecStart=/usr/local/bin/netserver -D
[Install]
WantedBy=multi-user.target
EOF
systemctl start iperf
systemctl enable iperf
systemctl start iperf3
systemctl enable iperf3
systemctl start netperf
systemctl enable netperf