Elastic import in rally-uperf

Fix the Elastic import in the rally-uperf plugin.

Change-Id: Idf8a942b9a3899d69b7dfbed491d55cb47b5f91c
Joe Talerico 2017-10-02 12:59:07 -04:00
parent 4f78248b3e
commit 3f20042668
10 changed files with 7 additions and 436 deletions

View File

@@ -7,7 +7,7 @@
command: virtualenv {{ rally_venv }} creates={{ rally_venv }}
- name: Rally Add browbeat to Python path
shell: echo 'export PYTHONPATH=$PYTHONPATH:{{ browbeat_path }}/lib' >> {{ rally_venv }}/bin/activate
shell: echo 'export PYTHONPATH=$PYTHONPATH:{{ browbeat_path }}' >> {{ rally_venv }}/bin/activate
- name: Setup rally-venv CA certificate path
lineinfile:

View File

@@ -1,6 +1,6 @@
#!/bin/bash
sudo echo "nameserver {{ dns_server }}" > /etc/resolv.conf
sudo wget -O /etc/yum.repos.d/pbench.repo "{{ pbench_internal_url }}"
sudo curl -o /etc/yum.repos.d/pbench.repo "{{ pbench_internal_url }}"
sudo cat << EOF >> /etc/yum.repos.d/pbench.repo
# Template file to be used with ansible playbook "pbench-repo.yml"
@@ -25,5 +25,6 @@ sudo yum install -y pbench-sysstat
sudo yum install -y pbench-uperf
sudo sed -i 's/disable_root: 1/disable_root: 0/g' /etc/cloud/cloud.cfg
cat /etc/cloud/cloud.cfg | grep disable_root
sudo sed -i 's/^no-port.*;sleep 10" //g' /root/.ssh/authorized_keys
echo "Browbeat workload installed"

View File

@@ -1,102 +0,0 @@
Browbeat Rally Plugin: nova-create-pbench-uperf
================================================
Warning:
--------
Please review the "To make this work" section. Skipping any steps will result in failure.
Note:
-----
We do not support more than a single concurrency or more than a single time.
YML Config:
-----------
This section describes the args in the nova-create-pbench-uperf.yml file.
.. code-block:: yaml

   image:
     name: 'pbench-image'
   flavor:
     name: 'm1.small'
   zones:
     server: 'nova:hypervisor-1'
     client: 'nova:hypervisor-2'
   external:
     name: "public"
   user: "root"
   password: "100yard-"
   test_types: "stream"
   protocols: "tcp"
   samples: 1
   test_name: "pbench-uperf-test"
**Starting from the top:**
**`image: name:`** The image that you want Rally to launch in the cloud; this guest should have pbench pre-installed.
**`flavor: name:`** The size of the guest you want Rally to launch; for simplicity, a small flavor such as `m1.small` is enough.
**`zones: server: client:`** The hypervisors the server and client guests will be pinned to. Both can be the same hypervisor.
**`external: name:`** Name of the public network, which will be attached to the router that Rally creates.
**`user:`** The user to log in to the remote instances.
**`password:`** Not strictly necessary; the password for the user above.
**`test_types:`** The tests for pbench-uperf to run (stream|rr).
**`protocols:`** Which protocols to run through (tcp|udp).
**`test_name:`** A name for the test.
Before you begin:
-----------------
1. Create a pbench-image that has PBench preinstalled into the guest.
1a. Use http://www.x86.trystack.org/dashboard/static/rook/centos-noreqtty.qcow2 image
1b. You can use: helper-script/pbench-user.file
2a. This will not set up the image for root access
2. Rally cannot use a snapshot to launch the guest, so export the image you created above and re-import it (see the sketch after this list).
3. Configure the nova-create-pbench-uperf.yml with the right params.
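A minimal sketch of step 2, assuming the pbench guest was saved as a Glance snapshot named `pbench-snapshot`; the snapshot name and the local file name are placeholders, not something Browbeat provides.

.. code-block:: bash

   # Illustrative only: download the snapshot, then re-import it as a regular image.
   openstack image save --file pbench-image.qcow2 pbench-snapshot
   openstack image create --disk-format qcow2 --container-format bare \
       --file pbench-image.qcow2 pbench-image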
Rally Standup:
--------------
Rally will build the following (a roughly equivalent CLI sketch follows this list):
1. Create Router
2. Create Network/Subnet
3. Set Router gateway to provided Public network
4. Attach the newly created network/subnet to the newly created router.
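For orientation only, roughly equivalent OpenStack CLI calls (Rally performs these steps through its Python clients; the resource names below are placeholders, and "public" is the external network from the config):

.. code-block:: bash

   # Placeholder names; shown only to illustrate what Rally stands up.
   openstack router create rally-router
   openstack network create rally-net
   openstack subnet create --network rally-net --subnet-range 10.1.0.0/24 rally-subnet
   openstack router set --external-gateway public rally-router
   openstack router add subnet rally-router rally-subnet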
Functions:
----------
1. Launch a PBench jump host and assign it a floating IP so Rally can reach it.
2. Launch a pair of guests
3. Run pbench-uperf between the pair of guests (see the sketch after this list)
4. Send results
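For reference, a sketch of the commands the plugin assembles on the jump host (the guest IP addresses and the test name are placeholders; the paths and flags mirror the plugin code further down in this diff):

.. code-block:: bash

   # Register the pbench tool set on each guest, run uperf, then ship the results.
   /opt/pbench-agent/util-scripts/pbench-register-tool-set --remote=10.1.0.11
   /opt/pbench-agent/util-scripts/pbench-register-tool-set --remote=10.1.0.12
   /opt/pbench-agent/bench-scripts/pbench-uperf \
       --clients=10.1.0.12 --servers=10.1.0.11 --samples=1 \
       --test-types=stream --protocols=tcp --config=pbench-uperf-test
   /opt/pbench-agent/util-scripts/pbench-move-results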
What this sets up:
------------------
.. image:: nova-create-pbench-uperf.png
What do you get out of this?
----------------------------
Here is example output from this work: https://gist.github.com/jtaleric/36b7fbbe93dfcb8f00cced221b366bb0
To make this work:
------------------
- PBench is only _verified_ to work with root, so the user MUST be root; sudo will not work.
root is _ONLY_ needed within the guests that are launched in the cloud
- Must update on the controller(s) `/etc/neutron/policy.json` ::
"create_router:external_gateway_info:enable_snat": "rule:regular_user",
- Must update on the controller(s) `/etc/nova/policy.json` ::
"os_compute_api:servers:create:forced_host": "",
* As of OpenStack Newton, Nova ships with its policy defaults in code, so `policy.json` will be blank; simply add the rule above, as sketched below.
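A hedged sketch of applying the Nova rule on a controller whose `policy.json` is blank; the service name and restart step vary by distribution and deployment, so treat this as illustrative:

.. code-block:: bash

   # Assumes an empty /etc/nova/policy.json; merge by hand if other rules already exist.
   echo '{ "os_compute_api:servers:create:forced_host": "" }' | sudo tee /etc/nova/policy.json
   # Service name is deployment-specific (RDO-style controller shown here).
   sudo systemctl restart openstack-nova-api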

View File

@@ -1,6 +0,0 @@
Helper Scripts
==============
The nova user-file is intentionally incomplete: the user must provide their internal PBench repo, which will have their PBench creds.
Also, the DNS entry might need to be updated if the user does not want the Google DNS server injected into their guest.
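One possible way to fill in the repo placeholder, assuming you know your internal PBench repo URL (the URL below is illustrative, not a real endpoint):

.. code-block:: bash

   # Point the [pbench] baseurl in the user-data file at your internal repo.
   sed -i 's|^baseurl=http://my-pbench.com/repo/7Server/|baseurl=http://pbench.internal.example.com/repo/7Server/|' pbench-user.file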

View File

@@ -1,32 +0,0 @@
#!/bin/bash
sudo echo "nameserver 8.8.8.8" > /etc/resolv.conf
sudo cat << EOF >> /etc/yum.repos.d/pbench.repo
# Template file to be used with ansible playbook "pbench-repo.yml"
###########################################################################
[pbench]
name=Pbench 7Server - x86_64
baseurl=http://my-pbench.com/repo/7Server/
arch=x86_64
enabled=1
gpgcheck=0
skip_if_unavailable=1
# External COPR repo
[copr-pbench]
name=Copr repo for pbench owned by ndokos
baseurl=https://copr-be.cloud.fedoraproject.org/results/ndokos/pbench/epel-7-x86_64/
skip_if_unavailable=True
gpgcheck=1
gpgkey=https://copr-be.cloud.fedoraproject.org/results/ndokos/pbench/pubkey.gpg
enabled=1
enabled_metadata=1
skip_if_unavailable=1
EOF
cat /etc/yum.repos.d/pbench.repo
sudo yum clean all
sudo yum install -y pbench-agent-internal
sudo yum install -y pbench-sysstat
sudo yum install -y pbench-uperf
sudo sed -i 's/disable_root: 1/disable_root: 0/g' /etc/cloud/cloud.cfg
cat /etc/cloud/cloud.cfg | grep disable_root

Binary file not shown.


View File

@@ -1,232 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from rally.task import scenario
from rally.plugins.openstack.scenarios.vm import utils as vm_utils
from rally.plugins.openstack.scenarios.neutron import utils as neutron_utils
from rally.task import types
from rally.task import validation
from rally.common import sshutils
import time
import StringIO
import csv
import json
import datetime
import logging
from Elastic import Elastic
LOG = logging.getLogger(__name__)
class BrowbeatPlugin(neutron_utils.NeutronScenario,
vm_utils.VMScenario,
scenario.Scenario):
@types.convert(image={"type": "glance_image"},
flavor={"type": "nova_flavor"})
@validation.required_openstack(users=True)
@scenario.configure(context={"cleanup": ["nova", "neutron", "cinder"],
"keypair": {}, "allow_ssh": {}})
def nova_create_pbench_uperf(
self,
image,
flavor,
zones,
user,
test_types,
protocols,
samples,
external,
test_name,
send_results=True,
num_pairs=1,
password="",
message_sizes=None,
instances=None,
elastic_host=None,
elastic_port=None,
cloudname=None,
**kwargs):
pbench_path = "/opt/pbench-agent"
pbench_results = "/var/lib/pbench-agent"
# Create env
router = self._create_router({}, external_gw=external)
network = self._create_network({})
subnet = self._create_subnet(network, {})
kwargs["nics"] = [{'net-id': network['network']['id']}]
self._add_interface_router(subnet['subnet'], router['router'])
# Launch pbench-jump-host
jh, jip = self._boot_server_with_fip(image,
flavor,
use_floating_ip=True,
floating_network=external['name'],
key_name=self.context["user"]["keypair"]["name"],
**kwargs)
servers = []
clients = []
# Launch Guests
if num_pairs is 1:
server = self._boot_server(
image,
flavor,
key_name=self.context["user"]["keypair"]["name"],
availability_zone=zones['server'],
**kwargs)
client = self._boot_server(
image,
flavor,
key_name=self.context["user"]["keypair"]["name"],
availability_zone=zones['client'],
**kwargs)
# IP Addresses
servers.append(
str(server.addresses[network['network']['name']][0]["addr"]))
clients.append(
str(client.addresses[network['network']['name']][0]["addr"]))
else:
for i in range(num_pairs):
server = self._boot_server(
image,
flavor,
key_name=self.context["user"]["keypair"]["name"],
availability_zone=zones['server'],
**kwargs)
client = self._boot_server(
image,
flavor,
key_name=self.context["user"]["keypair"]["name"],
availability_zone=zones['client'],
**kwargs)
# IP Addresses
servers.append(
str(server.addresses[network['network']['name']][0]["addr"]))
clients.append(
str(client.addresses[network['network']['name']][0]["addr"]))
# Wait for ping
self._wait_for_ping(jip['ip'])
# Open SSH Connection
jump_ssh = sshutils.SSH(user, jip['ip'], 22, self.context[
"user"]["keypair"]["private"], password)
# Check for connectivity
self._wait_for_ssh(jump_ssh)
# Write id_rsa to get to guests.
self._run_command_over_ssh(jump_ssh, {'remote_path': "rm -rf ~/.ssh"})
self._run_command_over_ssh(jump_ssh, {'remote_path': "mkdir ~/.ssh"})
jump_ssh.run(
"cat > ~/.ssh/id_rsa",
stdin=self.context["user"]["keypair"]["private"])
jump_ssh.execute("chmod 0600 ~/.ssh/id_rsa")
# Check status of guest
ready = False
retry = 10
while (not ready):
for sip in servers + clients:
cmd = "ssh -o StrictHostKeyChecking=no {}@{} /bin/true".format(
user, sip)
s1_exitcode, s1_stdout, s1_stderr = jump_ssh.execute(cmd)
if retry < 1:
LOG.error(
"Error : Issue reaching {} the guests through the Jump host".format(sip))
return 1
if s1_exitcode is 0:
LOG.info("Server: {} ready".format(sip))
ready = True
else:
LOG.info("Error reaching server: {} error {}".format(sip,s1_stderr))
retry = retry - 1
time.sleep(10)
# Register pbench across FIP
for sip in servers + clients:
cmd = "{}/util-scripts/pbench-register-tool-set --remote={}".format(
pbench_path, sip)
jump_ssh.execute(cmd)
# Quick single test
# debug = "--message-sizes=1024 --instances=1"
debug = ""
# Start uperf against private address
uperf = "{}/bench-scripts/pbench-uperf --clients={} --servers={} --samples={} {}".format(
pbench_path, ','.join(clients), ','.join(servers), samples, debug)
uperf += " --test-types={} --protocols={} --config={}".format(
test_types,
protocols,
test_name)
if message_sizes is not None :
uperf += " --message-sizes={}".format(message_sizes)
if instances is not None:
uperf += " --instances={}".format(instances)
# Execute pbench-uperf
# execute returns, exitcode,stdout,stderr
LOG.info("Starting Rally - PBench UPerf")
uperf_exitcode, stdout_uperf, stderr = jump_ssh.execute(uperf)
# Prepare results
cmd = "cat {}/uperf_{}*/result.csv".format(pbench_results, test_name)
exitcode, stdout, stderr = jump_ssh.execute(cmd)
if send_results :
if uperf_exitcode is not 1:
cmd = "cat {}/uperf_{}*/result.json".format(
pbench_results, test_name)
LOG.info("Running command : {}".format(cmd))
exitcode, stdout_json, stderr = jump_ssh.execute(cmd)
LOG.info("Result: {}".format(stderr))
es_ts = datetime.datetime.utcnow()
config = {
'elasticsearch': {
'host': elastic_host, 'port': elastic_port}, 'browbeat': {
'cloud_name': cloudname, 'timestamp': es_ts}}
elastic = Elastic(config, 'pbench')
json_result = StringIO.StringIO(stdout_json)
json_data = json.load(json_result)
for iteration in json_data:
elastic.index_result(iteration,test_name,'results/')
else:
LOG.error("Error with PBench Results")
# Parse results
result = StringIO.StringIO('\n'.join(stdout.split('\n')[1:]))
creader = csv.reader(result)
report = []
for row in creader:
if len(row) >= 1:
report.append(["aggregate.{}".format(row[1]), float(row[2])])
report.append(["single.{}".format(row[1]), float(row[3])])
if len(report) > 0:
self.add_output(
additive={"title": "PBench UPerf Stats",
"description": "PBench UPerf Scenario",
"chart_plugin": "StatsTable",
"axis_label": "Gbps",
"label": "Gbps",
"data": report})
cmd = "{}/util-scripts/pbench-move-results".format(pbench_path)
self._run_command_over_ssh(jump_ssh, {"remote_path": cmd})

View File

@@ -1,57 +0,0 @@
{% set sla_max_avg_duration = sla_max_avg_duration or 60 %}
{% set sla_max_failure = sla_max_failure or 0 %}
{% set sla_max_seconds = sla_max_seconds or 60 %}
{% set times = times or 1 %}
{% set concurrency = concurrency or 1 %}
{% set num_pairs = num_pairs or 1 %}
---
BrowbeatPlugin.nova_create_pbench_uperf:
  -
    args:
      image:
        name: '{{image_name}}'
      flavor:
        name: '{{flavor_name}}'
      zones:
        server: '{{hypervisor_server}}'
        client: '{{hypervisor_client}}'
      external:
        name: '{{external_network}}'
      user: '{{user}}'
      password: '{{password}}'
      num_pairs: {{num_pairs}}
      test_types: '{{test_types}}'
      protocols: '{{protocols}}'
      samples: '{{samples}}'
      message_sizes: '{{message_sizes}}'
      instances: '{{instances}}'
      test_name: '{{test_name}}'
      send_results: {{send_results}}
      cloudname: '{{cloudname}}'
      elastic_host: '{{elastic_host}}'
      elastic_port: '{{elastic_port}}'
    runner:
      concurrency: 1
      times: 1
      type: "constant"
    context:
      users:
        tenants: 1
        users_per_tenant: 1
      quotas:
        neutron:
          network: -1
          port: -1
          router: -1
          subnet: -1
        nova:
          instances: -1
          cores: -1
          ram: -1
    sla:
      max_avg_duration: {{sla_max_avg_duration}}
      max_seconds_per_iteration: {{sla_max_seconds}}
      failure_rate:
        max: {{sla_max_failure}}

View File

@@ -16,13 +16,13 @@ from rally.plugins.openstack.scenarios.neutron import utils as neutron_utils
from rally.task import types
from rally.task import validation
from rally.common import sshutils
import browbeat.elastic
import time
import StringIO
import csv
import json
import datetime
import logging
from Elastic import Elastic
LOG = logging.getLogger(__name__)
@@ -252,7 +252,7 @@ class BrowbeatPlugin(neutron_utils.NeutronScenario,
'cloud_name': cloudname,
'timestamp': es_ts,
'num_pairs': num_pairs}}
elastic = Elastic(config, 'pbench')
elastic = browbeat.elastic.Elastic(config, 'pbench')
json_result = StringIO.StringIO(stdout_json)
json_data = json.load(json_result)
for iteration in json_data:

View File

@@ -7,8 +7,8 @@
{% set password = password or 'None' %}
{% set protocols = protocols or 'tcp' %}
{% set message_sizes = message_sizes or '64,1024,16384' %}
{% set hypervisor_server = hypervsior_server or 'None' %}
{% set hypervisor_client = hypervsior_client or 'None' %}
{% set hypervisor_server = hypervisor_server or 'None' %}
{% set hypervisor_client = hypervisor_client or 'None' %}
---
BrowbeatPlugin.pbench_uperf:
@@ -60,4 +60,3 @@ BrowbeatPlugin.pbench_uperf:
max_seconds_per_iteration: {{sla_max_seconds}}
failure_rate:
max: {{sla_max_failure}}