Adding PerfKitBenchmarker to Browbeat.

+ Adding more benchmarks
+ Grafana snapshots
+ Benchmarks in list for ordering now.
+ Adding connmon

Change-Id: I9fa4f5d31f9575ad7636218ae6091c8e11343410

parent 912616634a
commit c7a8420bf1

README.md | 23
@@ -18,6 +18,8 @@ Table of Contents
* [(Optional) Install connmon:](#optional-install-connmon-1)
* [Run performance checks](#run-performance-checks-1)
* [Run performance stress tests through browbeat:](#run-performance-stress-tests-through-browbeat)
* [Running PerfKitBenchmarker](#running-perfkitbenchmarker)
* [Contributing](#contributing)

# Browbeat
This started as a project to help determine the number of database connections a given OpenStack deployment uses via stress tests. It has since grown into a set of Ansible playbooks to help check deployments for known issues, install tools and change parameters of the overcloud.
@@ -55,8 +57,8 @@ $ cd browbeat/ansible
$ ./gen_hostfile.sh <undercloud-ip> ~/.ssh/config
$ vi install/group_vars/all # Make sure to edit the dns_server to the correct ip address
$ ansible-playbook -i hosts install/browbeat.yml
$ vi install/group_vars/all # Edit shaker subnet/start/end/gw settings
$ ansible-playbook -i hosts install/shaker_network.yml
$ vi install/group_vars/all # Edit browbeat subnet/start/end/gw settings
$ ansible-playbook -i hosts install/browbeat_network.yml
$ ansible-playbook -i hosts install/shaker_build.yml
```
@@ -100,8 +102,8 @@ $ ssh undercloud-root
[stack@ospd ansible]$ sudo pip install ansible
[stack@ospd ansible]$ vi install/group_vars/all # Make sure to edit the dns_server to the correct ip address
[stack@ospd ansible]$ ansible-playbook -i hosts install/browbeat.yml
[stack@ospd ansible]$ vi install/group_vars/all # Edit shaker subnet/start/end/gw settings
[stack@ospd ansible]$ ansible-playbook -i hosts install/shaker_network.yml
[stack@ospd ansible]$ vi install/group_vars/all # Edit browbeat public/private subnet/start/end/gw settings
[stack@ospd ansible]$ ansible-playbook -i hosts install/browbeat_network.yml
[stack@ospd ansible]$ ansible-playbook -i hosts install/shaker_build.yml
```
@@ -129,7 +131,18 @@ Your Overcloud check output is located in check/bug_report.log
(browbeat-venv)[stack@ospd browbeat]$ ./browbeat.py -w
```

## Contributing
# Running PerfKitBenchmarker

Work is ongoing to use PerfKitBenchmarker as a workload provider for browbeat. Many benchmarks work out of the box with browbeat. You must ensure that your network is set up correctly to run those benchmarks, and you will need to configure the browbeat public/private network settings in ansible/install/group_vars/all. Currently tested benchmarks include: bonnie++, cluster_boot, copy_throughput (cp, dd, scp), fio, iperf, netperf, mesh_network, mongodb_ycsb, ping, and sysbench_oltp.
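The browbeat public/private network settings referred to here are the variables this change adds to ansible/install/group_vars/all; the 1.1.1.1 entries are placeholders you replace with details of your own external network, for example:
```
# Public network used by PerfKit and Shaker (replace the placeholder values)
browbeat_pub_net_name: browbeat_public
browbeat_pub_subnet: 1.1.1.1/22
browbeat_pub_pool_start: 1.1.1.1
browbeat_pub_pool_end: 1.1.1.1
browbeat_pub_pool_gw: 1.1.1.1
# Private network; the shipped defaults usually work as-is
browbeat_pri_net_name: browbeat_private
browbeat_pri_subnet: 172.16.10.0/24
```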
To run browbeat's PerfKit benchmarks, start by reviewing the tested benchmark configurations in conf/browbeat-perfkit-complete.yaml. Add the benchmarks you wish to run to your own browbeat config yaml, or enable/disable them in the default config file (browbeat-config.yaml). Many flags are exposed in the configuration files to tune how those benchmarks run; additional flags are exposed in the source code of PerfKitBenchmarker, available at: https://github.com/GoogleCloudPlatform/PerfKitBenchmarker

Example running only PerfKitBenchmarker benchmarks with browbeat from browbeat-config.yaml:
```
(browbeat-venv)[stack@ospd browbeat]$ ./browbeat.py -w perfkit -s browbeat-config.yaml
```
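For reference, a minimal sketch of the perfkit section of browbeat-config.yaml with the shipped fio benchmark switched on (values taken from this change; browbeat passes every benchmark key except name and enabled to pkb.py as a --key=value flag):
```
perfkit:
  enabled: true
  venv: /home/stack/perfkit-venv/bin/activate
  default:
    image: centos7
    machine_type: m1.small
    os_type: rhel
    openstack_image_username: centos
    openstack_public_network: browbeat_public
    openstack_private_network: browbeat_private
  benchmarks:
    - name: fio-centos-m1-small
      enabled: true          # set to true to run this benchmark
      benchmarks: fio
      data_disk_size: 4      # forwarded to pkb.py as --data_disk_size=4
```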

# Contributing
Contributions are most welcome! Pull requests need to be submitted using the Gerrit code review system. First, log in to GerritHub using your GitHub credentials and authorize GerritHub to access your account. Once you are logged in, click your user name in the top-right corner, go to 'Settings', and under 'SSH Public Keys' paste your public key. You can view your public key using:
```
$ cat ~/.ssh/id_{r or d}sa.pub
```

ansible/install/browbeat_network.yml | 11 (new file)
@@ -0,0 +1,11 @@
---
#
# Playbook for browbeat-network
#
# Creates public and private network for use with Perfkit and Shaker
#

- hosts: undercloud
  remote_user: "{{ local_remote_user }}"
  roles:
    - browbeat-network

@@ -24,6 +24,9 @@ rally_venv: /home/stack/rally-venv
# The default Shaker venv
shaker_venv: /home/stack/shaker-venv

# The default PerfKit venv:
perfkit_venv: /home/stack/perfkit-venv

# Guest images for the Overcloud
centos_image_url: http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2
cirros_image_url: http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
@@ -67,12 +70,19 @@ shaker_port: 5555
# Should choose m1.small or larger
shaker_flavor: m1.small

# Automates creating a public network that shaker can build its image off of
shaker_public_subnet: 1.1.1.1/22
shaker_pool_start: 1.1.1.1
shaker_pool_end: 1.1.1.1
shaker_pool_gw: 1.1.1.1
# Automates creating a public network that perfkit and shaker utilize
browbeat_pub_net_name: browbeat_public
browbeat_pub_subnet: 1.1.1.1/22
browbeat_pub_pool_start: 1.1.1.1
browbeat_pub_pool_end: 1.1.1.1
browbeat_pub_pool_gw: 1.1.1.1

# Defaults here should not require changing
shaker_network_name: shaker_public
shaker_router_name: shaker_router
# Browbeat private subnet
browbeat_pri_net_name: browbeat_private
browbeat_pri_subnet: 172.16.10.0/24
browbeat_pri_pool_start: 172.16.10.2
browbeat_pri_pool_end: 172.16.10.100
browbeat_pri_pool_gw: 172.16.10.1
browbeat_pri_pool_dns: 8.8.8.8

browbeat_router_name: browbeat_router

ansible/install/roles/browbeat-network/tasks/main.yml | 29 (new file)

@@ -0,0 +1,29 @@
---
#
# Set up network for browbeat
#

- name: Create browbeat public network
  shell: ". {{overcloudrc}}; neutron net-create {{browbeat_pub_net_name}} --router:external | grep -E ' id ' | awk '{print $4}'"
  register: public_net_id

- name: Create browbeat public subnet
  shell: ". {{overcloudrc}}; neutron subnet-create {{public_net_id.stdout}} {{browbeat_pub_subnet}} --allocation-pool start={{browbeat_pub_pool_start}},end={{browbeat_pub_pool_end}} --gateway={{browbeat_pub_pool_gw}} --disable-dhcp"

- name: Create browbeat private network
  shell: ". {{overcloudrc}}; neutron net-create {{browbeat_pri_net_name}} | grep -E ' id ' | awk '{print $4}'"
  register: private_net_id

- name: Create browbeat private subnet
  shell: ". {{overcloudrc}}; neutron subnet-create {{private_net_id.stdout}} {{browbeat_pri_subnet}} --allocation-pool start={{browbeat_pri_pool_start}},end={{browbeat_pri_pool_end}} --gateway={{browbeat_pri_pool_gw}} --dns-nameserver {{browbeat_pri_pool_dns}} | grep -E ' id ' | awk '{print $4}'"
  register: private_subnet_id

- name: Create browbeat router
  shell: ". {{overcloudrc}}; neutron router-create {{browbeat_router_name}} | grep -E ' id ' | awk '{print $4}'"
  register: router_id

- name: Set browbeat router gateway
  shell: ". {{overcloudrc}}; neutron router-gateway-set {{router_id.stdout}} {{public_net_id.stdout}}"

- name: Add browbeat router interface to browbeat private network
  shell: ". {{overcloudrc}}; neutron router-interface-add {{router_id.stdout}} {{private_subnet_id.stdout}}"

@@ -25,6 +25,9 @@
- name: Create browbeat virtualenv
  command: virtualenv "{{ browbeat_venv }}" creates="{{ browbeat_venv }}"

- name: Create perfkit virtualenv
  command: virtualenv "{{ perfkit_venv }}" creates="{{ perfkit_venv }}"

- name: Create rally virtualenv
  command: virtualenv "{{ rally_venv }}" creates="{{ rally_venv }}"

@@ -47,6 +50,15 @@
  pip: requirements="{{ browbeat_path }}/requirements.txt" virtualenv="{{ browbeat_venv }}"
  become: true

- name: Clone PerfKitBenchmarker on undercloud
  git: repo=https://github.com/GoogleCloudPlatform/PerfKitBenchmarker.git dest="{{ perfkit_venv }}/PerfKitBenchmarker"

- name: Install PerfKitBenchmarker requirements into perfkit-venv
  pip: requirements="{{ perfkit_venv }}/PerfKitBenchmarker/requirements.txt" virtualenv="{{ perfkit_venv }}"

- name: Install PerfKitBenchmarker Openstack requirements into perfkit-venv
  pip: requirements="{{ perfkit_venv }}/PerfKitBenchmarker/requirements-openstack.txt" virtualenv="{{ perfkit_venv }}"

- name: Install rally into rally-venv
  pip: name=rally virtualenv="{{ rally_venv }}"
  become: true
@@ -1,18 +0,0 @@
---
#
# Setup up network for shaker
#

- name: Create shaker public network
  shell: ". {{ overcloudrc }}; neutron net-create {{shaker_network_name}} --router:external | grep -E ' id ' | awk '{print $4}'"
  register: public_net_id

- name: Create shaker public subnet
  shell: ". {{ overcloudrc }}; neutron subnet-create {{public_net_id.stdout}} {{shaker_public_subnet}} --allocation-pool start={{shaker_pool_start}},end={{shaker_pool_end}} --gateway={{shaker_pool_gw}} --disable-dhcp | grep -E ' id ' | awk '{print $4}'"
  register: subnet_id

- name: Create shaker router
  shell: ". {{ overcloudrc }}; neutron router-create {{shaker_router_name}} | grep -E ' id ' | awk '{print $4}'"

- name: Set shaker router gateway
  shell: ". {{ overcloudrc }}; neutron router-gateway-set {{shaker_router_name}} {{shaker_network_name}}"

@@ -1,9 +0,0 @@
---
#
# Playbook for shaker-network
#

- hosts: undercloud
  remote_user: "{{ local_remote_user }}"
  roles:
    - shaker-network

@@ -28,6 +28,23 @@ grafana:
    enabled: true
    grafana_api_key: (Your API Key Here)
    snapshot_compute: false
perfkit:
  enabled: true
  sleep_before: 0
  sleep_after: 0
  venv: /home/stack/perfkit-venv/bin/activate
  default:
    image: centos7
    machine_type: m1.small
    os_type: rhel
    openstack_image_username: centos
    openstack_public_network: browbeat_public
    openstack_private_network: browbeat_private
  benchmarks:
    - name: fio-centos-m1-small
      enabled: false
      benchmarks: fio
      data_disk_size: 4
rally:
  enabled: true
  sleep_before: 5
@@ -28,6 +28,23 @@ grafana:
    enabled: true
    grafana_api_key: (Your API Key Here)
    snapshot_compute: false
perfkit:
  enabled: true
  sleep_before: 0
  sleep_after: 0
  venv: /home/stack/perfkit-venv/bin/activate
  default:
    image: centos7
    machine_type: m1.small
    os_type: rhel
    openstack_image_username: centos
    openstack_public_network: browbeat_public
    openstack_private_network: browbeat_private
  benchmarks:
    - name: fio-centos-m1-small
      enabled: false
      benchmarks: fio
      data_disk_size: 4
shaker:
  enabled: true
  server: (Address of machine running browbeat)

@@ -4,6 +4,7 @@ import yaml
import logging
import sys
sys.path.append('lib/')
from PerfKit import PerfKit
from Tools import *
from Rally import *
from Shaker import *

@@ -33,7 +34,7 @@ except ImportError :

# Browbeat specific options
_install_opts = ['connmon', 'browbeat', 'shaker-build']
_workload_opts = ['rally', 'shaker']
_workload_opts = ['perfkit', 'rally', 'shaker']
_config_file = 'browbeat-config.yaml'
_config = None

@@ -59,7 +60,10 @@ def _run_playbook(path, hosts, only_tag=None, skip_tag=None):
    return play.run()

def _run_workload_provider(provider, ansible_hosts=None):
    if provider == "rally":
    if provider == "perfkit":
        perfkit = PerfKit(_config)
        perfkit.start_workloads()
    elif provider == "rally":
        rally = Rally(_config, ansible_hosts)
        rally.start_workloads()
    elif provider == "shaker":

conf/browbeat-perfkit-complete.yaml | 98 (new file)

@@ -0,0 +1,98 @@
# Complete set of PerfKit Benchmarks run from Browbeat
browbeat:
  results: results/
  sudo: true
  connmon: false
  rerun: 3
ansible:
  hosts: ansible/hosts
  install:
    connmon: ansible/install/connmon.yml
    browbeat: ansible/install/browbeat.yml
  check: ansible/check/site.yml
  adjust:
    keystone_token: ansible/browbeat/adjustment-keystone-token.yml
    neutron_l3: ansible/browbeat/adjustment-l3.yml
    nova_db: ansible/browbeat/adjustment-db.yml
    workers: ansible/browbeat/adjustment-workers.yml
  grafana_snapshot: ansible/browbeat/snapshot-general-performance-dashboard.yml
  shaker_build: ansible/install/shaker_build.yml
grafana:
  enabled: false
  cloud_name: openstack
  grafana_ip: 1.1.1.1
  grafana_port: 3000
  dashboards:
    - openstack-general-system-performance
  snapshot:
    enabled: false
    grafana_api_key: (Your API Key Here)
    snapshot_compute: false
perfkit:
  enabled: true
  sleep_before: 0
  sleep_after: 0
  venv: /home/stack/perfkit-venv/bin/activate
  default:
    image: centos7
    machine_type: m1.small
    os_type: rhel
    openstack_image_username: centos
    openstack_public_network: browbeat_public
    openstack_private_network: browbeat_private
  benchmarks:
    - name: bonnie-centos-m1-small
      enabled: false
      benchmarks: bonnie++
      data_disk_size: 20
      image: centos7
      machine_type: m1.small
      os_type: rhel
      openstack_image_username: centos
      openstack_public_network: browbeat_public
      openstack_private_network: browbeat_private
    - name: cluster_boot-centos-m1-small
      enabled: false
      benchmarks: cluster_boot
      config_override: "cluster_boot.vm_groups.default.vm_count=4"
    - name: copy_throughput-cp-centos-m1-small
      enabled: false
      benchmarks: copy_throughput
      copy_benchmark_mode: cp
      data_disk_size: 20
    - name: copy_throughput-dd-centos-m1-small
      enabled: false
      benchmarks: copy_throughput
      copy_benchmark_mode: dd
      data_disk_size: 20
    - name: copy_throughput-scp-centos-m1-small
      enabled: false
      benchmarks: copy_throughput
      copy_benchmark_mode: scp
      data_disk_size: 20
    - name: fio-centos-m1-small
      enabled: false
      benchmarks: fio
      data_disk_size: 4
    - name: iperf-centos-m1-small
      enabled: false
      benchmarks: iperf
    - name: mesh_network-centos-m1-small
      enabled: false
      benchmarks: mesh_network
      num_vms: 3
    # selinux affects this benchmark:
    - name: mongodb_ycsb-centos-m1-small
      enabled: false
      benchmarks: mongodb_ycsb
      data_disk_size: 20
      num_striped_disks: 1
      ycsb_client_vms: 1
      mongodb_writeconcern: acknowledged
    - name: ping-centos-m1-small
      enabled: false
      benchmarks: ping
    - name: sysbench_oltp-centos-m1-small
      enabled: false
      benchmarks: sysbench_oltp
      data_disk_size: 20

lib/PerfKit.py | 145 (new file)

@@ -0,0 +1,145 @@
from Connmon import Connmon
from Tools import Tools
import glob
import logging
import datetime
import os
import shutil
import subprocess
import time


class PerfKit:
    def __init__(self, config):
        self.logger = logging.getLogger('browbeat.PerfKit')
        self.config = config
        self.error_count = 0
        self.tools = Tools(self.config)
        self.connmon = Connmon(self.config)
        self.test_count = 0
        self.scenario_count = 0

    def _log_details(self):
        self.logger.info("Current number of scenarios executed: {}".format(self.scenario_count))
        self.logger.info("Current number of test(s) executed: {}".format(self.test_count))
        self.logger.info("Current number of test failures: {}".format(self.error_count))

    def run_benchmark(self, benchmark_config, result_dir, test_name, cloud_type="OpenStack"):
        self.logger.debug("--------------------------------")
        self.logger.debug("Benchmark_config: {}".format(benchmark_config))
        self.logger.debug("result_dir: {}".format(result_dir))
        self.logger.debug("test_name: {}".format(test_name))
        self.logger.debug("--------------------------------")

        # Build command to run
        if 'enabled' in benchmark_config:
            del benchmark_config['enabled']
        cmd = ("source /home/stack/overcloudrc; source {0}; "
               "/home/stack/perfkit-venv/PerfKitBenchmarker/pkb.py "
               "--cloud={1} --run_uri=browbeat".format(self.config['perfkit']['venv'], cloud_type))
        # Add default parameters as necessary
        for default_item, value in self.config['perfkit']['default'].iteritems():
            if default_item not in benchmark_config:
                benchmark_config[default_item] = value
        for parameter, value in benchmark_config.iteritems():
            if not parameter == 'name':
                self.logger.debug("Parameter: {}, Value: {}".format(parameter, value))
                cmd += " --{}={}".format(parameter, value)

        # Remove any old results
        if os.path.exists("/tmp/perfkitbenchmarker/run_browbeat"):
            shutil.rmtree("/tmp/perfkitbenchmarker/run_browbeat")

        if self.config['browbeat']['connmon']:
            self.connmon.start_connmon()

        # Run PerfKit
        from_ts = int(time.time() * 1000)
        if 'sleep_before' in self.config['perfkit']:
            time.sleep(self.config['perfkit']['sleep_before'])
        self.logger.info("Running Perfkit Command: {}".format(cmd))
        stdout_file = open("{}/pkb.stdout.log".format(result_dir), 'w')
        stderr_file = open("{}/pkb.stderr.log".format(result_dir), 'w')
        process = subprocess.Popen(cmd, shell=True, stdout=stdout_file, stderr=stderr_file)
        process.communicate()
        if 'sleep_after' in self.config['perfkit']:
            time.sleep(self.config['perfkit']['sleep_after'])
        to_ts = int(time.time() * 1000)

        # Stop connmon at end of perfkit task
        if self.config['browbeat']['connmon']:
            self.connmon.stop_connmon()
            try:
                self.connmon.move_connmon_results(result_dir, test_name)
                self.connmon.connmon_graphs(result_dir, test_name)
            except:
                self.logger.error("Connmon Result data missing, Connmon never started")

        # Determine success
        try:
            with open("{}/pkb.stderr.log".format(result_dir), 'r') as stderr:
                if any('SUCCEEDED' in line for line in stderr):
                    self.logger.info("Benchmark completed.")
                else:
                    self.logger.error("Benchmark failed.")
                    self.error_count += 1
        except IOError:
            self.logger.error("File missing: {}/pkb.stderr.log".format(result_dir))

        # Copy all results
        for perfkit_file in glob.glob("/tmp/perfkitbenchmarker/run_browbeat/*"):
            shutil.move(perfkit_file, result_dir)
        if os.path.exists("/tmp/perfkitbenchmarker/run_browbeat"):
            shutil.rmtree("/tmp/perfkitbenchmarker/run_browbeat")

        # Grafana integration
        if 'grafana' in self.config and self.config['grafana']['enabled']:
            grafana_ip = self.config['grafana']['grafana_ip']
            grafana_port = self.config['grafana']['grafana_port']
            url = 'http://{}:{}/dashboard/db/'.format(grafana_ip, grafana_port)
            cloud_name = self.config['grafana']['cloud_name']
            for dashboard in self.config['grafana']['dashboards']:
                full_url = '{}{}?from={}&to={}&var-Cloud={}'.format(url, dashboard, from_ts, to_ts,
                                                                    cloud_name)
                self.logger.info('{} - Grafana URL: {}'.format(test_name, full_url))
            if self.config['grafana']['snapshot']['enabled']:
                hosts_file = self.config['ansible']['hosts']
                playbook = self.config['ansible']['grafana_snapshot']
                extra_vars = 'grafana_ip={} '.format(grafana_ip)
                extra_vars += 'grafana_port={} '.format(grafana_port)
                extra_vars += 'grafana_api_key={} '.format(self.config['grafana']['snapshot']['grafana_api_key'])
                extra_vars += 'from={} '.format(from_ts)
                extra_vars += 'to={} '.format(to_ts)
                extra_vars += 'results_dir={}/{} '.format(result_dir, test_name)
                extra_vars += 'var_cloud={} '.format(cloud_name)
                if self.config['grafana']['snapshot']['snapshot_compute']:
                    extra_vars += 'snapshot_compute=true '
                snapshot_cmd = 'ansible-playbook -i {} {} -e "{}"'.format(hosts_file, playbook,
                                                                          extra_vars)
                subprocess_cmd = ['ansible-playbook', '-i', hosts_file, playbook, '-e',
                                  '{}'.format(extra_vars)]
                self.logger.info('Snapshot command: {}'.format(snapshot_cmd))
                snapshot_log = open('{}/snapshot.log'.format(result_dir), 'a+')
                subprocess.Popen(subprocess_cmd, stdout=snapshot_log, stderr=subprocess.STDOUT)

    def start_workloads(self):
        self.logger.info("Starting PerfKitBenchmarker Workloads.")
        time_stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
        self.logger.debug("Time Stamp (Prefix): {}".format(time_stamp))
        benchmarks = self.config.get('perfkit')['benchmarks']
        if len(benchmarks) > 0:
            for benchmark in benchmarks:
                if benchmark['enabled']:
                    self.logger.info("Benchmark: {}".format(benchmark['name']))
                    self.scenario_count += 1
                    for run in range(self.config['browbeat']['rerun']):
                        self.test_count += 1
                        result_dir = self.tools.create_results_dir(
                            self.config['browbeat']['results'], time_stamp, benchmark['name'], run)
                        test_name = "{}-{}-{}".format(time_stamp, benchmark['name'], run)
                        self.run_benchmark(benchmark, result_dir, test_name)
                    self._log_details()
                else:
                    self.logger.info("Skipping {} benchmark, enabled: false".format(benchmark['name']))
        else:
            self.logger.error("Config file contains no perfkit benchmarks.")