Merge "Update Docs"

This commit is contained in:
Zuul 2022-03-11 09:02:07 +00:00 committed by Gerrit Code Review
commit 2a18b5a12e
5 changed files with 154 additions and 157 deletions

doc/source/charts.rst Normal file

@ -0,0 +1,74 @@
======
Charts
======
To include any of the custom charts from Browbeat in a scenario, add the following lines to the scenario's Python file.

.. code-block:: python

    import sys
    import os

    sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), '../reports')))
    from generate_scenario_duration_charts import ScenarioDurationChartsGenerator  # noqa: E402

The custom charts will appear in the "Scenario Data" section of the Rally HTML report.
Chart - add_per_iteration_complete_data
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This plugin generates a stacked area graph of the duration trend for each atomic action in an iteration.
To include this chart in a scenario, add the following lines at the end of the scenario's run() function.

.. code-block:: python

    self.duration_charts_generator = ScenarioDurationChartsGenerator()
    self.duration_charts_generator.add_per_iteration_complete_data(self)

The graphs will appear under the "Per iteration" section of "Scenario Data" in the Rally HTML report.
The resulting graphs will look like the images below.
.. image:: images/Per_Iteration_Duration_Stacked_Area_Chart/Iteration1.png
   :alt: Iteration 1 Chart

.. image:: images/Per_Iteration_Duration_Stacked_Area_Chart/Iteration2.png
   :alt: Iteration 2 Chart

.. image:: images/Per_Iteration_Duration_Stacked_Area_Chart/Iteration3.png
   :alt: Iteration 3 Chart

.. image:: images/Per_Iteration_Duration_Stacked_Area_Chart/Iteration4.png
   :alt: Iteration 4 Chart
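The shape of the data such a stacked area chart consumes can be pictured with a simplified, hypothetical sketch (this is not Browbeat's actual implementation; the `stacked_area_series` helper and the `iterations` structure are assumptions for illustration): each atomic action becomes one series, with one duration value per iteration.

```python
# Hypothetical sketch: turn per-iteration atomic-action timings into
# (label, values) series suitable for a stacked area plot.

def stacked_area_series(iterations):
    """iterations: list of dicts mapping atomic action name -> duration (s)."""
    # Collect every action name seen in any iteration, keeping a stable order.
    names = []
    for it in iterations:
        for name in it:
            if name not in names:
                names.append(name)
    # One series per action; an action missing from an iteration contributes 0.
    return [(name, [it.get(name, 0.0) for it in iterations]) for name in names]

iterations = [
    {"nova.boot_server": 4.2, "nova.delete_server": 1.1},
    {"nova.boot_server": 3.9, "nova.delete_server": 1.3},
]
series = stacked_area_series(iterations)
# series == [("nova.boot_server", [4.2, 3.9]), ("nova.delete_server", [1.1, 1.3])]
```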
Chart - add_duplicate_atomic_actions_iteration_additive_data
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This plugin generates line graphs for atomic actions that have been executed more than once in the same iteration.
To include this chart in a scenario, add the following lines at the end of the scenario's run() function.

.. code-block:: python

    self.duration_charts_generator = ScenarioDurationChartsGenerator()
    self.duration_charts_generator.add_duplicate_atomic_actions_iteration_additive_data(self)

The graphs will appear under the "Aggregated" section of "Scenario Data" in the Rally HTML report.
The resulting graphs will look like the images below.
.. image:: images/Duplicate_Atomic_Actions_Duration_Line_Chart.png
   :alt: Duplicate Atomic Actions Duration Line Chart
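The condition this chart tracks — an atomic action executing more than once within the same iteration — can be sketched in a few lines. This is an illustrative, hypothetical helper, not Browbeat's implementation:

```python
from collections import Counter

# Hypothetical sketch: find atomic actions that ran more than once in one
# iteration, the case the duplicate-actions line chart visualises.
def duplicate_actions(action_log):
    """action_log: list of atomic action names, in execution order."""
    counts = Counter(action_log)
    return {name: n for name, n in counts.items() if n > 1}

log = ["neutron.create_port", "neutron.create_port", "nova.boot_server"]
dups = duplicate_actions(log)
# dups == {"neutron.create_port": 2}
```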
Chart - add_all_resources_additive_data
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This plugin generates a line graph for duration data from each resource created by Rally.
To include this chart in a scenario, add the following lines at the end of the scenario's run() function.

.. code-block:: python

    self.duration_charts_generator = ScenarioDurationChartsGenerator()
    self.duration_charts_generator.add_all_resources_additive_data(self)

The graphs will appear under the "Aggregated" section of "Scenario Data" in the Rally HTML report.
The resulting graphs will look like the images below.
.. image:: images/Resource_Atomic_Actions_Duration_Line_Chart.png
   :alt: Resource Atomic Actions Duration Line Chart
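Per-resource duration data of the kind this chart plots can be sketched as a simple aggregation; again a hedged illustration (`per_resource_durations` and the `events` shape are assumptions, not Browbeat's API):

```python
# Hypothetical sketch: aggregate atomic-action durations per Rally-created
# resource, the shape of data a per-resource duration line chart consumes.
def per_resource_durations(events):
    """events: list of (resource_id, duration) tuples."""
    totals = {}
    for resource_id, duration in events:
        totals[resource_id] = totals.get(resource_id, 0.0) + duration
    return totals

events = [("server-1", 2.0), ("server-2", 3.5), ("server-1", 1.5)]
totals = per_resource_durations(events)
# totals == {"server-1": 3.5, "server-2": 3.5}
```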


@ -15,7 +15,7 @@ Contents:
installation
usage
plugins
ci
charts
developing
contributing


@ -42,31 +42,10 @@ On the Undercloud
[stack@undercloud ~]$ git clone https://github.com/openstack/browbeat.git
[stack@undercloud ~]$ source stackrc
[stack@undercloud ~]$ cd browbeat/ansible
[stack@undercloud ansible]$ ./generate_tripleo_inventory.sh -l
[stack@undercloud ansible]$ sudo easy_install pip
[stack@undercloud ansible]$ sudo pip install ansible
[stack@undercloud ansible]$ vi install/group_vars/all.yml # Make sure to edit the dns_server to the correct ip address
[stack@undercloud ansible]$ ansible-playbook -i hosts.yml install/browbeat.yml
[stack@undercloud ansible]$ ansible-playbook -i hosts.yml install/shaker_build.yml
.. note:: Your default network might not work for you depending on your
          underlay/overlay network setup. In such cases, users need to create
          appropriate networks for instances to allow them to reach the
          internet. Some useful documentation can be found at:
          https://access.redhat.com/documentation/en/red-hat-openstack-platform/11/single/networking-guide/
(Optional) Clone e2e-benchmarking repository and deploy benchmark-operator
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
e2e-benchmarking is a repository that is used to run workloads to stress an OpenShift
cluster, and it is needed to run shift-on-stack workloads in Browbeat.

To have the e2e-benchmarking repository cloned and benchmark-operator deployed,
set `install_e2e_benchmarking: true` in `ansible/install/group_vars/all.yml`.
After that, add the kubeconfig paths of all your OpenShift clusters to the
`ansible/kubeconfig_paths` file. Move the default kubeconfig file
(/home/stack/.kube/config) to another location so that it isn't used for all
OpenShift clusters. Then run the command below.
[stack@undercloud ansible]$ ansible-playbook -i hosts.yml install/browbeat.yml
(Optional) Install Browbeat instance workloads
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -100,7 +79,7 @@ has been installed. To skip directly to this task execute:
::
$ ansible-playbook -i hosts install/browbeat.yml --start-at-task "Check browbeat_network"
$ ansible-playbook -i hosts.yml install/browbeat.yml --start-at-task "Check browbeat_network"
@ -111,28 +90,8 @@ has been installed. To skip directly to this task execute:
::
[stack@ospd ansible]$ ansible-playbook -i hosts install/collectd.yml
[stack@undercloud-0 ansible]$ ansible-playbook -i hosts.yml install/collectd.yml
(Optional) Install Rsyslogd logging with aggregation (not maintained)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
First configure the rsyslog and Elasticsearch parameters in
`ansible/install/group_vars/all.yml`. If you have a large number of hosts,
deploying an aggregator using `ansible/install/rsyslog-aggregator.yml`
is strongly suggested. If you have a small scale, change the value of
`rsyslog_forwarding` in `all.yml` to `false`. Once things are configured
to your liking, deploy logging on the cloud using the `rsyslog-logging.yml`
playbook.
Firewall configuration for the aggregator is left up to the user. The logging
install playbook will check that the aggregator is up and the port is open if
you deploy with aggregation.
::
[stack@ospd ansible]$ vim install/group_vars/all.yml
[stack@ospd ansible]$ ansible-playbook -i hosts install/rsyslog-aggregator.yml
[stack@ospd ansible]$ ansible-playbook -i hosts install/rsyslog-logging.yml
(Optional) Install Browbeat Grafana dashboards
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -144,16 +103,16 @@ http://docs.grafana.org/http_api/auth/#create-api-token
::
[stack@ospd ansible]$ # update the vars and make sure to update grafana_apikey with value
[stack@ospd ansible]$ vi install/group_vars/all.yml
[stack@ospd ansible]$ ansible-playbook -i hosts install/browbeat.yml # if not run before.
[stack@ospd ansible]$ ansible-playbook -i hosts install/grafana-dashboards.yml
[stack@undercloud ansible]$ # update the vars and make sure to update grafana_apikey with value
[stack@undercloud ansible]$ vi install/group_vars/all.yml
[stack@undercloud ansible]$ ansible-playbook -i hosts.yml install/browbeat.yml # if not run before.
[stack@undercloud ansible]$ ansible-playbook -i hosts.yml install/grafana-dashboards.yml
(Optional) Install Browbeat Prometheus/Grafana/Collectd
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
::
[stack@ospd ansible]$ ansible-playbook -i hosts install/grafana-prometheus-dashboards.yml
[stack@undercloud ansible]$ ansible-playbook -i hosts.yml install/grafana-prometheus-dashboards.yml
Make sure grafana-api-key is added in the `install/group_vars/all.yml`
@ -166,24 +125,31 @@ client side and Elasticsearch on the server side. Set the `cloud_prefix` and `es
::
[stack@ospd ansible]$ # update the vars
[stack@ospd ansible]$ vi install/group_vars/all.yml
[stack@ospd ansible]$ # install filebeat
[stack@ospd ansible]$ ansible-playbook -i hosts common_logging/install_logging.yml
[stack@ospd ansible]$ # install and start filebeat
[stack@ospd ansible]$ ansible-playbook -i hosts common_logging/install_logging.yml -e "start_filebeat=True"
[stack@undercloud ansible]$ # update the vars
[stack@undercloud ansible]$ vi install/group_vars/all.yml
[stack@undercloud ansible]$ # install filebeat
[stack@undercloud ansible]$ ansible-playbook -i hosts.yml common_logging/install_logging.yml
[stack@undercloud ansible]$ # install and start filebeat
[stack@undercloud ansible]$ ansible-playbook -i hosts.yml common_logging/install_logging.yml -e "start_filebeat=True"
Not maintained (Pre-Pike): Run Overcloud checks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
(Optional) Install Kibana Visualizations
----------------------------------------
1. Update install/group_vars/all.yml (es_ip) to identify your ELK host.
2. Install Kibana Visualizations via Ansible playbook
::
[stack@ospd ansible]$ ansible-playbook -i hosts check/site.yml
[stack@undercloud ansible]$ ansible-playbook -i hosts.yml install/kibana-visuals.yml
...
Your Overcloud check output is located in results/bug_report.log
Now navigate to http://elk-host-address to verify Kibana is
installed and custom visualizations are uploaded.
Install Browbeat from your local machine
----------------------------------------
Install Browbeat from your local machine (Not Maintained)
----------------------------------------------------------
This installs Browbeat onto your Undercloud but the playbook is run from your
local machine rather than directly on the Undercloud machine.
@ -651,17 +617,3 @@ entirely on the number of metrics and your environment's capacity. There is a
Graphite dashboard included, and it is recommended to install collectd on your
monitoring host so that you can see if you hit resource issues with your
monitoring host.


@ -5,6 +5,50 @@ Plugins
Rally
~~~~~
Scenario - dynamic-workloads
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Dynamic workloads aim to simulate a realistic OpenStack customer environment by introducing elements of randomness into the simulation. The workloads included in this Browbeat Rally plugin are listed below.
VM:

- create_delete_servers: Create 'N' VMs (without floating IPs), and delete 'M'
  randomly chosen VMs from this list of VMs.

- migrate_servers: Create 'N' VMs (with floating IPs), and migrate 'M' randomly
  chosen VMs from this list of VMs across computes, before resizing them.

- swap_floating_ips_between_servers: Swap floating IPs between 2 servers. Ping
  until failure after disassociating the floating IPs, before swapping them. Ping
  until success after swapping the floating IPs between the 2 servers.
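The recurring "act on 'M' randomly chosen resources out of 'N'" step in these workloads can be sketched as follows. This is a hedged illustration: `choose_random_subset` is a hypothetical helper, not part of Browbeat's plugin API.

```python
import random

# Hypothetical sketch of the "choose M random resources out of N" pattern
# used by workloads such as create_delete_servers and migrate_servers.
def choose_random_subset(resources, m, seed=None):
    """Return m distinct, randomly chosen items from resources."""
    rng = random.Random(seed)  # seed only to make the sketch reproducible
    if m > len(resources):
        raise ValueError("cannot choose more resources than exist")
    return rng.sample(resources, m)

vms = ["vm-%d" % i for i in range(5)]
victims = choose_random_subset(vms, 2, seed=42)
# victims is a list of 2 distinct VM names drawn from vms
```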
Octavia:

- create_loadbalancers: Create 'N' loadbalancers with the specified 'M' pools and
  'K' clients per loadbalancer.

- delete_loadbalancers: Delete 'M' loadbalancers chosen randomly from the 'N' loadbalancers.

- delete_members_random_lb: Delete 'M' members from a randomly chosen loadbalancer.
Trunk (pod simulation):

- pod_fip_simulation: Simulate pods with floating IPs using subports on trunks and
  VMs. Create 'N' trunks and VMs, and 'M' subports per trunk/VM. Ping a random
  subport of each trunk/VM from a jumphost.

- add_subports_to_random_trunks: Add 'M' subports to 'N' randomly chosen trunks.
  This simulates pods being added to an existing VM.

- delete_subports_from_random_trunks: Delete 'M' subports from 'N' randomly chosen
  trunks. This simulates pods being destroyed.

- swap_floating_ips_between_random_subports: Swap floating IPs between 2 randomly
  chosen subports from 2 trunks.
Provider network:

- provider_netcreate_nova_boot_ping: Create a provider network, boot a VM on it,
  and ping the VM.

- provider_net_nova_boot_ping: Boot a VM on a randomly chosen existing provider
  network and ping it.

- provider_net_nova_delete: Delete all VMs and provider networks.
Shift on Stack:

- shift_on_stack: Run the specified kube-burner workload through e2e-benchmarking.
  e2e-benchmarking is a `repository <https://github.com/cloud-bulldozer/e2e-benchmarking.git>`_
  that contains scripts to stress OpenShift clusters. This workload uses
  e2e-benchmarking to test OpenShift on OpenStack.
Context - browbeat_delay
^^^^^^^^^^^^^^^^^^^^^^^^
@ -44,77 +88,3 @@ Scenario - nova_boot_persist_with_network_volume_fip
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This scenario creates instances with a NIC and a volume, and associates a floating IP with each instance; these resources persist upon completion of a Rally run. It is used as a workload with Telemetry by spawning many instances that produce many metrics for the Telemetry subsystem to collect.


@ -4,15 +4,16 @@ Usage
Run Browbeat performance tests from Undercloud
----------------------------------------------
To run the workloads from the Undercloud:
::
$ ssh undercloud-root
[root@ospd ~]# su - stack
[stack@ospd ~]$ cd browbeat/
[stack@ospd browbeat]$ . .browbeat-venv/bin/activate
(browbeat-venv)[stack@ospd browbeat]$ vi browbeat-config.yaml # Edit browbeat-config.yaml to control how many stress tests are run.
(browbeat-venv)[stack@ospd browbeat]$ ./browbeat.py <workload> #rally, shaker or "all"
[root@undercloud ~]# su - stack
[stack@undercloud ~]$ cd browbeat/
[stack@undercloud browbeat]$ . .browbeat-venv/bin/activate
(.browbeat-venv)[stack@undercloud browbeat]$ vi browbeat-config.yaml # Edit browbeat-config.yaml to control how many stress tests are run.
(.browbeat-venv)[stack@undercloud browbeat]$ ./browbeat.py <workload> #rally, shaker or "all"
Running Shaker
---------------
@ -77,7 +78,7 @@ you:
::
$ ansible-playbook -i hosts install/kibana-visuals.yml
$ ansible-playbook -i hosts.yml install/kibana-visuals.yml
Alternatively you can create your own visualizations of specific shaker runs
using some simple searches such as:
@ -94,7 +95,7 @@ If filebeat is enabled in the browbeat configuration file and filebeat was previ
::
$ ansible-playbook -i hosts common_logging/install_logging.yml
$ ansible-playbook -i hosts.yml common_logging/install_logging.yml
as explained in the installation documentation, then