update doc

Change-Id: I26fa4b8a5c6fc44807c9c3807cbeabcce1c83b41
This commit is contained in:
ahothan 2015-09-02 14:34:29 -07:00
parent 988a99e7a1
commit 525010a2d5
5 changed files with 252 additions and 115 deletions

@ -1,9 +1,6 @@
===============================
KloudBuster
===============================
.. note:: (Documentation is under construction)
===============================
How good is your OpenStack data plane under real heavy load?
@ -14,12 +11,13 @@ Features
* Neutron configuration agnostic (any encapsulation, any overlay, any plugin)
* Can load the data plane with one OpenStack cloud (single-cloud operations for L3 East-West scale) or 2 OpenStack clouds (dual-cloud operations with one cloud hosting the HTTP servers and the other loading HTTP traffic for L3 North-South scale testing)
* User can specify any number of tenants, users, routers, networks (only limited by cloud capacity) and KloudBuster will stage all these resources in a way that makes sense for operational data plane traffic
* User can specify any number of tenants, routers, networks (only limited by cloud capacity) and KloudBuster will stage all these resources in a way that makes sense for operational data plane traffic
* HTTP traffic load:
* real HTTP servers (Nginx) running in real Linux images (Ubuntu 14.04)
* can specify any number of HTTP servers per tenant
* high performance and highly scalable HTTP traffic generators to simulate a huge number of HTTP users and TCP connections (hundreds of thousands to millions)
* overall throughput and latency measurement for every single HTTP request (typically millions per run) using the highly accurate HdrHistogram library
* overall throughput and latency measurement for every single HTTP request (typically millions per run) using the open source HdrHistogram library
* Traffic shaping to specify on which links traffic should flow
* Highly efficient and scalable metric aggregation
* Can support periodic reporting and aggregation of results
* Automatic cleanup upon termination (by default)
@ -31,6 +29,11 @@ Features
* KloudBuster VM images built using OpenStack DIB (Disk Image Builder)
* Verified to work on any OpenStack release starting from Icehouse
Limitations
-----------
* requires Neutron networking (does not support Nova networking)
* only supports HTTP traffic in this version
Contributions and Feedback
---------------------------
@ -50,18 +53,15 @@ openstack-dev@lists.openstack.org with a '[kloudbuster]' tag in the subject.
Links
-----
* Source: http://git.openstack.org/cgit/openstack/kloudbuster
* Bugs: http://bugs.launchpad.net/kloudbuster
* Source: `<http://git.openstack.org/cgit/openstack/kloudbuster>`_
* Bugs: `<http://bugs.launchpad.net/kloudbuster>`_
* Documentation: `<http://kloudbuster.readthedocs.org>`_
Licensing
---------
KloudBuster is licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
.. code::
http://www.apache.org/licenses/LICENSE-2.0
You may obtain a copy of the License at `<http://www.apache.org/licenses/LICENSE-2.0>`_
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,

@ -2,14 +2,153 @@
Installation
============
KloudBuster is available in the Python Package Index (PyPI) and can be installed using pip.
Binary Installation
-------------------
This is the recommended way to install KloudBuster for non-development use.
KloudBuster is available in the Python Package Index (PyPI):
`KloudBuster PyPI <https://pypi.python.org/pypi/KloudBuster>`_
You will need to have python 2.7 and pip installed before installing KloudBuster.
At the command line::
$ pip install kloudbuster
Or, if you have virtualenvwrapper installed::
Or, if you have `virtualenv <https://pypi.python.org/pypi/virtualenv>`_ installed::
$ virtualenv vkb
$ source vkb/bin/activate
$ pip install kloudbuster
Or, if you have `virtualenvwrapper <https://virtualenvwrapper.readthedocs.org>`_ installed::
$ mkvirtualenv kloudbuster
$ pip install kloudbuster
To verify kloudbuster is installed, just type:
.. code::
kloudbuster --help
Source Installation
-------------------
For code development, clone the kloudbuster git repository::
git clone https://github.com/openstack/kloudbuster.git
Then install dependencies (after optionally creating and activating a virtual env)::
cd kloudbuster
pip install -r requirements.txt
pip install -r test-requirements.txt
To verify kloudbuster is installed, just type:
.. code::
python kloudbuster/kloudbuster.py --help
VM Image Upload
---------------
KloudBuster needs one "universal" test VM image (referred to as "KloudBuster image") that contains the necessary test software. The KloudBuster image is then instantiated in a potentially large number of VMs by the KloudBuster application using the appropriate role (HTTP server, HTTP traffic generator...).
Pre-built images are available for download from the `OpenStack App Catalog <http://apps.openstack.org>`_ (preferred method) or can be built from MacOSX using Vagrant or from any Linux server.
If your OpenStack Glance can access the Internet, you can skip the following section and **you are done with the installation**.
Manual Upload of the KloudBuster VM image
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If Glance does not have access to http://storage.apps.openstack.org on the Internet, the KloudBuster VM image must be downloaded from the OpenStack App Catalog to an intermediate location, then uploaded to Glance using either a Glance CLI command or the Horizon dashboard.
The KloudBuster VM image can be downloaded from `<http://apps.openstack.org/#tab=glance-images>`_: look for an image name with the "kloudbuster_v" prefix and download the one with the latest version.
KloudBuster VM images are qcow2 images named "kloudbuster_v<version>.qcow2" (e.g. "kloudbuster_v3.qcow2").
The name of the image in Glance must exactly match the image name in the App Catalog (without the .qcow2 extension). For example, to upload the image from a local copy of that image:
.. code::
glance image-create --file kloudbuster_v3.qcow2 --disk-format qcow2 --container-format bare --name kloudbuster_v3
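To verify that the upload worked, you can check that the image appears in the Glance image list (a minimal sanity check; the exact output columns depend on your glance client version)::

    glance image-list | grep kloudbuster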
Rebuild the Image
^^^^^^^^^^^^^^^^^
Rebuild the image only if using the pre-built version does not work (for whatever reason).
MacOSX with Vagrant
~~~~~~~~~~~~~~~~~~~
You first need to install:
* `Virtualbox <https://www.virtualbox.org/wiki/Downloads>`_
* `Vagrant <https://www.vagrantup.com/downloads.html>`_
.. code::
# clone the kloudbuster repository if you have not done so
git clone https://github.com/openstack/kloudbuster.git
# go to the dib directory
cd kloudbuster/kloudbuster/dib
# run vagrant and start building the image
vagrant up
After a few minutes (depending on virtualbox overhead), the qcow2 image will be built and available in the same directory. You can then upload it to OpenStack using the glance CLI, destroy the vagrant VM ("vagrant destroy") and dispose of the kloudbuster directory (if no longer needed).
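For example, assuming the generated image file is kloudbuster_v3.qcow2 (the actual version number may differ), the follow-up steps could look like::

    # upload the image to Glance
    glance image-create --file kloudbuster_v3.qcow2 --disk-format qcow2 --container-format bare --name kloudbuster_v3
    # destroy the build VM
    vagrant destroy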
Build on Linux
~~~~~~~~~~~~~~
A generally faster build method than with MacOSX/Vagrant.
Your Linux server must have python, git and qemu utilities installed.
Ubuntu/Debian::
$ sudo apt-get install python-dev git qemu-utils
Redhat/Fedora/CentOS::
sudo yum install python-devel git qemu-img
Furthermore, the python PyYAML package must be installed (use "pip install PyYAML" in your virtual environment if you have one).
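A quick way to check that PyYAML is importable from the active Python environment (purely an optional sanity check)::

    python -c "import yaml; print(yaml.__version__)"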
Then build the image:
.. code::
# clone the kloudbuster repository
git clone https://github.com/openstack/kloudbuster.git
# go to the dib directory
cd kloudbuster/kloudbuster/dib
# run the build image script, will install DIB and start the build
sh build-image.sh
After a few minutes, the qcow2 image will be built and available in the same directory. You can then upload it to OpenStack using the glance CLI.
If you get an error message saying that "import yaml" fails (this seems to happen only on Ubuntu):
.. code::
dib-run-parts Thu Jul 2 09:27:50 PDT 2015 Running /tmp/image.ewtpa5DW/hooks/extra-data.d/99-squash-package-install
"/tmp/image.ewtpa5DW/hooks/extra-data.d/../bin/package-installs-squash",
line 26, in <module>
import yaml
ImportError: No module named yaml
You need to comment out the secure_path option in your /etc/sudoers file (use "sudo visudo" to edit that file):
.. code::
#Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"

@ -2,99 +2,54 @@
Usage
========
To run KloudBuster you need to:
Quick Start Guide
-----------------
* install KloudBuster (see instructions in the Installation section)
* upload a KloudBuster test VM image to the OpenStack image store (Glance)
This guide will allow you to run KloudBuster on your OpenStack cloud using the default scale settings, which are generally safe to run on any cloud, small or large (it should also work on an all-in-one devstack cloud installation).
Minimal pre-requisites
^^^^^^^^^^^^^^^^^^^^^^
There are 3 ways to launch KloudBuster:
* install KloudBuster (see instructions in the Installation section)
* admin access to the cloud under test
* download an admin rc file from the cloud under test using Horizon dashboard
* 3 available floating IPs
* run a scale session exclusively from the command line interface
* run as a background server controlled by the KloudBuster Web User Interface
* run as a background server controlled by an external tool or application using a RESTful interface
Download an admin rc file:
Log in to the Horizon dashboard of the cloud under test as admin, then go to Project, Access and Security, API Access.
Click on the "Download OpenStack RC File" button and note down where that file is downloaded by your browser.
Build the KloudBuster test VM image
-----------------------------------
The default scale settings can be displayed from the command line using the --show-config option.
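For example, to display the default scale settings (the output is plain YAML)::

    kloudbuster --show-config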
By default KloudBuster will run on a single cloud and create:
KloudBuster only needs one "universal" test VM image (referred to as "KloudBuster image") that contains the necessary test software. The KloudBuster image is then launched in the cloud by the KloudBuster application using the appropriate role (HTTP server, HTTP traffic generator...).
* 1 tenant, 1 user, 2 routers, 1 network, 1 VM running as an HTTP server
* 1 VM running the Redis server (for orchestration)
* 1 VM running the HTTP traffic generator with 1000 connections and a total of 500 requests per second for 30 seconds
Build on MacOSX with Vagrant
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Once done, all resources will be cleaned up and results will be displayed.
In this minimal test, VMs are placed by Nova using its own scheduling logic. In more advanced usages, traffic can be shaped precisely to fill the appropriate network links by using specific configuration settings.
You first need to install:
Start KloudBuster
^^^^^^^^^^^^^^^^^
* `Virtualbox <https://www.virtualbox.org/wiki/Downloads>`_
* `Vagrant <https://www.vagrantup.com/downloads.html>`_
To list all command line options, pass --help.
Run kloudbuster with the following options:
.. code::
kloudbuster --tested-rc <path of the admin rc file> --tested-passwd <admin password>
# clone the kloudbuster repository if you have not done so
git clone https://github.com/openstack/kloudbuster.git
# go to the dib directory
cd kloudbuster/kloudbuster/dib
# run vagrant and start building the image
vagrant up
The run should take around a minute (depending on how fast the cloud can create resources) and you should see the actions taken by KloudBuster displayed on the console, followed by the scale results.
After a few minutes, the qcow2 image will be built and available in the same directory. You can then copy it to a safe location (or perhaps upload it to OpenStack using the glance CLI), destroy the vagrant VM ("vagrant destroy") and dispose of the kloudbuster directory (if no longer needed).
Build on Linux
^^^^^^^^^^^^^^
Your Linux server must have python, git and qemu utilities installed.
Ubuntu/Debian::
$ sudo apt-get install python-dev git qemu-utils
Redhat/Fedora/CentOS::
sudo yum install python-devel git qemu-img
Furthermore, the python PyYAML package must be installed (use "pip install PyYAML" in your virtual environment if you have one).
Then build the image:
.. code::
# clone the kloudbuster repository
git clone https://github.com/openstack/kloudbuster.git
# go to the dib directory
cd kloudbuster/kloudbuster/dib
# run the build image script, will install DIB and start the build
sh build-image.sh
After a few minutes, the qcow2 image will be built and available in the same directory. You can then copy it to a safe location (or perhaps upload it to OpenStack using the glance CLI).
If you get an error message saying that "import yaml" fails (this seems to happen only on Ubuntu):
.. code::
dib-run-parts Thu Jul 2 09:27:50 PDT 2015 Running /tmp/image.ewtpa5DW/hooks/extra-data.d/99-squash-package-install
"/tmp/image.ewtpa5DW/hooks/extra-data.d/../bin/package-installs-squash",
line 26, in <module>
import yaml
ImportError: No module named yaml
You need to comment out the secure_path option in your /etc/sudoers file (use "sudo visudo" to edit that file):
.. code::
#Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
Command Line Interface Options
------------------------------
Once this minimal scale test passes, you can tackle more elaborate scale testing by increasing the scale numbers or providing various traffic shaping options.
Configuration File
------------------
KloudBuster Web User Interface
------------------------------
The default configuration can be displayed on the command line console using the --show-config option.
It is easy to create a custom configuration by redirecting the output to a file, modifying that
file and passing it to the KloudBuster command line using the --config option.
Note that the default configuration is always loaded by KloudBuster and any default option can be overridden by providing a custom configuration file that only contains modified options.
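A minimal sketch of that workflow (the file name my_config.yaml is just a placeholder)::

    # save the default configuration to a file
    kloudbuster --show-config > my_config.yaml
    # edit my_config.yaml, keep only the options to override, then pass it to KloudBuster
    kloudbuster --config my_config.yaml --tested-rc <path of the admin rc file> --tested-passwd <admin password>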
RESTful Interface
-----------------

@ -1,29 +1,43 @@
# KloudBuster Default configuration file
# This file can be copied, used as a template to specify different settings, then
# passed to KloudBuster using the '--config-file <path>' option.
# In the copy, properties that are unchanged (use same as default) can be simply removed
# from the file.
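# For example, a trimmed copy that only overrides two of the options below could
# contain just the following two lines (an illustrative sketch):
#
#   cleanup_resources: False
#   vm_creation_concurrency: 10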
# Config options common to client and server side
# Name of the image to use for all test VMs (client, server and proxy)
# The image name must exist in OpenStack and must be built with the appropriate
# packages.
# The default test VM image is named "kloudbuster_<version>" where
# The default test VM image is named "kloudbuster_v<version>" where
# <version> is the KloudBuster test VM image version (e.g. "kloudbuster_v3")
# Leave empty to use the default test VM image (recommended).
# If non-empty, use quotes if there are space characters in the name (e.g. 'my image')
image_name:
# Config options common to client and server side
# Keystone admin role name (default should work in most deployments)
keystone_admin_role: "admin"
# Cleanup all kloudbuster resources upon exit
# If set to False, resources created will not be deleted on exit and the user
# will have to execute the cleanup script later to remove all these resources
cleanup_resources: True
# VM creation concurrency
# Specifies how many VMs will be created at a time. Larger numbers can be used
# but will not necessarily shorten the overall staging time (this will largely
# depend on the scalability of the OpenStack control plane).
# Well-tuned control planes with multiple instances of Nova have been shown to support
# a concurrency level of up to around 50
vm_creation_concurrency: 5
#
# Public key to use to access all test VMs
# ssh access to the test VMs launched by kloudbuster is not required
# but can be handy if the user wants to ssh manually to any of them (for example
# to debug)
# public key to use to access all test VMs
# if empty will default to the user's public key (~/.ssh/id_rsa.pub) if it
#
# If empty will default to the user's public key (~/.ssh/id_rsa.pub) if it
# exists, otherwise will not provision any public key.
# If configured or available, a key pair will be added for each
# configured user.
@ -45,7 +59,7 @@ server:
number_tenants: 1
# Number of Users to be created inside the tenant
# For now support only 1 user per tenant
# By default, create only 1 user per tenant
users_per_tenant: 1
# Number of routers to be created within the context of each User
@ -53,6 +67,7 @@ server:
# Number of networks to be created within the context of each Router
# Assumes 1 subnet per network
# Note that you will need as many available floating IPs as routers
networks_per_router: 1
# Number of VM instances to be created within the context of each Network
@ -61,22 +76,25 @@ server:
# Number of security groups per network
secgroups_per_network: 1
# Assign floating IP for every VM
use_floatingip: True
# Assign floating IP for every server side test VM
# Default: no floating IP (only assign internal fixed IP)
use_floatingip: False
# Placement hint
# Traffic shaping - VM Placement hint
# Availability zone to use for servers in the server cloud
# Leave empty if you prefer to have the Nova scheduler place the server VMs
# If you want to pick a particular AZ, put that AZ name (e.g. nova)
# If you want a paticular compute host, put the AZ and compute host names s
# eparated by ':' (e.g. nova:tme100)
# If you want a particular compute host, put the AZ and compute host names
# separated by ':' (e.g. nova:tme100)
# Note that this is ignored/overridden if you specify a topology file (-t)
availability_zone:
# CLIENT SIDE CONFIG OPTIONS
client:
# Assign floating IP for every VM
use_floatingip: True
# Assign floating IP for every client side test VM
# Default: no floating IP (only assign internal fixed IP)
use_floatingip: False
# Flavor to use for the test images
flavor:
@ -87,38 +105,49 @@ client:
# Size of local disk in GB
disk: 20
# Placement hint
# Traffic shaping - VM Placement hint
# Availability zone to use for clients in the client cloud
# Leave empty if you prefer to have the Nova scheduler place the client VMs
# If you want to pick a particular AZ, put that AZ name (e.g. nova)
# If you want a paticular compute host, put the AZ and compute host names s
# eparated by ':' (e.g. nova:tme100)
# If you want a particular compute host, put the AZ and compute host names
# separated by ':' (e.g. nova:tme100)
# Note that this is ignored/overridden if you specify a topology file (-t)
availability_zone:
# Interval for polling status from all VMs
# Interval for polling status from all VMs in seconds
polling_interval: 5
# Tooling
tp_tool:
name: 'nuttcp'
dest_path: '/usr/bin/nuttcp'
http_tool:
name: 'wrk'
dest_path: '/usr/local/bin/wrk2'
# HTTP tool specific configs (per VM)
# Every HTTP server VM is paired to 1 HTTP traffic generator VM
# KloudBuster will take care of setting up the proper static routes
# to allow connectivity between all pairs of VMs.
# For example, if 1000 HTTP servers are configured, KloudBuster will
# instantiate 1000 HTTP traffic generators and match them 1:1, for a total
# of 2000 VM instances.
http_tool_configs:
# Threads to run tests
threads: 1
# Connections to be kept concurrently per VM
# This number also corresponds to the number of HTTP users
# connected at any time to the matching HTTP server
connections: 1000
# Rate limit in RPS per client (0 for unlimited)
rate_limit: 500
# Timeout for HTTP requests
timeout: 5
# Connection Type: "Keep-alive", "New"
# keep-alive: the TCP connection is reused across requests
# new: create a new TCP connection for every request (and close it after receiving the reply)
connection_type: 'Keep-alive'
# Interval for periodical report
# Interval for periodical report in seconds
# Use 0 if you only need 1 final aggregated report for the entire run duration
# Otherwise will provide results at every interval (results are reset at the start of each period and
# are not cumulative across periods)
report_interval: 5
# Duration of testing tools (seconds)
duration: 30

@ -1,11 +1,25 @@
# Compute host topology file for running KloudBuster
# Usage: Pass to KloudBuster using -t
# Compute host topology file template for running KloudBuster
# Usage: Pass to KloudBuster using -t <path>
#
# A typical use of this topology file is to shape traffic in order to maximize
# inter-rack L3 traffic.
# With 2 racks, simply place each rack node name in each group.
# With more than 2 racks, separate the racks in 2 groups.
#
# When used, the topology file will override any availability zone settings in the
# configuration file.
#
# The compute host name must be exactly the same as shown by Nova:
# i.e. "nova hypervisor-list"
# Grouping for placing all the server side VMs
# Do not change the group name; you can add as many hosts as needed
servers_rack:
hh23-5
# Grouping for placing all the client side VMs
# Do not change the group name; you can add as many hosts as needed
clients_rack:
hh23-6