Adding support to upload images from URL

Change-Id: I1e5feea10431b634135e4818d7fcd1a1b2445713
Yichen Wang 2015-02-11 09:59:24 -08:00
parent 10df5b919a
commit 8b73f62747
4 changed files with 280 additions and 180 deletions
@ -4,20 +4,24 @@ VMTP
What is VMTP
============

VMTP is a data path performance tool for OpenStack clouds.

Features
--------

VMTP is a python application that will automatically perform ping connectivity, ping round trip time measurement (latency) and TCP/UDP throughput measurement for the following flows on any OpenStack deployment:

* VM to VM same network (private fixed IP)
* VM to VM different network same tenant (intra-tenant L3 fixed IP)
* VM to VM different network and tenant (floating IP inter-tenant L3)

Optionally, when an external Linux host is available:

* External host/VM download and upload throughput/latency (L3/floating IP)

Optionally, when SSH login to any Linux host (native or virtual) is available:

* Host to host throughput (intra-node and inter-node)

Optionally, VMTP can automatically extract CPU usage from all native hosts in the cloud during the throughput tests, provided the Ganglia monitoring service (gmond) is installed and enabled on those hosts.
@ -26,53 +30,64 @@ For VM-related flows, VMTP will automatically create the necessary OpenStack res
In the case involving pre-existing native or virtual hosts, VMTP will SSH to the targeted hosts to perform measurements.
Pre-requisite to run VMTP
-------------------------

For VM related performance measurements
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

* Access to the cloud Horizon Dashboard
* 1 working external network pre-configured on the cloud (VMTP will pick the first one found)
* At least 2 floating IP if an external router is configured or 3 floating IP if there is no external router configured
* 1 Linux image available in OpenStack (any distribution)
* A configuration file that is properly set for the cloud to test (see "Configuration File" section below)

For native/external host throughputs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

* A public key must be installed on the target hosts (see ssh password-less access below)

For pre-existing native host throughputs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

* Firewalls must be configured to allow TCP/UDP ports 5001 and TCP port 5002

For running VMTP Docker Image
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Docker is installed. See `here <https://docs.docker.com/installation/#installation/>`_ for instructions.
Sample Results Output
---------------------

VMTP will display the results to stdout with the following data:
.. code::

    - Session general information (date, auth_url, OpenStack encaps, VMTP version...)
    - List of results per flow, for each flow:
      | flow name
      | to and from IP addresses
      | to and from availability zones (if VM)
      | - results:
      |   | -TCP
      |   | | packet size
      |   | | throughput value
      |   | | number of retransmissions
      |   | | round trip time in ms
      |   | | - CPU usage (if enabled), for each host in the openstack cluster
      |   | | | baseline (before test starts)
      |   | | | 1 or more readings during test
      |   | -UDP
      |   | | - for each packet size
      |   | | | throughput value
      |   | | | loss rate
      |   | | | CPU usage (if enabled)
      |   | - ICMP
      |   | | average, min, max and stddev round trip time in ms
Detailed results can also be stored in a file in JSON format using the *--json* command line argument.
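Once stored, the JSON file can be post-processed with any scripting language. Below is a minimal sketch in Python; the ``flows`` and ``desc`` keys are hypothetical placeholders, since the actual schema depends on the VMTP version and options used — inspect a real ``res.json`` for the real layout.

```python
import json
import os
import tempfile

def load_vmtp_results(path):
    """Load a VMTP JSON results file into a plain dictionary.

    Sketch only: probe keys with .get() rather than assuming a schema.
    """
    with open(path) as f:
        return json.load(f)

# Demonstrate on a stand-in file (keys below are hypothetical).
sample = {'version': '2.0.0',
          'flows': [{'desc': 'VM to VM same network'}]}
path = os.path.join(tempfile.mkdtemp(), 'res.json')
with open(path, 'w') as f:
    json.dump(sample, f)

for flow in load_vmtp_results(path).get('flows', []):
    print(flow.get('desc'))
```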
Installation
@ -82,8 +97,10 @@ For people who want to do development for VMTP, it is recommended to set up the
Here is an example for Ubuntu developers, and similar packages can be found and installed on RPM-based distros as well.
.. code::

    $ sudo apt-get install python-dev python-virtualenv git git-review
    $ sudo apt-get install libxml2-dev libxslt-dev libffi-dev libz-dev libyaml-dev libssl-dev
    $ virtualenv vmtpenv
    $ source vmtpenv/bin/activate
    $ git clone git://git.openstack.org/stackforge/vmtp
@ -103,12 +120,14 @@ In its Docker image form, VMTP is located under the /vmtp directory in the conta
To run VMTP directly from the host shell (may require "sudo" up front if not root)
.. code::

    docker run <vmtp-docker-image-name> python /vmtp/vmtp.py <args>
To run VMTP from the Docker image shell:
.. code::

    docker run <vmtp-docker-image-name> /bin/bash
    cd /vmtp
    python vmtp.py <args>
@ -127,85 +146,90 @@ For example, one can decide to mount the current host directory as /vmtp/shared
To get a copy of the VMTP default configuration file from the container:
.. code::

    docker run -v $PWD:/vmtp/shared:rw <docker-vmtp-image-name> cp /vmtp/cfg.default.yaml /vmtp/shared/mycfg.yaml
Assuming you have edited the configuration file "mycfg.yaml", retrieved an openrc file "admin-openrc.sh" from Horizon into the local directory, and would like to get results back in the "res.json" file, you can export the current directory ($PWD), map it to /vmtp/shared in the container in read/write mode, then run the script in the container using files from the shared directory:
.. code::

    docker run -v $PWD:/vmtp/shared:rw -t <docker-vmtp-image-name> python /vmtp/vmtp.py -c shared/mycfg.yaml -r shared/admin-openrc.sh -p admin --json shared/res.json
    cat res.json
Print VMTP Usage
----------------
.. code::

    usage: vmtp.py [-h] [-c <config_file>] [-r <openrc_file>]
                   [-m <gmond_ip>[:<port>]] [-p <password>] [-t <time>]
                   [--host <user>@<host_ssh_ip>[:<server-listen-if-name>]]
                   [--external-host <user>@<ext_host_ssh_ip>]
                   [--access_info {host:<hostip>, user:<user>, password:<pass>}]
                   [--mongod_server <server ip>] [--json <file>]
                   [--tp-tool nuttcp|iperf] [--hypervisor name]
                   [--inter-node-only] [--protocols T|U|I]
                   [--bandwidth <bandwidth>] [--tcpbuf <tcp_pkt_size1,...>]
                   [--udpbuf <udp_pkt_size1,...>] [--no-env] [-d] [-v]
                   [--stop-on-error] [--vm_image_url <url_to_image>]

    OpenStack VM Throughput V2.0.0

    optional arguments:
      -h, --help            show this help message and exit
      -c <config_file>, --config <config_file>
                            override default values with a config file
      -r <openrc_file>, --rc <openrc_file>
                            source OpenStack credentials from rc file
      -m <gmond_ip>[:<port>], --monitor <gmond_ip>[:<port>]
                            Enable CPU monitoring (requires Ganglia)
      -p <password>, --password <password>
                            OpenStack password
      -t <time>, --time <time>
                            throughput test duration in seconds (default 10 sec)
      --host <user>@<host_ssh_ip>[:<server-listen-if-name>]
                            native host throughput (target requires ssh key)
      --external-host <user>@<ext_host_ssh_ip>
                            external-VM throughput (target requires ssh key)
      --access_info {host:<hostip>, user:<user>, password:<pass>}
                            access info for control host
      --mongod_server <server ip>
                            provide mongoDB server IP to store results
      --json <file>         store results in json format file
      --tp-tool nuttcp|iperf
                            transport perf tool to use (default=nuttcp)
      --hypervisor name     hypervisor to use in the avail zone (1 per arg, up to
                            2 args)
      --inter-node-only     only measure inter-node
      --protocols T|U|I     protocols T(TCP), U(UDP), I(ICMP) - default=TUI (all)
      --bandwidth <bandwidth>
                            the bandwidth limit for TCP/UDP flows in K/M/Gbps,
                            e.g. 128K/32M/5G. (default=no limit)
      --tcpbuf <tcp_pkt_size1,...>
                            list of buffer length when transmitting over TCP in
                            Bytes, e.g. --tcpbuf 8192,65536. (default=65536)
      --udpbuf <udp_pkt_size1,...>
                            list of buffer length when transmitting over UDP in
                            Bytes, e.g. --udpbuf 128,2048. (default=128,1024,8192)
      --no-env              do not read env variables
      -d, --debug           debug flag (very verbose)
      -v, --version         print version of this script and exit
      --stop-on-error       Stop and keep everything as-is on error (must cleanup
                            manually)
      --vm_image_url <url_to_image>
                            URL to a Linux image in qcow2 format that can be
                            downloaded from
Configuration File
^^^^^^^^^^^^^^^^^^

VMTP configuration files follow the yaml syntax and contain variables used by VMTP to run and collect performance data.

The default configuration is stored in the cfg.default.yaml file.
Default values should be overwritten for any cloud under test by defining new variable values in a new configuration file that follows the same format. Variables that are not defined in the new configuration file will retain their default values.
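This default-then-override behavior can be sketched as a simple dictionary merge; the keys below are made up for illustration (see cfg.default.yaml for the real variables), and vmtp.py's actual config handling may differ.

```python
def merge_config(defaults, overrides):
    """Apply user-supplied values on top of the defaults; keys absent
    from the user file keep their default values."""
    merged = dict(defaults)
    merged.update(overrides or {})
    return merged

# Hypothetical keys for illustration only.
defaults = {'image_name': 'Ubuntu Server 14.04',
            'vm_image_url': '',
            'flavor_type': 'm1.small'}
user_cfg = {'vm_image_url': 'http://example.com/trusty.qcow2'}

cfg = merge_config(defaults, user_cfg)
print(cfg['flavor_type'])    # default retained
print(cfg['vm_image_url'])   # user value applied
```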
Parameters that you are most certainly required to change are:
@ -216,24 +240,25 @@ Parameters that you are most certainly required to change are:
Check the content of cfg.default.yaml file as it contains the list of configuration variables and instructions on how to set them.
Create one configuration file for your specific cloud and use the *-c* option to pass that file name to VMTP.
**Note:** the configuration file is not needed if VMTP only runs the native host throughput option (*--host*)
OpenStack openrc file
^^^^^^^^^^^^^^^^^^^^^
VMTP requires downloading an "openrc" file from the OpenStack Dashboard (Project|Access & Security|API Access|Download OpenStack RC File)
This file should then be passed to VMTP using the *-r* option or should be sourced prior to invoking VMTP.

**Note:** the openrc file is not needed if VMTP only runs the native host throughput option (*--host*)
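An openrc file is essentially a shell script that exports ``OS_*`` environment variables (OS_AUTH_URL, OS_USERNAME, etc.). A minimal sketch of extracting those variables in Python is shown below; real openrc files may also contain comments, quoting variations and an interactive password prompt, which this ignores.

```python
def parse_openrc(text):
    """Extract OS_* credential variables from openrc-style 'export' lines.

    Simplified sketch: ignores comments, prompts and shell expansion.
    """
    creds = {}
    for line in text.splitlines():
        line = line.strip()
        if line.startswith('export ') and '=' in line:
            key, _, value = line[len('export '):].partition('=')
            creds[key.strip()] = value.strip().strip('"').strip("'")
    return creds

sample = '''
export OS_AUTH_URL=http://10.0.0.1:5000/v2.0
export OS_TENANT_NAME="admin"
export OS_USERNAME="admin"
'''
creds = parse_openrc(sample)
print(creds['OS_AUTH_URL'])
```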
Bandwidth Limit for TCP/UDP Flow Measurements
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Specifying a value with *--bandwidth* will limit the bandwidth when performing throughput tests.

The default behavior for both TCP and UDP is unlimited. For TCP, we leverage the protocol itself to get the best performance, while for UDP we perform a binary search to find the optimal bandwidth.
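Bandwidth values such as "128K", "32M" or "5G" can be converted to bits per second as sketched below; this is an illustration only, and the actual parsing in vmtp.py may accept additional formats.

```python
def parse_bandwidth(value):
    """Convert a --bandwidth style string ('128K', '32M', '5G') to bits
    per second. An empty value means no limit (returned as 0)."""
    if not value:
        return 0
    multipliers = {'K': 10 ** 3, 'M': 10 ** 6, 'G': 10 ** 9}
    suffix = value[-1].upper()
    if suffix in multipliers:
        return int(float(value[:-1]) * multipliers[suffix])
    return int(value)

print(parse_bandwidth('32M'))
```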
@ -243,16 +268,25 @@ This is useful when running vmtp on production clouds. The test tool will use up
Host Selection in Availability Zone
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The *--hypervisor* argument can be used to specify explicitly where to run the test VM in the configured availability zone.

This can be handy, for example, when exact VM placement can impact the data path performance (e.g. rack-based placement when the availability zone spans multiple racks).

The first *--hypervisor* argument specifies on which host to run the test server VM. The second *--hypervisor* argument (in the command line) specifies on which host to run the test client VMs.

The value of the argument must match the hypervisor host name as known by OpenStack (or as displayed using "nova hypervisor-list").

An example of usage is given below.
Upload Images to Glance
^^^^^^^^^^^^^^^^^^^^^^^
VMTP requires a Linux image available in Glance to spawn VMs. It can be uploaded manually through Horizon or the CLI, or VMTP will try to upload the image defined in the configuration file automatically.

There is a candidate image defined in the default config already. It has been verified to work, but other Linux distributions should work as well.

**NOTE:** Due to a limitation of the Python glanceclient API (v2.0), it is not possible to create the image directly from a remote URL, so this feature is implemented with a glance CLI command instead. Be sure to source the OpenStack rc file before running VMTP with this feature.
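The CLI workaround can be sketched as building a glance v1-style command and handing it to a subprocess. This is an illustration, not VMTP's exact invocation: the *--copy-from* flag asks the Glance server itself to fetch the image from the URL, and authentication comes from the previously sourced openrc environment variables.

```python
def build_glance_cli_cmd(image_name, image_url):
    """Build a glance CLI command to register an image from a remote URL.

    Sketch only: flag names follow the glance v1 CLI; the exact command
    used by VMTP may differ. Run it with subprocess.check_call(cmd) after
    sourcing the openrc file.
    """
    return ['glance', 'image-create',
            '--name', image_name,
            '--disk-format', 'qcow2',
            '--container-format', 'bare',
            '--copy-from', image_url]

cmd = build_glance_cli_cmd(
    'Ubuntu Server 14.04',
    'https://cloud-images.ubuntu.com/trusty/current/'
    'trusty-server-cloudimg-amd64-disk1.img')
print(' '.join(cmd))
```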
Examples of running VMTP on an OpenStack Cloud
----------------------------------------------
@ -262,12 +296,22 @@ Preparation
Download the openrc file from the OpenStack Dashboard and save it to your local file system. (In Horizon dashboard: Project|Access & Security|API Access|Download OpenStack RC File)
Upload the Linux image to the OpenStack controller node so that OpenStack is able to spawn VMs. An error will be reported if the Ubuntu image is not available when running the tool. The image can be uploaded using either the Horizon dashboard or the command below:
.. code::

    python vmtp.py -r admin-openrc.sh -p admin --vm_image_url http://<url_to_the_image>
**Note:** Currently, VMTP only supports Linux images in qcow2 format.
If executing a VMTP Docker image, "docker run" (or "sudo docker run") must be placed in front of these commands unless you run a shell script directly from inside the container.
Example 1: Typical Run
""""""""""""""""""""""
Run VMTP on an OpenStack cloud with the default configuration file, use "admin-openrc.sh" as the rc file, and "admin" as the password.
.. code::

    python vmtp.py -r admin-openrc.sh -p admin
This will generate 6 standard sets of performance data:
@ -278,43 +322,56 @@ This will generate 6 standard sets of performance data:
(5) VM to VM different network (inter-node, L3 fixed IP)
(6) VM to VM different network and tenant (inter-node, floating IP)
By default, the performance data of all three protocols (TCP/UDP/ICMP) will be measured for each scenario mentioned above. However, it can be overridden by providing *--protocols*. E.g.

.. code::

    python vmtp.py -r admin-openrc.sh -p admin --protocols IT
This will tell VMTP to only collect ICMP and TCP measurements.
Example 2: Cloud upload/download performance measurement
""""""""""""""""""""""""""""""""""""""""""""""""""""""""
Run VMTP on an OpenStack cloud with a specified configuration file (mycfg.yaml), and save the result to a JSON file:
.. code::

    python vmtp.py -c mycfg.yaml -r admin-openrc.sh -p admin --external-host localadmin@172.29.87.29 --json res.json
This run will generate 8 sets of performance data: the standard 6 sets mentioned above, plus two sets of upload/download performance data for both TCP and UDP.
**Note:** In order to perform the upload/download performance test, an external server must be specified and configured with SSH password-less access. See below for more info.
Example 3: Specify which availability zone to spawn VMs
"""""""""""""""""""""""""""""""""""""""""""""""""""""""

Run VMTP on an OpenStack cloud, spawn the test server VM on tme212, and the test client VM on tme210. Save the result, and perform the inter-node measurement only.

.. code::

    python vmtp.py -r admin-openrc.sh -p lab --inter-node-only --json res.json --hypervisor tme212 --hypervisor tme210
Example 4: Collect native host performance data
"""""""""""""""""""""""""""""""""""""""""""""""
Run VMTP to get native host throughput between 172.29.87.29 and 172.29.87.30 using the localadmin ssh username and run each tcp/udp test session for 120 seconds (instead of the default 10 seconds):
.. code::

    python vmtp.py --host localadmin@172.29.87.29 --host localadmin@172.29.87.30 --time 120
**Note:** This command requires each host to have the VMTP public key (ssh/id_rsa.pub) inserted into the ssh/authorized_keys file in the username home directory, i.e. SSH password-less access. See below for more info.
Example 5: Measurement on pre-existing VMs
""""""""""""""""""""""""""""""""""""""""""
It is possible to run VMTP between pre-existing VMs that are accessible through SSH (using floating IP).
The first IP passed (*--host*) is always the one running the server side. Optionally a server-side listening interface name can be passed if clients should connect using a particular server IP. For example, to measure throughput between 2 hosts using the network attached to the server interface "eth5":

.. code::

    python vmtp.py --host localadmin@172.29.87.29:eth5 --host localadmin@172.29.87.30
**Note:** Prior to running, the VMTP public key must be installed on each VM.
@ -329,10 +386,10 @@ Public clouds are special because they may not expose all OpenStack APIs and may
Refer to the provided public cloud sample configuration files for more information.
SSH Password-less Access
------------------------
For host throughput (*--host*), VMTP expects the target hosts to be pre-provisioned with a public key in order to allow password-less SSH.
Test VMs are created through OpenStack by VMTP with the appropriate public key to allow password-less ssh. By default, VMTP uses a default VMTP public key located in ssh/id_rsa.pub; simply append the content of that file to the .ssh/authorized_keys file under the host login home directory.
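The manual step above (appending the public key to authorized_keys with the usual permissions) can be sketched as follows; this is an illustration run against a temporary directory, not a VMTP utility.

```python
import os
import tempfile

def install_public_key(pub_key_path, home_dir):
    """Append a public key to <home>/.ssh/authorized_keys, creating the
    directory and file with conventional SSH permissions if needed."""
    ssh_dir = os.path.join(home_dir, '.ssh')
    if not os.path.isdir(ssh_dir):
        os.makedirs(ssh_dir)
        os.chmod(ssh_dir, 0o700)
    auth_file = os.path.join(ssh_dir, 'authorized_keys')
    with open(pub_key_path) as src, open(auth_file, 'a') as dst:
        dst.write(src.read().rstrip('\n') + '\n')
    os.chmod(auth_file, 0o600)

# Demonstrate against a throwaway "home" directory with a fake key.
home = tempfile.mkdtemp()
key = os.path.join(home, 'id_rsa.pub')
with open(key, 'w') as f:
    f.write('ssh-rsa AAAAB3... vmtp-test-key')
install_public_key(key, home)
```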
@ -347,11 +404,12 @@ TCP Throughput Measurement
The TCP throughput reported is measured using the default message size of the test tool (64KB with nuttcp). The TCP MSS (maximum segment size) used is the one suggested by the TCP-IP stack (which is dependent on the MTU).
UDP Throughput Measurement
--------------------------
UDP throughput is tricky because of limitations of the performance tools used, limitations of the Linux kernel used, and the criteria for finding the throughput to report.
The default setting is to find the "optimal" throughput with a packet loss rate within the 2%~5% range. This is achieved by successive iterations at different throughput values.
In some cases, it is not possible to converge with a loss rate within that range, and trying to do so may require too many iterations. The algorithm used is empirical and tries to achieve a result within a reasonable and bounded number of iterations. In most cases the optimal throughput is found in less than 30 seconds for any given flow.
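The iterative search described above can be sketched as a bounded binary search over send rates; this is a simplified illustration with a toy loss model, not VMTP's actual algorithm.

```python
def find_udp_rate(measure_loss, low, high, target=(2.0, 5.0), max_iters=20):
    """Binary-search a send rate whose measured loss percentage falls in
    the target range (2%..5% by default), within a bounded iteration count.

    measure_loss(rate) returns the observed loss percentage at that rate.
    """
    rate = high
    for _ in range(max_iters):
        loss = measure_loss(rate)
        if target[0] <= loss <= target[1]:
            return rate
        if loss > target[1]:
            high = rate   # too much loss: back off
        else:
            low = rate    # loss too low: push the rate higher
        rate = (low + high) / 2.0
    return rate

# Toy link model (Mbps): lossless below 800, then loss grows with rate.
link = lambda rate: max(0.0, (rate - 800.0) / 10.0)
best = find_udp_rate(link, low=0.0, high=1000.0)
print(best)
```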
@ -362,6 +420,7 @@ Caveats and Known Issues
========================
* UDP throughput is not available if iperf is selected (the iperf UDP reported results are not reliable enough for iterating)
* If VMTP hangs for native host throughputs, check firewall rules on the hosts to allow TCP/UDP ports 5001 and TCP port 5002
@ -36,12 +36,13 @@ availability_zone: 'nova'
# Change this to use a different DNS server if necessary,
dns_nameservers: [ '8.8.8.8' ]
# VMTP can automatically download a VM image if the image named by
# image_name is missing, for that you need to specify a URL where
# the image can be retrieved
#
# A link to a Ubuntu Server 14.04 qcow2 image can be used here:
# https://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img
vm_image_url: ''
# ----------------------------------------------------------------------------- # -----------------------------------------------------------------------------
# These variables are not likely to be changed # These variables are not likely to be changed
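For reference, a filled-in version of the new setting might look like the fragment below. This is a hypothetical example using the Ubuntu cloud image URL already mentioned in the config comment, not a value shipped with VMTP.

```yaml
# Example only: point vm_image_url at a qcow2 cloud image so VMTP can upload it
# to Glance when the image named by image_name is not found in the cloud.
vm_image_url: 'https://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img'
```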


@@ -33,50 +33,73 @@ class Compute(object):
             image = self.novaclient.images.find(name=image_name)
             return image
         except novaclient.exceptions.NotFound:
-            print 'ERROR: Didnt find the image %s' % (image_name)
             return None
 
-    def copy_and_upload_image(self, final_image_name, server_ip, image_path):
+    def upload_image_via_url(self, glance_client, final_image_name, image_url, retry_count=60):
         '''
-        Copies locally via wget and Uploads image in Nova, if image is
-        not present on Nova post Upload, deletes it
+        Directly uploads image to Nova via URL if image is not present
         '''
-        wget_cmd = "wget --tries=1 http://" + str(server_ip) + "/" + str(image_path)
-        try:
-            subprocess.check_output(wget_cmd, shell=True)
-        except subprocess.CalledProcessError:
-            print 'ERROR: Failed to download, check filename %s via Wget' % (wget_cmd)
-            return 0
-        my_cwd = os.getcwd()
-        my_file_name = os.path.basename(image_path)
-        abs_fname_path = my_cwd + "/" + my_file_name
-        rm_file_cmd = "rm " + abs_fname_path
-        if os.path.isfile(abs_fname_path):
-            # upload in glance
-            glance_cmd = "glance image-create --name=\"" + str(final_image_name) + \
-                         "\" --disk-format=qcow2" + " --container-format=bare < " + \
-                         str(my_file_name)
-            subprocess.check_output(glance_cmd, shell=True)
-            # remove the image file from local dir
-            subprocess.check_output(rm_file_cmd, shell=True)
-            # check for the image in glance
-            glance_check_cmd = "glance image-list"
-            result = subprocess.check_output(glance_check_cmd, shell=True)
-            if final_image_name in result:
-                print 'Image: %s successfully Uploaded in Nova' % (final_image_name)
-                return 1
-            else:
-                print 'Glance image status:\n %s' % (result)
-                print 'ERROR: Didnt find %s image in Nova' % (final_image_name)
-                return 0
-        else:
-            print 'ERROR: image %s not copied over locally via %s' % (my_file_name, wget_cmd)
-            return 0
+        # Ideally we would use the python glanceclient library to perform the
+        # image upload. However, due to a limitation of the v2.0 API right now,
+        # it is impossible to tell Glance to download the image from a URL directly.
+        #
+        # There are two steps to create the image:
+        # (1) Store the binary image data into Glance;
+        # (2) Store the metadata about the image into Glance;
+        # (The order does not matter.)
+        #
+        # The REST API allows doing both steps in one call if a Location header
+        # is provided with the POST request.
+        # (REF: http://developer.openstack.org/api-ref-image-v2.html)
+        #
+        # However the python API doesn't support a customized header in a POST
+        # request, so we would have to do the two steps in two calls. The API
+        # supports (2) perfectly, but (1) only accepts data from a local file,
+        # not a remote URL, so keep the CLI version as the workaround for now:
+        #
+        # image = glance_client.images.create(
+        #     name=str(final_image_name), disk_format="qcow2",
+        #     container_format="bare", Location=image_url)
+
+        # upload in glance via the CLI
+        glance_cmd = "glance image-create --name=\"" + str(final_image_name) + \
+                     "\" --disk-format=qcow2" + " --container-format=bare " + \
+                     " --is-public True --copy-from " + image_url
+        if self.config.debug:
+            print "Will upload image to glance via CLI: %s" % (glance_cmd)
+        subprocess.check_output(glance_cmd, shell=True)
+
+        # check for the image in glance
+        glance_check_cmd = "glance image-list --name \"" + str(final_image_name) + "\""
+        for retry_attempt in range(retry_count):
+            result = subprocess.check_output(glance_check_cmd, shell=True)
+            if "active" in result:
+                print 'Image: %s successfully uploaded to Nova' % (final_image_name)
+                return 1
+            # Sleep between retries
+            if self.config.debug:
+                print "Image not yet active, retrying %s of %s..." \
+                    % ((retry_attempt + 1), retry_count)
+            time.sleep(2)
+
+        print 'ERROR: Cannot upload image %s from URL: %s' % (final_image_name, image_url)
+        return 0
 
     # Remove keypair name from openstack if exists
     def remove_public_key(self, name):
@@ -166,7 +189,7 @@ class Compute(object):
                 return True
             # Sleep between retries
             if self.config.debug:
-                print "[%s] VM not yet found, retrying %s of %s" \
+                print "[%s] VM not yet found, retrying %s of %s..." \
                     % (vmname, (retry_attempt + 1), retry_count)
             time.sleep(2)
         print "[%s] VM not found, after %s attempts" % (vmname, retry_count)
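Both the image upload above and the VM lookup here use the same pattern: poll a check a bounded number of times with a fixed sleep between tries. A generic, dependency-free sketch of that pattern (the names, defaults, and injectable `sleep` parameter are illustrative, not VMTP's API):

```python
import time

def wait_until_active(check_status, retry_count=60, interval=2, sleep=time.sleep):
    """Poll check_status() until it returns "active" or retries run out.

    check_status is any zero-argument callable returning the current status
    string (e.g. parsed from `glance image-list` output). The sleep function
    is injectable so callers and tests can avoid real waiting.
    """
    for _attempt in range(retry_count):
        if check_status() == "active":
            return True
        sleep(interval)  # pause between polls
    return False
```

Injecting `sleep` keeps the bounded-retry logic testable without slowing the test suite down, which is why the sketch takes it as a parameter rather than calling `time.sleep` directly.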

vmtp.py

@@ -34,6 +34,8 @@ import pns_mongo
 import sshutils
 import configure
 
+from glanceclient.v2 import client as glanceclient
+from keystoneclient.v2_0 import client as keystoneclient
 from neutronclient.v2_0 import client as neutronclient
 from novaclient.client import Client
@@ -186,30 +188,32 @@ class VmtpTest(object):
         neutron = neutronclient.Client(**creds)
         self.comp = compute.Compute(nova_client, config)
 
         # Add the script public key to openstack
         self.comp.add_public_key(config.public_key_name,
                                  config.public_key_file)
 
         self.image_instance = self.comp.find_image(config.image_name)
         if self.image_instance is None:
-            """
-            # Try to upload the image
-            print '%s: image not found, will try to upload it' % (config.image_name)
-            self.comp.copy_and_upload_image(config.image_name, config.server_ip_for_image,
-                                            config.image_path_in_server)
-            time.sleep(10)
-            self.image_instance = self.comp.find_image(config.image_name)
-            """
-
-            # Exit the program
-            print '%s: image not found.' % (config.image_name)
-            sys.exit(1)
+            if config.vm_image_url is not None:
+                print '%s: image for VM not found, uploading it ...' \
+                    % (config.image_name)
+                keystone = keystoneclient.Client(**creds)
+                glance_endpoint = keystone.service_catalog.url_for(
+                    service_type='image', endpoint_type='publicURL')
+                glance_client = glanceclient.Client(
+                    glance_endpoint, token=keystone.auth_token)
+                self.comp.upload_image_via_url(
+                    glance_client, config.image_name, config.vm_image_url)
+            else:
+                # Exit the program
+                print '%s: image to launch VM not found. ABORTING.' \
+                    % (config.image_name)
+                sys.exit(1)
+            self.image_instance = self.comp.find_image(config.image_name)
 
         self.assert_true(self.image_instance)
-        print 'Found image: %s' % (config.image_name)
+        print 'Found image %s to launch VM, will continue' % (config.image_name)
         self.flavor_type = self.comp.find_flavor(config.flavor_type)
         self.net = network.Network(neutron, config)
 
         # Create a new security group for the test
@@ -573,6 +577,11 @@ if __name__ == '__main__':
                         action='store_true',
                         help='Stop and keep everything as-is on error (must cleanup manually)')
 
+    parser.add_argument('--vm_image_url', dest='vm_image_url',
+                        action='store',
+                        help='URL from which a Linux qcow2 image can be downloaded',
+                        metavar='<url_to_image>')
+
     (opts, args) = parser.parse_known_args()
 
     default_cfg_file = get_absolute_path_for_file("cfg.default.yaml")
@@ -618,6 +627,14 @@ if __name__ == '__main__':
     config.access_username = None
     config.access_password = None
 
+    ###################################################
+    # Cloud Image URL
+    ###################################################
+    if opts.vm_image_url:
+        config.vm_image_url = opts.vm_image_url
+    else:
+        config.vm_image_url = None
+
     ###################################################
     # MongoDB Server connection info.
     ###################################################