change monitoring's result page and add doc

Change-Id: Ia18754500e82b53919360c4d99e3bee44d07024b
Xin 2016-04-05 13:01:47 -07:00
parent 9b4351644f
commit ee9733c639
5 changed files with 34 additions and 6 deletions


@@ -26,8 +26,8 @@ The table shows the results for each iteration step, with the requested and
measured RPS (HTTP requests per second) and the corresponding aggregated
download throughput (the sum of all downloads for all clients).
-The latency distribution is shows in the chart, where each line corresponds to
-one load level (or iteration in the progression). Lines can be individually
+Each line in the chart represents the latency distribution for one load level
+(or iteration in the progression). Lines can be individually
shown/hidden by clicking on the corresponding legend item.
For example, the largest scale involves 20,000 simultaneous users sending an
@@ -44,6 +44,34 @@ during the scale test.
   :target: https://htmlpreview.github.io/?https://github.com/openstack/kloudbuster/blob/master/doc/source/gallery/http.html

Sample HTTP Monitoring Report
-----------------------------

The report below shows an HTTP monitoring run with 15 HTTP servers, where each
HTTP server receives HTTP requests from 1 HTTP traffic generator that runs in a
separate VM and emulates 1,000 users sending 1 request per second each (for a
total of 1,000 requests per second per HTTP server). The topology used for the
test is 1 tenant, 1 router, 3 networks and 5 HTTP servers per network. The
total duration is set to 300 seconds. These scale settings can be viewed in
the Configuration tab.
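
As a quick sanity check on the aggregate load implied by these settings, the
back-of-the-envelope sketch below (illustrative only, not KloudBuster code;
all variable names are made up) works out the cluster-wide request rate and
total request count:

// Aggregate load implied by the scale settings described above.
var httpServers = 15;            // one traffic generator VM per HTTP server
var usersPerServer = 1000;       // each generator emulates 1,000 users
var requestsPerUserPerSec = 1;   // each user sends 1 request per second
var durationSec = 300;           // total run duration

var rpsPerServer = usersPerServer * requestsPerUserPerSec;  // 1,000 rps per server
var totalRps = httpServers * rpsPerServer;                  // 15,000 rps cluster-wide
var totalRequests = totalRps * durationSec;                 // 4,500,000 requests over the run

console.log(rpsPerServer + ' rps/server, ' + totalRps + ' rps total, ' + totalRequests + ' requests');
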
This stacked chart updates in real time by scrolling to the left and shows how
the latency of HTTP requests evolves over time for each percentile group
(50%, 75%, 90%, 99%, 99.9%, 99.99%, 99.999%). Lines can be individually
shown/hidden by clicking on the corresponding legend item.
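
For reference, a percentile group such as 99.9% is simply the latency value
below which 99.9% of the sampled requests fall. The snippet below is a minimal,
self-contained sketch of that computation (nearest-rank method, with made-up
sample data; it is not the code used by the monitoring page):

// Nearest-rank percentile over a set of latency samples (in msec).
function percentile(samples, p) {
  var sorted = samples.slice().sort(function (a, b) { return a - b; });
  var rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(rank - 1, 0)];
}

var latencies = [12, 15, 9, 22, 18, 250, 14, 11, 30, 16]; // hypothetical samples
[50, 75, 90, 99, 99.9, 99.99, 99.999].forEach(function (p) {
  console.log(p + '%: ' + percentile(latencies, p) + ' msec');
});
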
The compute node where the HTTP servers run is protected against individual
link failures by link aggregation, with 2 physical links connected to 2
different switches.
At 12:19:53, one of the 2 physical links is purposely shut down. As can be seen,
the latency spikes as high as 1664 msec and returns to normal after about 10
seconds.

.. image:: images/kb-http-monitoring.png
Sample Storage Scale Report
---------------------------

Binary file not shown.


File diff suppressed because one or more lines are too long


@@ -125,7 +125,7 @@ angular.module('kbWebApp')
  })
  .service('kbHttp', function ($http, $q) {
    var backendUrl = $(location).attr('protocol') +"//" + $(location).attr('host') + "/api";
-    // var backendUrl = "http://127.0.0.1:8080/api";
+    //var backendUrl = "http://127.0.0.1:8080/api";
    this.getMethod = function (url) {
      var deferred = $q.defer(); // declaration
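
The remainder of getMethod is outside this hunk. For context, a typical
AngularJS 1.x $q/$http wrapper of this shape resolves the deferred with the
response body and rejects it on error, roughly as sketched below (an assumed
shape for illustration, not necessarily the actual code in this file):

// Sketch of a $q-based GET wrapper (assumed shape, not the file's verbatim code).
this.getMethod = function (url) {
  var deferred = $q.defer();
  $http.get(backendUrl + url).then(
    function (response) { deferred.resolve(response.data); },   // success: return payload
    function (response) { deferred.reject(response.status); }   // failure: surface HTTP status
  );
  return deferred.promise;
};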

File diff suppressed because one or more lines are too long