Dashboard

During the test run, you can add widgets from the gallery to the dashboard to display real-time data detailing the results of your load test.

You can also access the results after the test has completed.

Note: Before you run your load test, check that your monitor or monitoring tool is up and running and that StormRunner Load is connected to it. For details, see Monitors.

Where do I find it?

Do one of the following:

Results page

  1. Navigate to the Results page.
  2. For a selected test run, click (more options) and select Dashboard.

Load Tests

  1. Navigate to the Load Tests page.
  2. Select a test and navigate to the Runs tab.
  3. For a selected test run, click (more options) and select Dashboard.

Back to top

Add a custom snapshot of a dashboard widget to the report

You can save a snapshot of a widget for a specific filter, split, and time period and display it in the report for the run.

Add a snapshot

  1. Navigate to Results > Dashboard for a specific run.
  2. Zoom to a specific time frame in the test run.
  3. Configure what data is displayed in a widget.
  4. Click in the widget header to save the snapshot.

For details on viewing and configuring snapshots, see Custom snapshot in Reports.

Back to top

Dynamically increase or decrease running Vusers during the test run

  1. Click the Change load button to open the dialog box.

  2. For any displayed script, change the current number of Vusers to the new number you want. You cannot exceed the maximum number of Vusers displayed.

    Example: For any of the scripts listed, you can enter any number between 0 and 20.

  3. Click Apply.

When you remove Vusers, you are actually suspending them. If you later decide to add more Vusers, the suspended Vusers are added first.

If the script was created using VuGen, select Runtime Settings > Browser Emulation > Simulate a new user on each iteration to enable you to resume suspended Vusers during a test run.

Suspended Vusers will run a vuser_end() action at the conclusion of the test. This may cause unexpected behavior in the test results.

Note: You cannot change the load for scripts that contain rendezvous points.

Back to top

Evaluate a test's status

During a test run, the test summary contains gauges displaying the current value of the following metrics:

Metric Description
Running Vusers The total number of Vusers either running in the test or in the initialization stage. (A virtual user emulates user actions in your application; you define how many Vusers will perform the business process recorded in your script during the load test.)

Suspended Vusers The number of Vusers that have been suspended during the test with the Change load feature.
Throughput The amount of data received from the server every second.
Transactions per second The number of transactions completed per second.
Hits Per Second The number of hits (HTTP requests) to the Web server per second.
Failed Vusers The number of Vusers that have stopped running or have stopped reporting measurements due to server or script error.
Failed Transactions The number of transactions that did not successfully complete.
Successful transactions The number of transactions that have completed successfully.
Errors The number of script errors.
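
For reference, the rate metrics above are per-second aggregations over the raw samples collected during the run. The following minimal sketch in Python illustrates the definitions of hits per second and throughput only; it is not how StormRunner Load computes them internally, and the sample data is invented.

  from collections import defaultdict

  # Invented sample data: one (second_into_test, response_size_in_bytes) entry
  # per HTTP request (hit) received from the server.
  hits = [(0, 512), (0, 2048), (1, 1024), (1, 4096), (1, 256), (2, 8192)]

  hits_per_second = defaultdict(int)    # Hits per second
  bytes_per_second = defaultdict(int)   # Throughput (bytes received per second)

  for second, size in hits:
      hits_per_second[second] += 1
      bytes_per_second[second] += size

  for second in sorted(hits_per_second):
      print(f"t={second}s  hits/s={hits_per_second[second]}  "
            f"throughput={bytes_per_second[second]} bytes/s")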

After a test run, the test summary contains general information about what happened during the test:

Element Description
Date & Time The date and time the test ended.
Duration The duration of the test, including ramp up and tear down.

API Vusers A Vuser emulates a user's actions at the network API level. For example, HTTP requests and responses without rendering the UI.
UI Vusers A virtual user that emulates a user's actions on the UI level. For example, a button click. Since the UI is rendered for each Vuser, these Vusers consume significantly more processing resources.
(more options) Toggle between the various analytics.

Status The status of the test:

  • Aborted. You stopped the test during initialization.
  • Stopped. You stopped the test and no SLAs were broken.
  • Failed. The test has ended and one or more SLAs were broken.
  • Halted. StormRunner Load stopped the test due to a security limitation.
  • Passed. The test has ended without any failed transactions.
  • Running. The test is running.
  • System error. StormRunner Load was unable to run the test due to a server error.

Note: If you selected General Settings > Group Transactions when you configured the load test, SLAs are calculated on the transaction groups and not on individual transactions. This means that a test can have a status of Passed, even though an individual transaction may have failed.

Back to top

Find out if anything went wrong with your test

The notification area alerts you to problems found during the test run, in the following categories:

Notification Type Description
SLA warning

Lists transactions that broke the defined SLA threshold.

Indicates an SLA warning has occurred.

Indicates an SLA warning has been detected. Drill down on geographies to view it.

Anomalies

Measurements that significantly changed their behavior.

Indicates an anomaly has occurred.

Indicates an anomaly has been detected. Drill down on geographies and scripts to view it.

What does it tell me about my test?

Each measurement is listed by rank, indicating the likelihood that it caused the anomaly.

The likelihood that a measurement caused the anomaly is determined by three criteria (illustrated in the sketch after this table):

  • Which data point was the earliest to deviate from the baseline?
  • How large was the deviation?
  • How long did the deviation last?
Script errors

Shows the number of errors encountered while running the scripts in the test. Click the number to display a list of the errors.

From the list, you can:

  • Click to add the general error information as a widget on the dashboard.
  • Click the row of a specific error to add an Errors per second graph for the error to the dashboard.
  • For TruClient scripts, click to view a snapshot of the error.
  • Click to download all the error snapshots to a zip file. Each error snapshot is included as a separate file within the zip file.

Known limitations

  • The number of snapshots generated per test is limited to 20.
  • For each unique error ID, only one snapshot is generated.
LG Alerts

Lists scripts, per load generator, that caused CPU or memory consumption to exceed 85% capacity for 60 seconds or more.

To add this information as a widget on the dashboard, click .

Splunk Alerts

Lists warnings about the number of script errors encountered during a load test that could not be sent to Splunk.

Note: This tab is displayed only for load tests that have been configured to stream script errors to Splunk.
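
To illustrate how the three anomaly criteria above could combine into a rank, here is a minimal, purely illustrative sketch in Python. It is not StormRunner Load's actual ranking algorithm; the weights, measurement names, and deviation figures are invented.

  # Toy ranking of measurements by likelihood of causing an anomaly, based on
  # the three criteria above: onset, size, and duration of the deviation from
  # the baseline. Weights and data are invented.
  deviations = {
      # name: (seconds until first deviation, peak deviation as a fraction
      #        of the baseline, duration of the deviation in seconds)
      "DB server CPU":             (30, 0.80, 120),
      "App server memory":         (90, 0.40,  60),
      "Transaction response time": (45, 1.50,  90),
  }

  def score(first_deviation_s, relative_size, duration_s):
      # Earlier onset, larger deviation, and longer duration all raise the score.
      return 100.0 / (1.0 + first_deviation_s) + 10.0 * relative_size + duration_s / 60.0

  ranked = sorted(deviations.items(), key=lambda kv: score(*kv[1]), reverse=True)
  for rank, (name, _) in enumerate(ranked, start=1):
      print(rank, name)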

Back to top

Manage a large set of tests

From the Results page, you can group your load tests by the following columns:

  • Test name
  • Triggered by
  • Status
  • Date

You can also use the date picker to specify a date range and display the load tests run during that period.

Back to top

Change how time is displayed in your results

You can view results in test or clock time.

When you change the time format, all measurements, including SLA warnings and anomalies, are displayed in the selected format.

  • Test time. Time is displayed according to the duration of the test. For example, if you run a test for 20 minutes, 10:00 refers to ten minutes into the test run.
  • Clock time. Time is displayed according to the server time during the test run. For example, 10:00 refers to 10:00 a.m.
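
When you post-process results outside the dashboard, converting between the two formats is simple arithmetic relative to the run's start time. A minimal sketch in Python, assuming a hypothetical start time of 09:50 a.m.:

  from datetime import datetime, timedelta

  # Hypothetical start time of the test run (server clock time).
  test_start = datetime(2019, 6, 3, 9, 50, 0)

  def test_to_clock(test_time):
      """Convert 'MM:SS' test time (offset into the run) to clock time."""
      minutes, seconds = map(int, test_time.split(":"))
      return test_start + timedelta(minutes=minutes, seconds=seconds)

  # Ten minutes into the run (test time 10:00) is 10:00:00 a.m. clock time here.
  print(test_to_clock("10:00"))   # 2019-06-03 10:00:00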

Back to top

Change how data is displayed in the dashboard or a widget

Widget views

Select the required button from the drop-down list to specify where to display the legend.

Widget view Description
Toggle to display the graph legend on the left.
Toggle to display the graph legend at the bottom.
Toggle to hide the graph legend.

Dashboard view

Toggle the button to view dashboard results in different views.

Dashboard view Description
Enables you to monitor several charts at once.
Enables you to view measurement details. For details, see Widget layouts.

Graph or Summary view

From the widget toolbar, select the (graph) or (summary) view.

The summary format displays the following data:

Data point Description
#Values The number of data points in the data set.
Avg The average of the values in the data set.
Last The last value collected for the data set.
Max The highest value in the data set.
Med The median value of the data set.
Min The lowest value in the data set.
STD The standard deviation: the amount of variation from the average of the data set.

Note:  

  • If the number of metrics in a widget exceeds 300, results are displayed in table view only.
  • If the number of metrics in a widget exceeds 1000, only 1000 results are displayed.
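
For reference, the summary values above correspond to standard descriptive statistics. A minimal sketch in Python of how they can be computed (the data points are invented, and the exact method StormRunner Load uses, for example for STD, may differ):

  import statistics

  values = [120, 135, 128, 160, 142]   # invented data points for one metric

  summary = {
      "#Values": len(values),
      "Avg": statistics.mean(values),
      "Last": values[-1],
      "Max": max(values),
      "Med": statistics.median(values),
      "Min": min(values),
      "STD": statistics.pstdev(values),   # population standard deviation
  }
  print(summary)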

Back to top

Configure what data is displayed in a widget

The following table summarizes display options.

Type Description Options
Filter

Shows or hides data from a specific data group in the widget.

For example, if you have distributed your Vusers into two locations, California and Virginia, you can choose to hide the results from Virginia.

  • Location
  • Script name
  • Emulation
Split

Displays a line on the graph for each data group.

For example, let's assume you distributed your Vusers into two geographic locations, California and Virginia. By selecting Split Location, the widget displays the results for California in one line and for Virginia in another.

  • Location
  • Script name
  • Error Id
  • Emulation
  • Transaction status
  • HTTP status code
Layers Hides selected elements from the graph display.
  • y-Axis
  • x-Axis
  • Baseline sleeve
  • SLA breaks
  • SLA Line

Note: Not all metrics enable you to configure what data is displayed. For example, you cannot filter or split server metrics.

Back to top

Zoom to a specific time frame in the test run

The time frame for which results are displayed is shown at the top of the dashboard.

You can change the time range using one of these options:

  • Drag, stretch, or shrink the time slider.
  • Select a preset time range (5 minutes, 15 minutes, 1 hour, All).
  • Change the start and end time counters.

Back to top

Configure graphs on the dashboard

Add a widget to the dashboard from the gallery.

  1. Click to open the widget gallery.
  2. Select one of the following measurement types:

    • Client
    • Transaction
    • Monitor
    • Mobile (for TruClient-Native Mobile tests only)
    • Flex (for RTMP/RTMPT metrics only)
    • MQTT (for MQTT tests only)
    • User Data Points (for user-defined data points only)
  3. Click a widget to add it to the dashboard.

Rearrange the dashboard

  1. To move a widget, drag and drop it in the required area of the dashboard.
  2. To resize a widget, drag its bottom-right corner.

Back to top

Compare two measurements in one widget

Compare in current run

Combine two measurements from the same run in one widget to visually compare the results.

  1. Select a widget, such as Running Vusers, on the dashboard.
  2. From the widget header, click the button. The widget gallery opens.
  3. From the gallery or the notification area, select a widget, such as Hits per Second, to merge.

    • The two measurements are overlaid, each with its own set of filters.
    • The original measurement is displayed as a solid line and the merged measurement is displayed as a dotted line.

Compare to a benchmark run

Combine two measurements in one widget, one from the current run and one from a benchmark run, to visually compare the results.

  1. To specify a test as a benchmark, navigate to Load Tests > Runs and select a run. Click (more options) and select Set as benchmark.

  2. From the dashboard, select a widget, such as Running Vusers.
  3. From the widget header, click the +Compare with Benchmark button.

    • The two measurements are overlaid, each with its own set of filters.
    • The current measurement is displayed as a solid line and the benchmark measurement is displayed as a dotted line.

To separate the measurements, click the X in the benchmark widget header.

Explore widget layout

Back to top

Export data to a .csv file

You can export your test data to a .csv file.

  1. Navigate to the Results page. For the test whose data you want to export, click (more options) and select CSV.
  2. From the dialog box, do the following:

    Action How to

    Select a time frame Use the time slider to specify the time frame to export.
    Select metrics Select which metrics to include in the report.
    Calculate Based on the time frame and metrics selected, calculate the highest level of data granularity (in seconds) available.
    Note: The shorter the time frame and the fewer metrics selected, the higher the level of data granularity available.
    Set data granularity Specify the number of seconds between each data point by entering a multiple of the calculated data granularity. For example, if the calculated granularity is 5 seconds, you can enter 5, 10, 15, and so on.
    Export Start the export to a .csv file.
  3. Download the .csv file.

    1. In the menu bar, click to open the Notification area.
    2. The notification entry for the export shows whether the export is still in progress or has completed.
    3. When the export has completed, click the Download button in the notification entry to download the .csv file.

      Note: The .csv file is available for download for one week. After this time the export is marked as expired.

    4. Optionally, you can view more details about the export by clicking the arrow to the right of the notification entry.

    Note: You can export a maximum of three .pdf and .csv files simultaneously.

The following table summarizes the data columns contained in the file.

Column name Description

time_stamp The network time stamp (in milliseconds).
Tip: In Excel, to convert the time stamp to a time format, use the formula =<Timestamp>/1000/86400 and set the cell format to Time with the type *13:30:55.

clock_time The real time stamp based on the ISO 8601 standard.
metric The name of the metric. For example, Running Vusers or ATRT.
val The value of the metric.
dim1, dim2, dim3... Depending on the metric type, other values are exported. For example, geographic area, script name, server name, or transaction name.
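
If you process the exported file programmatically, the same conversion shown in the Excel tip applies: time_stamp holds milliseconds. A minimal sketch in Python using pandas (the file name is hypothetical; the column names follow the table above):

  import pandas as pd

  # Hypothetical file name; columns are as described in the table above.
  df = pd.read_csv("load_test_export.csv")

  # time_stamp is in milliseconds, so unit="ms" mirrors the Excel formula
  # =<Timestamp>/1000/86400 (milliseconds -> seconds -> days).
  df["time"] = pd.to_datetime(df["time_stamp"], unit="ms")

  # Example: average value of each exported metric.
  print(df.groupby("metric")["val"].mean())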

Back to top

Display load generator script assignment

For load tests run on on-premises load generators, you can display the load generators to which the scripts in the load test were assigned.

To display the assignments, click (more options) and select Download > LG Scripts. The assignments are downloaded to an Excel file that lists each on-premises load generator together with its assigned scripts.

For details on assigning scripts to run on specific on-premises load generators for a load test, see Enable script assignment.

Back to top

See also: