During the test run, you can add widgets from the gallery to the dashboard to display real-time data detailing the results of your load test.
You can also access the results after the test has completed.
Note: Before you run your load test, check that your monitor or monitoring tool is up and running and that StormRunner Load is connected to it. For details, see Monitors.
Where do I find it?
Do one of the following:
- Navigate to the Results page.
For a selected test run, click (more options) and select Dashboard.
- Navigate to the Load Tests page.
- Select a test and navigate to the Runs tab.
- For a selected test run, click (more options) and select Dashboard.
Add a custom snapshot of a dashboard widget to the report
You can save a snapshot of a widget for a specific filter, split, and time period and display it in the report for the run.
Add a snapshot
- Navigate to Results > Dashboard for a specific run.
- Zoom to a specific time frame in the test run.
- Configure what data is displayed in a widget.
- Click in the widget header to save the snapshot.
For details on viewing and configuring snapshots, see Custom snapshot in Reports.
Click to open the dialog box.
For any script displayed, change the number of Vusers to the new number you want. You cannot exceed the maximum number of users displayed.
Example: For any of the scripts listed, you can enter any number between 0 and 20.
When you remove Vusers, you are actually suspending them. If you later decide to add more Vusers, the suspended Vusers are added first.
If the script was created using VuGen, select Runtime Settings > Browser Emulation > Simulate a new user on each iteration to enable you to resume suspended Vusers during a test run.
Suspended Vusers will run a vuser_end() action at the conclusion of the test. This may cause unexpected behavior in the test results.
Note: You cannot change the load for scripts that contain rendezvous points.
Evaluate a test's status
During a test run, the test summary contains gauges displaying the current value of the following metrics:
|Running Vusers||The total number of Vusers either running in the test or in the initialization stage. A Vuser (virtual user) emulates user actions in your application.|
|Suspended Vusers||The number of Vusers that have been suspended during the test with the Change load feature.|
|Throughput||The amount of data received from the server every second.|
|Transactions per second||The number of transactions completed per second.|
|Hits Per Second||The number of hits (HTTP requests) to the Web server per second.|
|Failed Vusers||The number of Vusers that have stopped running or have stopped reporting measurements due to server or script error.|
|Failed Transactions||The number of transactions that did not successfully complete.|
|Successful transactions||The number of transactions that have completed successfully.|
|Errors||The number of script errors.|
After a test run, the test summary contains general information about what happened during the test:
|Date & Time||Date and time the test ended.|
|Duration||The duration of the test, including ramp up and tear down.|
|API Vusers||A Vuser emulates a user's actions at the network API level. For example, HTTP requests and responses without rendering the UI.|
|UI Vusers||A virtual user that emulates a user's actions on the UI level. For example, a button click. Since the UI is rendered for each Vuser, these Vusers consume significantly more processing resources.|
Toggle between the various analytics:
|Status||The status of the test.|
Note: If you selected General Settings > Group Transactions when you configured the load test, SLAs are calculated on the transaction groups and not on individual transactions. This means that a test can have a status of Passed, even though an individual transaction may have failed.
The notification area alerts you to the problems that were found during the test run in the following categories:
Lists transactions that broke the defined SLA threshold.
Indicates an SLA warning has occurred.
Indicates an SLA warning has been detected. Drill down on geographies to view it.
Measurements that significantly changed their behavior.
Indicates an anomaly has occurred.
Indicates an anomaly has been detected. Drill down on geographies and scripts to view it.
What does it tell me about my test?
Each measurement is listed by rank, illustrating the likelihood that it caused the anomaly.
The likelihood that a measurement caused the anomaly is determined by three criteria:
Shows the number of errors encountered while running the scripts in the test. Click the number to display a list of the errors.
From the list, you can:
Lists scripts, per load generator, that caused CPU or memory consumption to exceed 85% capacity for 60 seconds or more.
To add this information as a widget on the dashboard, click .
Lists warnings about the number of script errors encountered during a load test that could not be sent to Splunk.
Note: This tab is displayed only for load tests that have been configured to stream script errors to Splunk.
From the Results page, you can group your load tests by the following columns:
- Test name
- Triggered by
You can also use the date picker to specify a date range to display load tests run for that period.
Change how time is displayed in your results
You can view results in test or clock time.
When you change the time format, all measurements, including SLA warnings and anomalies, are displayed in the selected format.
- Test time. Time is displayed according to the duration of the test. For example, if you run a test for 20 minutes, 10:00 refers to ten minutes into the test run.
- Clock time. Time is displayed according to the server time during the test run. For example, 10:00 refers to 10:00 a.m.
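As a sketch of the difference between the two formats, assuming the test started at 9:50 a.m. server time (the start time and formatting functions here are illustrative, not part of StormRunner Load's API):

```python
from datetime import datetime, timedelta

def to_test_time(start: datetime, sample: datetime) -> str:
    """Format a sample's time as elapsed time since the test started (test time)."""
    elapsed = sample - start
    total = int(elapsed.total_seconds())
    return f"{total // 60:02d}:{total % 60:02d}"

def to_clock_time(sample: datetime) -> str:
    """Format a sample's time as server wall-clock time (clock time)."""
    return sample.strftime("%H:%M")

# A data point recorded ten minutes into a run that started at 9:50 a.m.
start = datetime(2023, 5, 1, 9, 50)
sample = start + timedelta(minutes=10)
print(to_test_time(start, sample))  # 10:00 -> ten minutes into the run
print(to_clock_time(sample))        # 10:00 -> 10:00 a.m.
```

The same data point is labeled 10:00 in both formats here only because of the chosen start time; a run starting at 9:00 a.m. would show the same sample as test time 10:00 but clock time 09:10.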
Select the required button from the drop-down list to specify where to display the legend.
|Toggle to display the graph legend on the left.|
|Toggle to display the graph legend at the bottom.|
|Toggle to hide the graph legend.|
Toggle the button to view dashboard results in different views.
|Enables you to monitor several charts at once.|
|Enables you to view measurement details. For details, see Widget layouts.|
Graph or Summary view
From the widget toolbar, select the (graph) or (summary) view.
The summary format displays the following data:
|#Values||The number of data points in the data set.|
|Avg||The average of the values in the data set.|
|Last||The last value collected for the data set.|
|Max||The highest value in the data set.|
|Med||The median value of the data set.|
|Min||The lowest value in the data set.|
|STD||The amount of variation from the average.|
- If the number of metrics in a widget exceeds 300, results are displayed in table view only.
- If the number of metrics in a widget exceeds 1000, only 1000 results are displayed.
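The summary statistics above follow standard definitions and can be sketched as follows; treating STD as the population standard deviation is an assumption, not confirmed by the product documentation:

```python
import statistics

def summarize(values):
    """Compute the summary-view statistics for a widget's data set."""
    return {
        "#Values": len(values),                 # number of data points
        "Avg": statistics.fmean(values),        # average of the values
        "Last": values[-1],                     # last value collected
        "Max": max(values),                     # highest value
        "Med": statistics.median(values),       # median value
        "Min": min(values),                     # lowest value
        "STD": statistics.pstdev(values),       # variation from the average (assumed population std dev)
    }

print(summarize([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]))
```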
The following table summarizes display options.
|Filter||Displays or hides data in the widget from a data group. For example, if you have distributed your Vusers into two locations, California and Virginia, you can choose to hide the results from Virginia.|
|Split||Displays a line on the graph for each data group. For example, if you distributed your Vusers into two geographic locations, California and Virginia, selecting Split Location displays the results for California in one line and for Virginia in another.|
|Layers||Hides selected elements from the graph display.|
Note: Not all metrics enable you to configure what data is displayed. For example, you cannot filter or split server metrics.
The time frame for which results are displayed is shown at the top of the dashboard.
You can change the time range using one of these options:
- Drag, stretch, or shrink the time slider.
- Select a preset time range (5 minutes, 15 minutes, 1 hour, All).
- Change the start and end time counters.
Configure graphs on the dashboard
Add a widget to the dashboard from the gallery.
- Click .
Select one of the following measurement types:
- Mobile (for TruClient-Native Mobile tests only)
- Flex (for RTMP/RTMPT metrics only)
- MQTT (for MQTT tests only)
- User Data Points (for user-defined data points only)
Click a widget to add it to the dashboard.
Rearrange the dashboard
- To move a widget, drag and drop it to the selected area of the dashboard.
- To resize a widget, drag the bottom-right corner.
Compare two measurements in one widget
Compare in current run
Combine two measurements in one widget, from the same run, to visually compare the results.
- Select a widget, such as Running Vusers, on the dashboard.
- From the widget header, click the button. The widget gallery opens.
- From the gallery or the notification area, select a widget, such as Hits per Second, to merge.
- The two measurements are overlayed, each with its own set of filters.
- The original measurement is displayed as a solid line and the merged measurement is displayed as a dotted line.
Compare with a benchmark run
You can combine two measurements in one widget, one from the current run and one from a benchmark run, to visually compare the results.
To specify a test as a benchmark, navigate to Load Tests > Runs and select a run. Click (more options) and select Set as benchmark.
- From the dashboard, select a widget, such as Running Vusers.
- From the widget header, click the +Compare with Benchmark button.
- The two measurements are overlayed, each with its own set of filters.
- The current measurement is displayed as a solid line and the benchmark measurement is displayed as a dotted line.
If you want to separate the measurements, click the X from the benchmark widget header.
You can export your test data to a .csv file.
- Select Results and, for the test whose data you want to export, click (more options) and select CSV.
From the dialog box, do the following:
|Action||How to|
|Select a time frame||Use the time slider to specify the time frame to export.|
|Select metrics||Select which metrics to include in the report.|
|Calculate||Based on the time frame and metrics selected, calculates the highest level of data granularity (in seconds) available. Note: The shorter the time frame and the fewer metrics selected, the higher the level of data granularity available.|
|Set data granularity||Specify the number of seconds between each data point by entering a multiple of the calculated data granularity available.|
|Export||Starts the export to a .csv file.|
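As an illustration of how the granularity multiple behaves, here is a minimal sketch; the function and the assumption that the export simply keeps one point per interval are hypothetical:

```python
def downsample(points, base_granularity, requested):
    """Keep one data point per `requested` seconds.

    `requested` must be a multiple of the base granularity calculated
    for the selected time frame and metrics (an assumption about how
    the export applies the setting).
    """
    if requested % base_granularity != 0:
        raise ValueError("granularity must be a multiple of the base")
    step = requested // base_granularity
    return points[::step]

# Data collected every 5 seconds; exporting at 15-second granularity
# keeps every third point.
points = list(range(0, 60, 5))
print(downsample(points, 5, 15))  # [0, 15, 30, 45]
```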
Download the .csv file.
- In the menu bar, click to open the Notification area.
- The notification entry for the export shows whether the export is still in progress or if it has completed.
If it has completed, click the Download button in the notification entry to download the .csv file.
Note: The .csv file is available for download for one week. After this time the export is marked as expired.
Optionally, you can view more details about the export by clicking the arrow to the right of the notification entry.
Note: You can export a maximum of three .pdf and .csv files simultaneously.
The following table summarizes the data columns contained in the file.
The network time stamp (in milliseconds).
Tip: In Excel, you can convert the time stamp to a time format.
|clock_time||The real time stamp based on the ISO 8601 standard.|
|metric||The name of the metric. For example, Running Vusers or ATRT.|
|val||The value of the metric.|
|dim1, dim2, dim3...||Depending on the metric type, other values are exported. For example, geographic area, script name, server name, or transaction name.|
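A minimal sketch of converting the exported time stamp to a readable time, assuming the millisecond time stamp counts from the Unix epoch (consistent with the ISO 8601 clock_time column):

```python
from datetime import datetime, timezone

def ms_to_clock_time(ms: int) -> str:
    """Convert a millisecond Unix-epoch time stamp to an ISO 8601 string.

    Assumption: the exported time stamp is milliseconds since the Unix
    epoch; adjust the timezone to match your server time if needed.
    """
    return datetime.fromtimestamp(ms / 1000, tz=timezone.utc).isoformat()

print(ms_to_clock_time(0))  # 1970-01-01T00:00:00+00:00
```

In Excel, an equivalent conversion is typically `=A2/86400000 + DATE(1970,1,1)` with the cell formatted as a time, again assuming a millisecond Unix-epoch time stamp.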
Display load generator script assignment
For load tests run on on-premises load generators, you can display the load generators to which the scripts in the load test were assigned.
To display the assignments, click (more options) and select Download > LG Scripts. The assignments are downloaded to an Excel file that shows the names of both the on-premises load generator and the assigned script.
For details on assigning scripts to run on specific on-premises load generators for a load test, see Enable script assignment.