Former dashboard
This topic describes LoadRunner Cloud's former dashboard. During the test run, you can add widgets from the gallery to the dashboard to display real-time data detailing the results of your load test. For the new enhanced dashboard, see Dashboard.
You can also access the results after the test has completed.
Note: Before you run your load test, check that your monitor or monitoring tool is up and running and that LoadRunner Cloud is connected to it. For details, see Monitors.
Where do I find it?
- Navigate to the Load Tests page, select a test, and open the Runs tab, or navigate to the Results page.
- For a selected test run, click (more options) and select Dashboard.
- Switch off the New Dashboard & Report toggle.
Note: If the New Dashboard & Report toggle is switched on, the new dashboard is displayed. For details on the new dashboard, see Dashboard.
Add a custom snapshot of a dashboard widget to the report
You can save a snapshot of a widget for a specific filter, split, and time period and display it in the report for the run.
To add a snapshot:
- Navigate to Results > Dashboard for a specific run.
- Zoom to a specific time frame in the test run.
- Configure what data is displayed in a widget.
- Click the snapshot button in the widget header to save the snapshot.
For details on viewing and configuring snapshots, see Preview the snapshot in the report.
Pause scheduling during a test run
You can configure a load test to enable you to pause the schedule while the test is running. This gives you more control over the test's workload.
You can pause the scheduling multiple times during a test run, but the total pause time cannot exceed the maximum time that you set in the General settings. To enable schedule pausing, select the Pause scheduling up to check box in the load test's General settings. For details, see Pause scheduling.
- To pause scheduling when viewing an active test run in the dashboard, click the Pause scheduling button on the toolbar. A counter displays the remaining time that the schedule can be paused during the run.
- To resume scheduling, click the resume button.
In the test run's report, the Pause Scheduling table displays the pause and resume instances that occurred during the run. For details, see Report.
Change the Vuser load dynamically
You can dynamically increase or decrease the number of running Vusers during the test run. To modify the number of running Vusers, make sure the Add Vusers option is selected in the test's General settings. For details, see Add Vusers.
To change the number of running Vusers during a test run:
- Click the Change Load button to open the Change Load dialog box.
- For any script displayed, change the number of current users to the new number you want. You cannot exceed the maximum number of users displayed.
Example: If the maximum displayed is 20, you can enter any number between 0 and 20 for any of the scripts listed.
- Click Apply.
When you remove Vusers, you are actually suspending them. If you later decide to add more Vusers, the suspended Vusers are added first.
If the script was created using VuGen, enable the Simulate a new user on each iteration option (Runtime Settings > Browser Emulation) to allow you to resume suspended Vusers during a test run.
Suspended Vusers will run a vuser_end() action at the conclusion of the test. This may cause unexpected behavior in the test results.
Note: You cannot change the load for scripts that contain rendezvous points.
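The behavior described above maps to the structure of the Vuser script itself. The following is a minimal sketch, assuming a standard VuGen script layout; the rendezvous name is illustrative and not taken from any particular test:

```c
/* Minimal VuGen script sketch (illustrative only).
   In VuGen these sections live in separate files. */

/* vuser_init.c -- runs once when the Vuser starts. */
vuser_init()
{
    return 0;
}

/* Action.c -- runs once per iteration. */
Action()
{
    /* A rendezvous point makes Vusers wait for each other before
       continuing. Scripts that contain a rendezvous point cannot have
       their load changed dynamically. The name is illustrative. */
    lr_rendezvous("checkout_peak");

    return 0;
}

/* vuser_end.c -- runs once when the Vuser finishes. Suspended Vusers
   also execute this section at the end of the test, which may affect
   the test results. */
vuser_end()
{
    return 0;
}
```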
Evaluate a test's status
During a test run, the test summary contains gauges displaying the current value of the following metrics:
Metric | Description |
---|---|
Running Vusers | The total number of Vusers either running in the test or in the initialization stage. |
Suspended Vusers | The number of Vusers that have been suspended during the test with the Change load feature. |
Throughput | The amount of data received from the server every second. |
Transactions per second | The number of transactions completed per second. |
Hits Per Second | The number of hits (HTTP requests) to the Web server per second. |
Failed Vusers | The number of Vusers that have stopped running or have stopped reporting measurements due to server or script error. |
Failed Transactions | The number of transactions that did not successfully complete. |
Successful transactions | The number of transactions that have completed successfully. |
Errors | The number of script errors. |
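To relate these gauges to the script level, the following is a minimal sketch, assuming a VuGen web-HTTP script; the transaction name and URL are placeholders, not values from a real test:

```c
/* Action.c -- illustrative only; the transaction name and URL are placeholders. */
Action()
{
    /* Everything between lr_start_transaction and lr_end_transaction is
       measured as one transaction and feeds Transactions per second and
       the Failed/Successful transaction counts. */
    lr_start_transaction("load_home_page");

    /* Each web_url call issues HTTP requests, which count toward
       Hits Per Second; the response data counts toward Throughput. */
    web_url("home",
        "URL=http://example.com/",
        LAST);

    if (web_get_int_property(HTTP_INFO_RETURN_CODE) >= 400) {
        /* lr_error_message adds to the Errors count;
           LR_FAIL marks the transaction as failed. */
        lr_error_message("Home page returned an HTTP error");
        lr_end_transaction("load_home_page", LR_FAIL);
    }
    else {
        lr_end_transaction("load_home_page", LR_PASS);
    }

    return 0;
}
```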
After a test run, the test summary contains general information about what happened during the test:
Element | Description |
---|---|
Date & Time | Date and time the test ended. |
Duration | The duration of the test, including ramp up and tear down. |
API Vusers | A Vuser emulates a user's actions at the network API level. For example, HTTP requests and responses without rendering the UI. |
UI Vusers | A virtual user that emulates a user's actions on the UI level. For example, a button click. Since the UI is rendered for each Vuser, these Vusers consume significantly more processing resources. |
![]() | Toggle between the various analytics. |
Status | The status of the test. Note: If you selected General Settings > Group Transactions when you configured the load test, SLAs are calculated on the transaction groups and not on individual transactions. This means that a test can have a status of Passed, even though an individual transaction may have failed. |
Find out if anything went wrong with your test
The notification area alerts you to the problems that were found during the test run in the following categories:
Notification Type | Description |
---|---|
SLA warning | Lists transactions that broke the defined SLA threshold. |
Anomalies | Measurements that significantly changed their behavior. What does it tell me about my test? Each measurement is listed by rank, illustrating the likelihood that it caused the anomaly. The likelihood that a measurement caused the anomaly is determined by three criteria. |
Script errors | Shows the number of errors encountered while running the scripts in the test. Click the number to display a list of the errors, from which you can take further action. This feature has known limitations. |
LG Alerts | Lists scripts, for each load generator, that encountered problems. To add this information as a widget on the dashboard, click the add widget button. |
Splunk Alerts | Lists warnings about the number of script errors encountered during a load test that could not be sent to Splunk. Note: This tab is displayed only for load tests that have been configured to stream script errors to Splunk. |
Manage a large set of tests
From the Results page, you can group your load tests by the following columns:
- Test name
- Triggered by
- Status
- Date
You can also use the date picker to specify a date range and display the load tests that ran during that period.
Change how time is displayed in your results
You can view results in test or clock time.
When you change the time format, all measurements, including SLA warnings and anomalies, are displayed in the selected format.
- Test time. Time is displayed according to the duration of the test. For example, if you run a test for 20 minutes, 10:00 refers to ten minutes into the test run.
- Clock time. Time is displayed according to the server time during the test run. For example, 10:00 refers to 10:00 a.m.
Change how data is displayed in the dashboard or a widget
This section describes how you can change the layout and view of the dashboard and widgets.
Select the required button from the drop-down list to specify where to display the legend.
Widget view | Description |
---|---|
![]() | Toggle to display the graph legend on the left. |
![]() | Toggle to display the graph legend at the bottom. |
![]() | Toggle to hide the graph legend. |
Toggle the button to switch between the dashboard views.
Dashboard view | Description |
---|---|
![]() | Enables you to monitor several charts at once. |
![]() | Enables you to view measurement details. For details, see Widget layouts. |
From the widget toolbar, select the (graph) or (summary) view.
The summary format displays the following data:
Data point | Description |
---|---|
#Values | The number of data points in the data set. |
Avg | The average of the values in the data set. |
Last | The last value collected for the data set. |
Max | The highest value in the data set. |
Med | The median value of the data set. |
Min | The lowest value in the data set. |
STD | The amount of variation from the average. |
Note:
- If the number of metrics in a widget exceeds 300, results are displayed in table view only.
- If the number of metrics in a widget exceeds 1000, only 1000 results are displayed.
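For reference, here is how these summary values relate to a set of collected data points. This is a small, self-contained C sketch with made-up response times, not output from LoadRunner Cloud; the standard deviation is shown in its population form, which may differ slightly from the value the product reports:

```c
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

static int cmp_double(const void *a, const void *b)
{
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

int main(void)
{
    /* Sample response times in seconds -- illustrative values only. */
    double v[] = { 0.8, 1.2, 0.9, 1.5, 1.1 };
    int n = sizeof v / sizeof v[0];                 /* #Values */
    double sorted[sizeof v / sizeof v[0]];
    double sum = 0.0, min = v[0], max = v[0], last = v[n - 1];
    double mean, med, var = 0.0;
    int i;

    for (i = 0; i < n; i++) {
        sum += v[i];
        if (v[i] < min) min = v[i];                 /* Min */
        if (v[i] > max) max = v[i];                 /* Max */
        sorted[i] = v[i];
    }
    mean = sum / n;                                 /* Avg */

    qsort(sorted, n, sizeof(double), cmp_double);
    med = (n % 2) ? sorted[n / 2]                   /* Med: middle value */
                  : (sorted[n / 2 - 1] + sorted[n / 2]) / 2.0;

    for (i = 0; i < n; i++)
        var += (v[i] - mean) * (v[i] - mean);       /* STD: spread around Avg */

    printf("#Values=%d Avg=%.2f Last=%.2f Max=%.2f Med=%.2f Min=%.2f STD=%.2f\n",
           n, mean, last, max, med, min, sqrt(var / n));
    return 0;
}
```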
Configure what data is displayed in a widget
The following table summarizes display options.
Type | Description | Options |
---|---|---|
Filter | Displays or hides data in the widget from a data group. For example, if you have distributed your Vusers into two locations, California and Virginia, you can choose to hide the results from Virginia. | |
Split | Displays a line on the graph for each data group. For example, if you distributed your Vusers into two geographic locations, California and Virginia, selecting Split Location displays the results for California in one line and for Virginia in another. | |
Layers | Hides selected elements from the graph display. | |
Note: Not all metrics enable you to configure what data is displayed. For example, you cannot filter or split server metrics.
Zoom to a specific time frame in the test run
The time frame for which results are displayed is shown at the top of the dashboard.
You can change the time range using one of these options:
- Drag, stretch, or shrink the time slider.
- Select a preset time range (5 minutes, 15 minutes, 1 hour, All).
- Change the start and end time counters.
Configure graphs on the dashboard
Add a widget to the dashboard from the gallery.
- Click the add widget button to open the gallery.
- Select one of the following measurement types:
  - Client
  - Transaction
  - Monitor
  - Mobile (for TruClient-Native Mobile tests only)
  - Flex (for RTMP/RTMPT metrics only)
  - MQTT (for MQTT tests only)
  - User Data Points (for user-defined data points only; see the sketch after this procedure)
- Click a widget to add it to the dashboard.
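For the User Data Points type, the charted values come from data points that the script reports explicitly. The following is a minimal sketch, assuming a VuGen script; the metric name and value are illustrative:

```c
/* Action.c -- illustrative only. */
Action()
{
    double cache_hit_ratio;

    /* Compute or read a custom value; here it is simply hard-coded. */
    cache_hit_ratio = 0.93;

    /* lr_user_data_point reports a user-defined data point that the
       dashboard can chart under the User Data Points measurement type. */
    lr_user_data_point("cache_hit_ratio", cache_hit_ratio);

    return 0;
}
```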
Rearrange the dashboard
- To move a widget, drag and drop it to the selected area of the dashboard.
- To resize a widget, drag its bottom-right corner.
Compare two measurements in one widget
Compare in current run
Combine two measurements in one widget, from the same run, to visually compare the results.
- Select a widget, such as Running Vusers, on the dashboard.
- From the widget header, click the compare button. The widget gallery opens.
- From the gallery or the notification area, select a widget, such as Hits per Second, to merge.
- The two measurements are overlaid, each with its own set of filters.
- The original measurement is displayed as a solid line and the merged measurement is displayed as a dotted line.
Compare with a benchmark run
You can combine two measurements in one widget, one from the current run and one from a benchmark run, to visually compare the results.
- To specify a run as a benchmark, navigate to the Load Tests page, select a test, and click the Runs pane. Select the Benchmark radio button for the run you want to set as the benchmark for the load test. Note: In the dashboard, you can set the current load test as the benchmark by clicking (more options) and selecting Set as benchmark.
- From the dashboard, select a widget, such as Running Vusers.
- From the widget header, click the +Compare with Benchmark button.
- The two measurements are overlaid, each with its own set of filters.
- The current measurement is displayed as a solid line and the benchmark measurement is displayed as a dotted line.
If you want to separate the measurements, click the X from the benchmark widget header. For details, see Widget layouts.
Display load generator script assignment
For load tests run on on-premises load generators, you can display the load generators to which the scripts in the load test were assigned.
For details, see Download load generator script assignment.