Former dashboard

During the test run, you can add widgets from the gallery to the dashboard to display real-time data detailing the results of your load test. This topic describes LoadRunner Cloud's former dashboard. For the new enhanced dashboard, see Dashboard.

You can also access the results after the test has completed.

Note: Before you run your load test, check that your monitor or monitoring tool is up and running and that LoadRunner Cloud is connected to it. For details, see Monitors.

Where do I find it?

  1. Navigate to the Load Tests page, select a test, and navigate to the Runs tab.

    or

    Navigate to the Results page.

  2. For a selected test run, click (more options) and select Dashboard.

  3. Switch off the New Dashboard & Report toggle.

Note: If the New Dashboard & Report toggle is switched on, the new dashboard is displayed. For details on the new dashboard, see Dashboard.

Back to top

Add a custom snapshot of a dashboard widget to the report

You can save a snapshot of a widget for a specific filter, split, and time period and display it in the report for the run.

To add a snapshot:

  1. Navigate to Results > Dashboard for a specific run.
  2. Zoom to a specific time frame in the test run.
  3. Configure what data is displayed in a widget.
  4. Click in the widget header to save the snapshot.

For details on viewing and configuring snapshots, see Preview the snapshot in the report.

Back to top

Pause scheduling during a test run

You can configure a load test to enable you to pause the schedule while the test is running. This gives you more control over the test's workload.

You can pause the scheduling multiple times during a test run, but the total pause time cannot exceed the maximum time that you set in the General settings. To enable schedule pausing, select the Pause scheduling up to check box in the load test's General settings. For details, see Pause scheduling.

  • To pause scheduling when viewing an active test run in the dashboard, click the Pause scheduling button on the toolbar. A counter displays the remaining time for which the schedule can be paused during the run.
  • To resume scheduling, click the resume button on the toolbar.

In the test run's report, the Pause Scheduling table displays the pause and resume instances that occurred during the run. For details, see Report.

Back to top

Change the Vuser load dynamically

You can dynamically increase or decrease the number of running Vusers during the test run. To modify the number of running Vusers, make sure that the Add Vusers option is selected in the test's General settings. For details, see Add Vusers.

To change the number of running Vusers during a test run:

  1. Open the Change Load dialog box.
  2. For any script displayed, change the number of current Vusers to the new number you want. You cannot exceed the maximum number of Vusers displayed.

    Example: For any of the scripts listed, you can enter any number between 0 and 20.

  3. Click Apply.

When you remove Vusers, you are actually suspending them. If you later decide to add more Vusers, the suspended Vusers are added first.

If the script was created using VuGen, enable the Simulate a new user on each iteration option (Runtime Settings > Browser Emulation) to allow you to resume suspended Vusers during a test run.

Suspended Vusers will run a vuser_end() action at the conclusion of the test. This may cause unexpected behavior in the test results.
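
For reference, a VuGen script is divided into init, action, and end sections, and vuser_end() is the section that runs when a Vuser finishes. The following is a minimal sketch of that structure; the comments and any cleanup logic are illustrative:

    // vuser_init: runs once when the Vuser starts (for example, logging in)
    vuser_init()
    {
        return 0;
    }

    // Action: runs repeatedly, once per iteration, for the duration of the test
    Action()
    {
        // Business process recorded in your script
        return 0;
    }

    // vuser_end: runs once when the Vuser finishes.
    // Suspended Vusers also run this section at the end of the test, so keep any
    // cleanup logic here tolerant of iterations that did not complete.
    vuser_end()
    {
        return 0;
    }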

Note: You cannot change the load for scripts that contain rendezvous points.
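
If you are not sure whether a script contains rendezvous points, look for calls to lr_rendezvous in its action sections. A minimal, illustrative sketch (the rendezvous name, step name, and URL are hypothetical):

    Action()
    {
        // Hold this Vuser at the rendezvous point until the release policy is met,
        // then continue together with the other waiting Vusers.
        lr_rendezvous("checkout_peak");    // hypothetical rendezvous name

        web_url("checkout",
                "URL=http://www.example.com/checkout",    // hypothetical URL
                LAST);

        return 0;
    }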

Back to top

Evaluate a test's status

During a test run, the test summary contains gauges displaying the current value of the following metrics:

  • Running Vusers. The total number of Vusers that are either running in the test or in the initialization stage. (A virtual user, or Vuser, emulates user actions in your application; you define how many Vusers perform the business process recorded in your script during the load test.)
  • Suspended Vusers. The number of Vusers that have been suspended during the test with the Change load feature.
  • Throughput. The amount of data received from the server every second.
  • Transactions per second. The number of transactions completed per second.
  • Hits per second. The number of hits (HTTP requests) to the web server per second.
  • Failed Vusers. The number of Vusers that have stopped running, or have stopped reporting measurements, due to a server or script error.
  • Failed Transactions. The number of transactions that did not complete successfully.
  • Successful transactions. The number of transactions that completed successfully.
  • Errors. The number of script errors.
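
Several of these gauges correspond to standard instrumentation in a VuGen C script. As a hedged sketch, transactions (counted in Transactions per second and in the Failed/Successful transactions gauges) and script errors (counted in the Errors gauge) are typically reported with calls such as the following; the transaction name, step name, and URL are illustrative:

    Action()
    {
        int status;

        lr_start_transaction("login");    // illustrative transaction name

        // Each HTTP request issued by this step contributes to Hits per second and Throughput.
        status = web_url("login_page",
                         "URL=http://www.example.com/login",    // illustrative URL
                         LAST);

        if (status != LR_PASS) {
            lr_error_message("Login page failed to load");      // adds to the Errors gauge
            lr_end_transaction("login", LR_FAIL);               // counted as a failed transaction
        } else {
            lr_end_transaction("login", LR_AUTO);               // pass/fail decided automatically
        }

        return 0;
    }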

After a test run, the test summary contains general information about what happened during the test:

  • Date & Time. The date and time the test ended.
  • Duration. The duration of the test, including ramp up and tear down.
  • API Vusers. A Vuser that emulates a user's actions at the network API level. For example, HTTP requests and responses without rendering the UI.
  • UI Vusers. A virtual user that emulates a user's actions at the UI level. For example, a button click. Since the UI is rendered for each Vuser, these Vusers consume significantly more processing resources.
  • (more options). Toggle between the various analytics.
  • Status. The status of the test:

      • Aborted. You stopped the test during initialization.
      • Stopped. You stopped the test and no SLAs were broken.
      • Failed. The test has ended and one or more SLAs were broken.
      • Halted. LoadRunner Cloud stopped the test due to a security limitation.
      • Passed. The test has ended without any failed transactions.
      • Running. The test is running.
      • System error. LoadRunner Cloud was unable to run the test due to a server error.

Note: If you selected General Settings > Group Transactions when you configured the load test, SLAs are calculated on the transaction groups and not on individual transactions. This means that a test can have a status of Passed, even though an individual transaction may have failed.

Back to top

Find out if anything went wrong with your test

The notification area alerts you to the problems that were found during the test run in the following categories:

SLA warning

Lists transactions that broke the defined SLA threshold.

Indicators show when an SLA warning has occurred, or when an SLA warning has been detected and you need to drill down on geographies to view it.

Anomalies

Lists measurements that significantly changed their behavior.

Indicators show when an anomaly has occurred, or when an anomaly has been detected and you need to drill down on geographies and scripts to view it.

What does it tell me about my test?

Each measurement is listed by rank, illustrating the likelihood that it caused the anomaly.

The likelihood that a measurement caused the anomaly is determined by three criteria:

  • Which data point was the earliest to deviate from the baseline?
  • How big was the deviation?
  • How long was the deviation?

Script errors

Shows the number of errors encountered while running the scripts in the test. Click the number to display a list of the errors.

From the list, you can:

  • Click to add the general error information as a widget on the dashboard.
  • Click the row of a specific error to add an Errors per second graph for the error to the dashboard.
  • For TruClient scripts, click to view a snapshot of the error.
  • Click to download all the error snapshots to a zip file. Each error snapshot is included as a separate file within the zip file.

Known limitations

  • The number of snapshots generated per test is limited to 20.
  • For each unique error ID, only one snapshot is generated.

LG Alerts

Lists scripts, for each load generator, that had any of the following problems:

  • Caused CPU or memory consumption to exceed 85% capacity for 60 seconds or more.
  • Caused the load generator's used disk space to reach 95% or higher.
  • Encountered connectivity problems.

You can add this information as a widget on the dashboard.

Splunk Alerts

Lists warnings about script errors that were encountered during a load test but could not be sent to Splunk.

Note: This tab is displayed only for load tests that have been configured to stream script errors to Splunk.

Back to top

Manage a large set of tests

On the Results page, you can group your load tests by the following columns:

  • Test name
  • Triggered by
  • Status
  • Date


You can also use the date picker to specify a date range and display only the load tests run during that period.

Back to top

Change how time is displayed in your results

You can view results in test or clock time.

When you change the time format, all measurements, including SLA warnings and anomalies, are displayed in the selected format.

  • Test time. Time is displayed according to the duration of the test. For example, if you run a test for 20 minutes, 10:00 refers to ten minutes into the test run.
  • Clock time. Time is displayed according to the server time during the test run. For example, 10:00 refers to 10:00 a.m.

Back to top

Change how data is displayed in the dashboard or a widget

This section describes how to change the layout and view of the dashboard and its widgets. It includes the following options:

Widget views

Select the required button from the drop-down list to specify where to display the graph legend:

  • Display the graph legend on the left.
  • Display the graph legend at the bottom.
  • Hide the graph legend.

Dashboard view

Toggle the view button to display the dashboard results in different views:

  • A view that enables you to monitor several charts at once.
  • A view that enables you to view measurement details. For details, see Widget layouts.

Graph or Summary view

From the widget toolbar, select the (graph) or (summary) view.

The summary format displays the following data:

  • #Values. The number of data points in the data set.
  • Avg. The average of the values in the data set.
  • Last. The last value collected for the data set.
  • Max. The highest value in the data set.
  • Med. The median value of the data set.
  • Min. The lowest value in the data set.
  • STD. The amount of variation from the average.

Note:  

  • If the number of metrics in a widget exceeds 300, results are displayed in table view only.
  • If the number of metrics in a widget exceeds 1000, only 1000 results are displayed.
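
For reference, STD in the summary view presumably denotes the standard deviation of the collected values; whether the sample or population form is used is not specified here. Assuming the population form, with N the number of values (#Values) and Avg their mean:

    \mathrm{STD} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\bigl(x_i - \mathrm{Avg}\bigr)^2},
    \qquad
    \mathrm{Avg} = \frac{1}{N}\sum_{i=1}^{N} x_i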

Back to top

Configure what data is displayed in a widget

The following options control which data is displayed in a widget.

Filter

Displays or hides data in the widget from a data group.

For example, if you have distributed your Vusers into two locations, California and Virginia, you can choose to hide the results from Virginia.

You can filter by:

  • Location
  • Script name
  • Emulation

Split

Displays a separate line on the graph for each data group.

For example, suppose you distributed your Vusers into two geographic locations, California and Virginia. By splitting on Location, the widget displays the results for California in one line and the results for Virginia in another.

You can split by:

  • Location
  • Script name
  • Error ID
  • Emulation
  • Transaction status
  • HTTP status code

Layers

Hides selected elements from the graph display. You can hide the following elements:

  • y-Axis
  • x-Axis
  • Baseline sleeve
  • SLA breaks
  • SLA line

Note: Not all metrics enable you to configure what data is displayed. For example, you cannot filter or split server metrics.

Back to top

Zoom to a specific time frame in the test run

The time frame for which results are displayed is shown at the top of the dashboard.

You can change the time range using one of these options:

  • Drag, stretch, or shrink the time slider.
  • Select a preset time range (5 minutes, 15 minutes, 1 hour, All).
  • Change the start and end time counters.

Back to top

Configure graphs on the dashboard

Add a widget to the dashboard from the gallery.

  1. Open the widget gallery.
  2. Select one of the following measurement types:

    • Client
    • Transaction
    • Monitor
    • Mobile (for TruClient-Native Mobile tests only)
    • Flex (for RTMP/RTMPT metrics only)
    • MQTT (for MQTT tests only)
    • User Data Points (for user-defined data points only)
  3. Click a widget to add it to the dashboard.
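
For the User Data Points measurement type listed above, the values typically come from lr_user_data_point calls in the script. A minimal, hedged example (the metric name and value are hypothetical):

    Action()
    {
        double queue_depth;

        // Obtain an application-specific value, for example parsed from a server response.
        queue_depth = 42.0;    // hypothetical value

        // Report it as a user-defined data point; it then appears under User Data Points.
        lr_user_data_point("order_queue_depth", queue_depth);    // hypothetical metric name

        return 0;
    }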

Rearrange the dashboard

  1. To move a widget, drag it to the required position on the dashboard and drop it.
  2. To resize a widget, drag its bottom-right corner.

Back to top

Compare two measurements in one widget

Compare in current run

Combine two measurements from the same run in one widget to visually compare the results.

  1. Select a widget, such as Running Vusers, on the dashboard.
  2. From the widget header, click the button. The widget gallery opens.
  3. From the gallery or the notification area, select a widget, such as Hits per Second, to merge.

    • The two measurements are overlaid, each with its own set of filters.
    • The original measurement is displayed as a solid line and the merged measurement is displayed as a dotted line.

Compare to a benchmark run

You can combine two measurements in one widget, one from the current run and one from a benchmark run, to visually compare the results.

  1. To specify a run as a benchmark, navigate to the Load Tests page, select a test, and open the Runs tab. Select the Benchmark radio button for the run that you want to set as the benchmark for the load test.

    Note: In the dashboard, you can set the current load test as the benchmark by clicking (more options) and selecting Set as benchmark.

  2. From the dashboard, select a widget, such as Running Vusers.
  3. From the widget header, click the +Compare with Benchmark button.

    • The two measurements are overlaid, each with its own set of filters.
    • The current measurement is displayed as a solid line and the benchmark measurement is displayed as a dotted line.

If you want to separate the measurements, click the X in the benchmark widget header. For details, see Widget layouts.

Back to top

Display load generator script assignment

For load tests run on on-premises load generators, you can display the load generators to which the scripts in the load test were assigned.

For details, see Download load generator script assignment.

Back to top

See also: