LoadRunner Cloud's dashboard enables you to analyze the performance of a selected run for a load test.

Tip: Hover over a data point to display a tooltip that shows the metric name, the value, and the run ID.

Dashboard overview

LoadRunner Cloud's dashboard displays information about your load test run. Open the dashboard in one of the following ways:

  • Go to the Load Tests page, click on a test, and navigate to the Runs pane. Click on a run.

  • Open the Results page and click (more options) in the row of the selected test run. Click Dashboard.

Note: If the New Dashboard and Report toggle is not switched on, move the toggle to ON. For details about the old dashboard, see Former dashboard.

Four graphs are opened in the default dashboard view, one for each of the following metrics:

  • Running Vusers

  • Hits per second

  • Throughput

  • Errors

Add dashboard views

A dashboard view is contained in a tab. You can create up to 20 tabs for a load test run, with each tab showing a different dashboard.

For example, for run number 10 of a load test:

  • Tab 1 can show metrics A, B, and C and compare the selected run to another run (number 11).
  • Tab 2 can show metrics X, Y, and Z and compare the selected run to a different run (number 12).

The tabs are persistent for the current user and for the specific load test. This means that once you create a dashboard for a load test run, any tabs you create (both the initial tab and any additional ones) are applicable for all runs of the same load test.

Tab controls are located at the bottom of the dashboard. To create and work with tabs, do one of the following:

Action Method
Create a tab
  • Right-click an existing tab and select New tab.

  • Click .

The new tab that is created contains metrics for the latest run of the selected load test. If there is an active run of the load test (that is, it is currently running) then that is considered the latest run.

Rename a tab Right-click a tab and select Rename.
Duplicate a tab Right-click a tab and select Duplicate. The name of the new, duplicated tab is the same as the original tab name with the addition of -1. For example, if the original tab name is MyTab, the duplicated tab name is MyTab-1.
Close a tab
  • Right-click a tab and select Close.
  • Click the X next to the tab name.


  • A closed tab cannot be re-opened.

  • There must be at least one tab open.

A dashboard view comprises two panes. The left pane shows which metrics you are seeing and provides various configuration options. The right pane shows the actual data in multiple graphs.

Tip: Click TRX Summary at the top of the dashboard to open a new tab with a summary of the transactions. To return to the graphs displaying the selected metrics, click its tab (by default Tab -x).

Back to top

Choose metrics

Use the following actions to configure the metrics you want to include in the dashboard.

Action Description
Select or deselect a metric for viewing
  • To select a metric, click the empty cell to the left of the metric name. The icon denotes that a metric is selected and appears in the currently active graph. A grayed icon indicates that the metric is included in a graph that is currently inactive.

  • To deselect a metric, click the icon next to the metric name.

  • To hide all metrics, click Hide all above the metrics tree.

The number of selected metrics is displayed at the bottom of the Metrics pane.

Select or deselect sub-metrics by type

Right-click a metric name to display a menu showing the number of sub-metrics by type. For example, right-click a script name to show the number of related transactions, locations, emulations, and so forth. Select a metric type and then click Show or Hide to show or hide all the metrics of that type under the parent metric.

Note: When hiding sub-metrics, you can also select All to hide all the metrics under the parent metric, regardless of their type.

Search for a metric Enter a string in the search box and click the magnifying glass icon. The metrics whose name contains the string are displayed in the tree.
Open or close a metric node Click the arrow next to a metric name to open or close the tree for that specific metric.
Open all selected metrics Click Expand visible to open the tree for all selected metrics.
Resize value columns

Click next to the run number at the top of a Value column and select:

  • Autosize This Column (for one column)
  • Autosize All Columns (for all columns)
Highlight metrics

To highlight a metric, click on it. To highlight multiple metrics, use Ctrl+click or Shift+click.

The data lines for the selected metrics are also highlighted in the data graph.


  • When you select or deselect metrics, they are only added to or removed from the currently active graph.
  • Transaction response time breakdown metrics are not added to the currently active graph. Instead, each such metric is displayed in its own graph. You cannot add other metrics to these graphs.
  • You can include a combined total of 1,000 metrics in all of the graphs in the dashboard.

Empty data

Empty metrics are grayed out and displayed in italics (both in the metrics pane and in the graph and table legends). These are metrics that were carried over from a previous run, but for which there is no data. This may be due to a change in a script or an on-premises load generator that is no longer available. Alternatively, it may be a temporary state because data has not yet been generated for the metric.

Icon Description

Clears a single empty metric from the dashboard.

Clears all empty metrics. Alternatively, right-click a metric name and, from the menu displayed, select Clear empty metrics.

SLA breaches

Metrics that breach a configured SLA, or have anomalies, are further denoted as follows:

Icon Description
An SLA was breached, but the breach did not cause the load test run to fail.
An SLA was breached and the breach caused the load test run to fail. This icon also represents the detection of an anomaly together with an SLA breach (or a parent metric with an SLA breach and an anomaly for its children).
An anomaly was detected.

The percentage of SLA breaks for the metric is displayed in the Break column, and the anomaly rank and number of anomalies are displayed in the Anomaly column.

Back to top

Monitors and health metrics

The dashboard's Monitors and LG health monitoring nodes show metrics from several monitor types:

  • Server and application monitors

  • Windows on-premises load generator monitors

  • Cloud-based load generator system health

You can customize the columns to display as you would with other metrics. For example, you can show the minimum, average, maximum, and median values for each metric.

Server and application monitors

If server or application monitors were configured to work with your tenant, you will be able to select their metrics under the Metrics pane's Monitors node. These monitors include SiteScope, New Relic, Dynatrace, and AppDynamics. For details, see Monitors.

Windows on-premises load generator monitors

If your test ran on Windows on-premises load generators, LoadRunner Cloud captures and monitors resource usage. You can view these metrics in the Monitors node, even without any prior monitor configuration. The monitored metrics correspond to common Windows performance measurements. The displayed metrics are:

  • \LogicalDisk(_Total)\% Free Space

  • \LogicalDisk(_Total)\Current Disk Queue Length

  • \LogicalDisk(_Total)\Disk Read Bytes/sec

  • \LogicalDisk(_Total)\Disk Write Bytes/sec

  • \LogicalDisk(_Total)\Free Megabytes

  • \Memory\% Used

  • \Memory\Available Bytes

  • \Memory\Page Faults/sec

  • \Memory\Pages/sec

  • \Processor(_Total)\% Processor Time

  • \System\Processor Queue Length
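
These counter paths use the standard Windows performance counter format, \Object(Instance)\Counter. As a purely illustrative aside (this code is not part of LoadRunner Cloud), the following Python sketch splits such a path into its object, instance, and counter parts:

```python
import re

def parse_counter_path(path):
    """Split a Windows performance counter path of the form
    \\Object(Instance)\\Counter into (object, instance, counter).
    The instance part is optional, so it may come back as None."""
    m = re.match(r"\\([^\\(]+)(?:\(([^)]*)\))?\\(.+)", path)
    if not m:
        raise ValueError(f"Not a counter path: {path}")
    return m.groups()

print(parse_counter_path(r"\LogicalDisk(_Total)\% Free Space"))
# -> ('LogicalDisk', '_Total', '% Free Space')
print(parse_counter_path(r"\Memory\Pages/sec"))
# -> ('Memory', None, 'Pages/sec')
```

Note that paths such as \Memory\Available Bytes have no instance, which is why the sketch treats the parenthesized part as optional.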

Cloud-based load generator metrics

If your test ran on cloud-based load generators, metrics indicating the health of your load generator machines are available under the LG health monitoring node:

  • CPU

  • Memory

Note: The monitoring of cloud-based load generators is disabled by default. To enable it for your tenant, open a support ticket.

Back to top

Choose test runs

You can configure the dashboard to show a specific test run or to compare two test runs.

Show a specific test run

To change the test run displayed in the dashboard, navigate to the project node in the masthead to exit the dashboard. Open the Runs pane and select a different run.

Compare two load test runs

You can compare the selected run to another run of the same load test. The relevant metrics for the additional run are added to all the graphs in the dashboard.

To add a run for comparison, click and in the dialog box that opens do one of the following:

Action Description
Add a run

Add a non-benchmarked run for comparison.

  1. Select the radio button for the required run.
  2. Click Compare.
Add a benchmark run

Add the benchmarked run for the load test for comparison.

Click Compare to benchmark.

For details on setting a run as a benchmark, see Test details.

Note: If you change the benchmark to a different run of the load test, any dashboard that already contains a comparison to the benchmark is updated to reflect the new benchmark run. If you do not want this to happen, click Compare with run id instead of Compare to benchmark. This will add the currently benchmarked run for comparison, but it will not be updated in the event of a benchmark change.

Clear a run

Remove the run currently selected for comparison.

Click Clear Selection.

Click Apply.

When a run has been added for comparison, the run number or is displayed next to the Compare button. Click the run number or the Benchmark button to re-open the dialog box to change your selection.

Value, Break, and Anomaly columns for the added run are displayed in the metrics pane, showing the values for the selected metrics at the time point selected in the graph.

Once added, you can toggle the added run to hide or show it by clicking .

Back to top

Metrics pane columns

The Metrics pane displays a hierarchical tree of the metrics available for the selected load test run. You can also display specific columns in the Metrics pane. To choose columns, click next to the run number at the top and select the columns you want to display:

Column Description
Value The value of the metrics you select to view, for a specific time selected in the graph. When you move the time picker in the graph, the value changes accordingly.
Break The percentage of SLA breaks for the metric.

Anomaly The sum of the metric's anomaly ranks and, in parentheses, the number of anomalies.

The rank denotes the likelihood that the metric caused the anomaly. It is determined by three criteria:

  • Which data point was the earliest to deviate from the baseline?
  • How big was the deviation?
  • How long was the deviation?

The rank ranges from 0 to 5.
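
LoadRunner Cloud does not document how these three criteria are weighted. As a purely hypothetical sketch (the function name, weighting, and normalization below are illustrative assumptions, not the actual algorithm), a 0-5 rank could combine them like this:

```python
def anomaly_rank(deviation_start, deviation_size, deviation_length,
                 max_start, max_size, max_length):
    """Illustrative only: combines the three documented criteria --
    how early the deviation started, how big it was, and how long it
    lasted -- into a 0-5 score. Not LoadRunner Cloud's real formula."""
    earliness = 1 - deviation_start / max_start  # earlier deviation -> closer to 1
    size = deviation_size / max_size             # bigger deviation -> closer to 1
    length = deviation_length / max_length       # longer deviation -> closer to 1
    return round(5 * (earliness + size + length) / 3, 1)

# A metric that deviated early, strongly, and for a long time ranks near 5.
print(anomaly_rank(10, 90, 120, max_start=300, max_size=100, max_length=120))
# -> 4.8
```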

Back to top

Graph display

The currently active graph is denoted by a blue border. Click on a graph to make it the currently active one. When you select or deselect metrics, they are added to, or removed from, the currently active graph only. The graph's title comprises the names of the graph's selected metrics.

Note: Transaction response time breakdown metrics are not added to the currently active graph, but are added as a separate graph.

The graphs are line graphs, with a line for each selected metric in the graph. The lines are color coded and you can see the color key in the Value column in the metrics pane, and in the legend displayed at the bottom of the graph.

SLA breaches are shown as a red shaded area and the SLA threshold is denoted by a horizontal, dashed line.

Anomalies are shown as an orange line and when you click on an anomaly, an orange shaded area is displayed. The intensity of the orange color is determined by the rank of the anomaly (the higher the rank, the stronger the color).

The x-axis of the graph shows the load test run's time line and the y-axis shows the metric values. If you have selected multiple runs in the dashboard, by default the time line is:

  • that of the longest run, if the selected runs do not include an active run.

  • that of the active run, if the selected runs include an active run.

Back to top

Customize graph views

You can configure the view for all graphs or you can customize graphs individually.

Customize all graph views

Use the following actions to customize the view for all graphs:

Action Description
Show/Hide Metrics tree

To show or hide the Metrics pane, click on the Show/Hide metrics tree toggle button .

Enable or disable tooltips You can enable or disable tooltips by toggling the Show points details button.
Select the graph scale

To change the scale of the graphs, select one of the following options from the drop-down menu at the top right of the dashboard:

  • No scale. No scaling is applied to the y-axis; actual values are shown.
  • Scale. Uses a linear scale for the y-axis and shows actual values.
  • Log scale. Uses a logarithmic scale for the y-axis. This can be helpful when the data covers a large range of values, as it reduces the range to a more manageable size.

Note: Graphs for transaction response time breakdown metrics are not scaled.

Set a time mode

By default, the time mode used for all the graphs included in a dashboard tab is the test time. That is, the starting time of zero is the beginning of the test and the time frame is the elapsed time from the start of the test.

You can change the time mode to clock time so that the actual, local clock time is displayed instead.

To change between Test time and Clock time, select the required option from the Test time dropdown.


  • The time mode is persistent for the tab.
  • If you take a snapshot of a graph, the selected time mode is indicated in the snapshot.
Stop a test run

Note: This option is available for active runs only.

Click Stop Test to stop an active test run.

Customize individual graphs

To highlight a line in the graph, click on it. To highlight multiple lines, use Ctrl+click. The metrics for highlighted lines are also highlighted in the metrics pane.

Use the following actions to customize an individual graph's view:

Action Description
Show/Hide graph

You can hide or show the graph by toggling the Show/Hide graph button .

Show/Hide legend

You can hide or show the graph legend by toggling the Legend button.

When the legend is displayed, you can:

  • Click a column name to sort the legend table by that column.
  • Click the Options menu icon next to a column name to display a list of legend options.
Choose columns

Expand the Columns dropdown to select the columns to show in the table. Press Esc to close the column selection box.

Add to report

You can create a snapshot for a dashboard graph and include it in the load test run's report.

To create a snapshot and add it to the report, click in the toolbar of the graph you want to add. The snapshot is automatically added as a new section in the report. You can add a maximum of 10 snapshots to a report. For details, see Report.


Search Enter text to search for a specific metric.

Add, remove, and reposition graphs

Use the following actions to add, remove, and reposition graphs:

Action Description

Split horizontal

Click to split the current graph into two. The left-hand graph contains the original graph and the right-hand graph is a new, empty graph.

To populate the new, empty graph, click it to make it active and then select the metrics you want to appear in it.

Split vertical

Click to split the current graph into two. The top graph contains the original graph and the bottom graph is a new, empty graph.

To populate the new, empty graph, click it to make it active and then select the metrics you want to appear in it.

Close a graph Click to close an open graph.
Reposition a graph Click and drag a graph to reposition it in the dashboard view.

Display transaction summary data

You can display transaction summary data in a table using the following options. The table is displayed in its own pane, either in the current tab or in a new tab.

Display Action Outcome
Selected transactions for a run in an existing tab In the metrics pane, select the transactions you want to display. A new pane opens in the existing tab that includes a table showing summary data for the selected transactions.
All transactions for a run in an existing tab In the metrics pane, navigate to the Transactions > Summary node. Right-click Summary and then select Show > Transactions. A new pane opens in the existing tab that includes a table showing summary data for all the transactions included in the run.
All transactions for a run in a new tab Click the TRX Summary button. A new tab opens with the Transaction Summary table. It contains data from all transactions included in the run.
Note: Comparisons to other runs and time picker settings are not applicable to this table.

Note: The calculation of the transactions per second in the Transaction Summary table, TPS (avg), differs from the TPS rate shown in the widgets (graphs). The Transaction Summary table shows the average transactions per second since the start of the test run, while the widget shows the transactions per second over time, with each data point representing the average at a given time.
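
The difference between the two calculations can be sketched with some hypothetical numbers (these counts are invented for illustration and are not real LoadRunner Cloud output):

```python
# Hypothetical transaction counts completed in each 10-second interval
# of a short run.
counts = [20, 50, 80, 80, 50]
interval = 10  # seconds per data point

# Per-point TPS, as a widget graph plots it: each data point is the
# rate over its own interval.
per_point_tps = [c / interval for c in counts]

# Cumulative average TPS since the start of the run, as the
# Transaction Summary table's TPS (avg) reports it.
avg_tps = sum(counts) / (interval * len(counts))

print(per_point_tps)  # [2.0, 5.0, 8.0, 8.0, 5.0]
print(avg_tps)        # 5.6
```

The widget would show the rate peaking at 8.0 mid-run, while the table reports the single run-wide average of 5.6.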

Customize the time frame

Use the following actions to customize the time frame for the dashboard tab's graphs:

Action Description
View a specific time

Click within a graph to select a specific time in the run. The views of all the graphs are updated accordingly and the metric values displayed in the Value column in the metrics pane are also updated. The specific time selected is displayed in a time counter at the top left of the graph area.

Zoom to a specific frame

The time frame for which results are displayed is shown at the top of the graph. You can change the time range using one of these options:

  • Drag, stretch, or shrink the time slider.
  • Place the cursor in the graph area and drag it left or right to select a time range.
  • Manually change the start and end time counters located above the graphs.
Select a preset time frame

Note: This option is available for active runs only.

From the drop-down list to the right of the time slider, select a preset time frame to view. The time frame is the last selected number of minutes of the run.

Available options are 5, 15, 30, and 60 minutes. The default time frame is five minutes.

The graph is refreshed automatically every five seconds.

If you use the time slider to select a time frame, Manual appears in the preset box and the graph is no longer automatically refreshed.

Back to top

Configuration summary bar

A summary of the load test run configuration is displayed in a bar above the metrics pane and data graphs.

The following metrics are displayed in the configuration summary:

Metric Description
For a running test run:
Elapsed time The amount of time that the run has been active.
Running Vusers

The total number of virtual users (Vusers) either running in the test or in the initialization stage.

Throughput / sec The amount of data received from the server every second.
Transactions / sec The number of transactions completed per second.
Hits / sec The number of hits (HTTP requests) to the Web server per second.
Failed Vusers The number of Vusers that have stopped running or have stopped reporting measurements due to server or script errors.
Failed transactions The number of transactions that did not successfully complete.
Passed transactions The number of transactions that have completed successfully.
Run Mode The test's run mode. For details, see Run mode.
Think Time An indicator showing whether think time was included or excluded from the transaction response time calculations. For details, see Exclude think time from TRT calculation.
For a completed test run:
Date & Time

The date and time the test run started.


Duration The duration of the test run, including ramp up and tear down.

Planned Vusers The number of planned Vusers, by type, as configured in the load test.
Run mode The test's run mode. For details, see Run mode.

Back to top

Pause scheduling during a test run

You can configure a load test to enable you to pause the schedule while the test is running. This gives you more control over the test's workload.

You can pause the scheduling multiple times during a test run, but the total pause time cannot exceed the maximum time that you set in the General settings. To enable schedule pausing, select the load test's General settings > Pause scheduling up to check box. For details, see Pause scheduling.

  • To pause scheduling when viewing an active test run in the dashboard, click the Pause scheduling button on the toolbar. A counter displays the available time that the schedule can be paused during the run.
  • To resume, click .

In the test run's report, the Pause Scheduling table displays the pause and resume instances that occurred during the run. For details, see Report.

Back to top

Change the Vuser load dynamically

You can dynamically increase or decrease the number of running Vusers during the test run. In order to modify the number of running Vusers, make sure you have the Add Vusers option selected in the test's General settings. For details, see Add Vusers.

To change the number of running Vusers during a test run:

  1. Click the Change Load button .

  2. In the Change Load dialog box, enter the number of users that you want to assign to the displayed scripts. You cannot exceed the maximum number of users displayed.

  3. Click Apply.

When you remove Vusers, you are actually suspending them. If you later decide to add more Vusers, the suspended Vusers are added first.

If the script was created using VuGen, enable the Simulate a new user on each iteration option (Runtime Settings > Browser Emulation) to allow you to resume suspended Vusers during a test run.

Suspended Vusers will run a vuser_end() action at the conclusion of the test. This may cause unexpected behavior in the test results.

Note: You cannot change the load for scripts that contain rendezvous points.

Back to top

Runtime alerts

The Alerts section, on the right-hand side of the summary bar, alerts you to problems that were found during the test run. These alerts are visible both during and after test runs. For each problem category, the number of issues is displayed. The number of unread notifications, if there are any, is further displayed in a red badge.

To view the problems, click on a category name. A horizontal ellipsis indicates that a dropdown is available. A new window opens with a list of all the problems in the category. By default, the list is sorted by the problem start time. You can sort the list by any of the metrics included in the list by clicking a column header.

The following problem categories are displayed in the Alert bar:

Category Description Additional actions
SLA Warnings Lists transactions that broke the defined SLA threshold.
  • Add. This adds the metric to the currently active graph in the dashboard.
  • Add to new pane. This creates a new graph in the dashboard and adds the metric to it.

Note: When you add a problem's metric to the dashboard (either in an existing graph or in a new graph), the dashboard's time frame is automatically adjusted to that of the problem's time frame. This affects all the graphs included in the dashboard.


Anomalies

Lists measurements that significantly changed their behavior.

Click > to open an anomaly and view all the relevant metrics it contains.

Script errors

Shows the number of errors encountered while running the scripts in the test.

Known limitations:

  • The number of snapshots generated per test is limited to 20.
  • For each unique error id, only one snapshot is generated.

Additional actions:

  • Highlight an error and click Copy to place the error message on your clipboard.
  • For TruClient scripts, click Show snapshots to view a snapshot of the error.
  • Click the Add to Dashboard button to add the general error information as a table on the dashboard.
  • Click the Download all snapshots button to download all the error snapshots to a zip file. Each error snapshot is included as a separate file within the zip file.
LG Alerts

Lists scripts, for each load generator, that had any of the following problems:

  • Caused CPU or memory consumption to exceed 85% capacity for 60 seconds or more.
  • Caused the load generator's used disk space to reach 95% or higher.
  • Encountered connectivity problems.
Click the Add to Dashboard button to add this information as a new table on the dashboard.
Streaming Alerts

Lists the number of script errors encountered during a load test that could not be sent to Splunk.


  • This category is displayed only for load tests that have been configured to stream script errors to Splunk.
  • You can only view the number of alerts. You cannot click the number to display a list of the actual alerts.

Back to top

See also: