LoadRunner Cloud's new report provides statistics and performance data for a test run.

Where do I find it?

  1. Navigate to the Load Tests tab, select a test, and click to open the Runs page. Alternatively, navigate to the Results page.

  2. For a selected test run, click (more options) and select Report.

  3. Switch on the New Dashboard & Report toggle.

Note: If the New Dashboard & Report toggle is not switched on, the old report is displayed. For details on the old report, see Former report.


Set the time frame to display in the report

The time frame for which report results are displayed is shown above the report.

You can change the time frame using one of these options:

  • Drag, stretch, or shrink the time slider.

  • Change the start and end time counters.


Report sections

You can choose which sections to display in the report by selecting the checkbox for a section in the sections list on the left. In most tables, you can sort the lists by any column.

The following table shows the various sections available in the report.

Section | Description

Summary

The report summary provides an overview of the test run, including its duration, status, and key performance statistics such as average throughput.

Top 10 Transactions

The following data is displayed for the top 10 transactions:

Passed and Failed Transactions

This graph shows the following data throughout the test duration:

  • Passed transactions per second
  • Running Vusers
  • Failed transactions per second

The average and maximum values for these metrics are displayed in a table under the graph.


Transactions

The following data is displayed for the transactions in your test's scripts:

Errors per Second

This graph shows the script errors per second.

The average and maximum values for this metric are displayed in a table under the graph.

Errors Summary

A list of errors encountered by Vusers during the load test run.

Throughput and Hits per second

This graph shows the following data throughout the test duration:

  • Hits per second
  • Running Vusers
  • Throughput

The average and maximum values for these metrics are displayed in a table under the graph.


Scripts

The Scripts section lists all the scripts that were included in the test definition, and their configuration.

Scripts Distribution

The Scripts Distribution section displays how the test Vusers were distributed geographically during the test run.
On-premises Load Generator Scripts

The table lists which scripts run on which on-premises load generators.

The On-premises Load Generator Scripts table shows:

  • Load Generator. The name of the on-premises Load Generator.

  • Script. The name of the script.

  • Vusers. The number of Vusers that ran the script.

On-premises Load Generator Script Errors

The table lists the top three errors for each script that occurred on on-premises load generators during the test run.

Note: This section is only available for scripts run on on-premises load generators version 3.3 or later.

HTTP Responses

The table lists the HTTP responses received during the test run and shows the total number received for each response code.
Pause Scheduling

The table lists the pause and resume schedule actions that occurred during the test run. For information about pausing a test, see Pause scheduling.



Percentiles

A percentile is a statistical measure that indicates the value below which a given percentage of observations in a group falls. In the context of test runs, the 90th percentile response time is the value that 90% of the measured response times do not exceed.

The following algorithms are commonly used to calculate percentiles:

Algorithm | Description
Average percentile over time | This method calculates the average percentile over time, not the percentile for the entire test duration. The percentile is calculated per interval (by default, 5 seconds), and the dashboard and report show an aggregate of the per-interval percentile values.
Optimal percentile (T-Digest) | An optimal (not averaged) percentile, based on the T-Digest algorithm introduced by Ted Dunning, as described in the Computing Accurate Quantiles using T-Digest abstract. This algorithm can calculate percentile values for large raw data sets, making it well suited to the data volumes generated by LoadRunner Cloud.
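The per-interval averaging used by the Average percentile over time method can be sketched as follows. This is an illustration only, not LoadRunner Cloud code; the bucketing scheme and the simple nearest-rank percentile are assumptions.

```python
from collections import defaultdict

def percentile(values, p):
    """Simple nearest-rank percentile on a sorted copy of the values."""
    arr = sorted(values)
    idx = min(len(arr) - 1, int(len(arr) * p / 100))
    return arr[idx]

def average_percentile_over_time(samples, p, interval=5.0):
    """Bucket (timestamp, value) samples into fixed-length intervals
    (5 seconds by default, matching the documented default), compute
    the percentile within each bucket, then average the results."""
    buckets = defaultdict(list)
    for ts, value in samples:
        buckets[int(ts // interval)].append(value)
    per_interval = [percentile(vals, p) for vals in buckets.values()]
    return sum(per_interval) / len(per_interval)
```

Note that the result is an average of per-interval percentiles, which generally differs from the percentile computed over the whole run.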

LoadRunner Cloud algorithm usage

For the Report:

  • Tenants created before May 2020 (version 2020.05):

    • Tests created before version 2020.10 (prior to October 2020), by default use the Average percentile algorithm to calculate the percentile.
    • Tests created with version 2020.10 or later (from October 2020), by default use the Optimal percentile (T-Digest) algorithm.
    • You can toggle the Optimal percentile (T-Digest) option on and off as described in Optimal percentile (T-Digest).
  • Tenants created in May 2020 and later (from version 2020.05) by default use the Optimal percentile (T-Digest) algorithm to calculate percentiles in all tests.

Note: For the Dashboard, LoadRunner Cloud uses the Average percentile over time algorithm.

Percentile calculation

Typically, percentiles are calculated on raw data, based on a simple process which stores all values in a sorted array. To find a percentile, you access the corresponding array component. For example, to retrieve the 50th percentile in a sorted array, you find the value of my_array[count(my_array) * 0.5]. This approach is not scalable since the sorted array grows linearly with the number of values in the data set. Therefore, percentiles for large data sets are commonly calculated using algorithms that find approximate percentiles based on the aggregation of the raw data.
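The raw-data approach described above can be sketched in a few lines (an illustration only, not LoadRunner Cloud code):

```python
def percentile_raw(values, p):
    """Naive percentile: store every value, sort, and index.
    Memory grows linearly with the number of samples, which is why
    large data sets need approximate algorithms such as T-Digest."""
    arr = sorted(values)  # O(n log n) time, O(n) memory
    # Index corresponding to the p-th percentile, e.g. count * 0.5
    # for the 50th percentile, clamped to the last element.
    idx = min(len(arr) - 1, int(len(arr) * p / 100))
    return arr[idx]
```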

LoadRunner Cloud uses the Optimal Percentile method, based on an algorithm known as T-Digest. This method was introduced by Ted Dunning in the Computing Accurate Quantiles using T-Digest abstract.

Differences in percentile calculations between LoadRunner family products

If you have similar tests running in LoadRunner Cloud and LRP or LRE, you may notice a difference in the response times, as shown by the percentile. These differences can result from the following:

  • LRP and LRE calculate percentiles based on raw data. For a fair comparison, make sure that you run your test in LoadRunner Cloud with the Optimal percentile (T-Digest) algorithm.
  • Think time settings should be consistent across your test runs. If tests in LRP or LRE include think time, while the same test in LoadRunner Cloud excludes it, you may see differences in the calculated percentile.
  • Even if your test in LoadRunner Cloud runs with the Optimal percentile, you may encounter discrepancies for small sets of TRT (transaction response time) values. This typically happens because T-Digest can output results that are not part of the actual dataset, by averaging between neighboring values, whereas other algorithms output only values from within the dataset.

    The following table shows the percentile calculation based on T-Digest using a small set of values:

    TRT values | Percentile | Value
    1,2,3,4,5,6,7,8,9,10 | 90th | 9.5
    1,2,3,4,5,6,7,8,9,10 | 95th | 10
    1,2,3,4,5,6,7,8,9,100 | 90th | 54.5
    1,1,2,3 | 50th | 1.5
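The table values above can be reproduced with a simple midpoint-interpolation estimator. This sketch is an illustration only, not the actual T-Digest implementation; the real algorithm clusters values into centroids, but it interpolates between data points in a similar spirit, which is how results such as 9.5 or 54.5, values not present in the dataset, can arise.

```python
import math

def midpoint_percentile(values, p):
    """Illustrative percentile estimator (not the real T-Digest):
    when the rank p/100 * n lands exactly on a sample boundary,
    average the two neighboring samples; otherwise take the next
    sample. Averaging can produce values not in the dataset.
    Assumes 0 < p < 100 and a non-empty input."""
    arr = sorted(values)
    k = p / 100 * len(arr)
    if k == int(k):
        k = int(k)
        return (arr[k - 1] + arr[k]) / 2
    return arr[math.ceil(k) - 1]
```

For example, on the dataset 1..10 the 90th percentile rank falls exactly between the samples 9 and 10, so the estimator averages them and returns 9.5, a value that never occurred.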


Add a custom snapshot of a dashboard graph to the report

Save a snapshot of a customized graph in the dashboard and display it in the report for the run.

Add a snapshot from the Dashboard

  1. Navigate to the new dashboard for a specific run.
  2. Customize a graph with the data you want to view.
  3. Click in the graph header to save the snapshot. The snapshot is added as a section in the report.

For details on working in the dashboard, see Dashboard.

Note: You can add a maximum of 10 snapshots to a report.

Configure a snapshot in the report

Use the following actions to configure a snapshot in the report:

Action | How to
Rename a custom snapshot | Double-click the snapshot name, then edit the text.
Exclude a snapshot from the report | Deselect the custom snapshot in the sections list on the left.
Remove a custom snapshot | Click the X at the top of a custom snapshot in the report viewer.


Add notes

When viewing a report, you can add notes to the individual report sections.

To add a note to a specific report section, click ADD NOTE at the bottom of the section. In the dialog box that opens, enter your note and save it.


Using report templates (beta)

The Reports window lets you save and load report templates. This capability lets you configure a report layout and save it as a template for future use, so you do not have to reconfigure the same layout for each run report.

The report templates are stored as assets of LoadRunner Cloud. To view the templates that are available for your tenant, navigate to Assets > Templates. For details, see Templates (beta).

The template name is shown above the list of the report sections. If there are differences between your current configuration and the template, they will be indicated by an asterisk and a yellow notation above the ellipsis.

To save a layout as a template:

  1. Configure your report to show the sections that interest you.
  2. Click the vertical ellipsis on the top of the sections list.
  3. In the Template management menu, select Save as.

To load a template for your report:

  1. Click the vertical ellipsis on the top of the sections list.
  2. In the Template management menu, select Choose template.

  3. Choose a template, and click Select.

Template actions

Perform an action using the Template management menu.

  • To discard your changes and revert to the layout of the chosen template, select Discard changes.
  • To overwrite the selected template with the current layout, select Save changes.
  • To save the current layout as a new template, select Save as.


Export report results

For details on exporting report results in different formats and for sending report results by email, see Run results actions.


See also: