Understanding your results
LoadRunner Cloud allows you to see the results of your test runs in the graph dashboard or in a report view.
What is the dashboard?
This section describes the new dashboard, as shown by default in the LoadRunner Cloud Dashboard tab.
The new dashboard shows more information than the dashboard from earlier versions of LoadRunner Cloud. To view the older dashboard, set the toggle to OFF. For details, see Former dashboard.
The LoadRunner Cloud Dashboard tab displays graphs that can help you analyze the performance of a selected run for a load test.
The Dashboard tab consists of two panes.
- The left pane displays a list of metrics that you can choose to display.
- The right pane shows the actual graphs.
You can also set the scale and test duration that you want to view.
For details, see Dashboard.
What is the report view?
This section describes the new report, as shown by default in the LoadRunner Cloud Report tab. The new report shows more information than reports in earlier versions of LoadRunner Cloud. To view the older report, set the toggle to OFF. For details, see Former report.
LoadRunner Cloud's Report tab shows statistics and performance data for test runs.
The Report tab consists of two panes.
- The left pane displays a list of built-in reports that you can choose to display.
- The right pane shows the actual reports.
You can also order and sort the results.
The available reports are: Summary, Top 10 Transactions, Passed & failed Transactions per second, Errors per second, Errors Summary, Throughput & Hits per second, Scripts, Distribution, and HTTP Responses.
For details, see Report.
Service level agreements (SLAs) are specific goals that you define for your load test run. After a test run, LoadRunner Cloud compares these goals against performance related data that was gathered and stored during the course of the run, and determines whether the SLA passed or failed.
In LoadRunner Cloud you set the following goals:
- Percentile: The percentage of transactions expected to complete successfully, by default 90%.
- TRT (transaction response time) seconds percentile: The expected time it takes the specified percentage of transactions to complete, by default 3 seconds. If more than the allowed percentage of transactions exceed this response time, the SLA is considered "broken" and shows a Failed status.
  For example, assume you have a transaction named "Login" in your script, where the 90th percentile TRT (seconds) value is set to 2.50 seconds. If more than 10% of the successful transactions that ended during a 5 second window have a response time greater than 2.50 seconds, the SLA is considered "broken" and shows a Failed status. For details about how the percentiles are calculated, see Percentiles.
- Percent of allowed failed transactions: The percentage of failed transactions allowed, by default 10%. If the percentage of failed transactions exceeds the SLA's value, the SLA is considered "broken" and shows a Fail status.
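To make the TRT pass/fail rule concrete, here is a minimal Python sketch of a percentile SLA check for the transactions that ended in a single 5-second window. The function name and window handling are hypothetical illustrations, not LoadRunner Cloud's actual implementation.

```python
def check_trt_sla(window_times, percentile=90, threshold=2.50):
    """Hypothetical TRT SLA check for one 5-second window.

    The SLA is 'broken' when more than (100 - percentile)% of the
    transactions that ended in the window exceed the threshold.
    """
    if not window_times:
        return "Passed"  # no transactions ended in this window
    over = sum(1 for t in window_times if t > threshold)
    allowed_pct = 100 - percentile
    return "Failed" if over / len(window_times) * 100 > allowed_pct else "Passed"
```

For instance, with the "Login" example above, a window where 2 of 10 transactions took longer than 2.50 seconds would exceed the allowed 10% and fail.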
You can set a separate SLA for each of your runs. You can also select multiple SLAs at one time and use bulk actions to apply one or more actions to the set. For details, see Configure SLAs.
A percentile is a statistical measure that marks the value below which a given percentage of the values in a group falls. In the context of test runs, if you check against the 90th percentile, you are looking for the response time within which 90% of the transactions in the run completed.
The following algorithms are commonly used to calculate percentiles:
- Average percentile over time: This method calculates the average percentile over time, not the percentile for the duration of the test. The percentile is calculated per interval (by default 5 seconds), and the dashboard and report show an aggregate of the per-interval percentile values.
- Optimal percentile (T-Digest): An optimal (not average) percentile, based on the T-Digest algorithm introduced by Ted Dunning, as described in the T-Digest abstract. This algorithm is capable of calculating percentile values for large raw data sets, making it ideal for the large data sets generated by LoadRunner Cloud.
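The per-interval idea behind the Average percentile over time method can be sketched as follows. This is an illustrative approximation (the bucketing, the simple index-based percentile, and the plain averaging are assumptions), not LoadRunner Cloud's actual code.

```python
from collections import defaultdict

def average_percentile_over_time(samples, interval=5.0, q=0.90):
    """Sketch: compute the percentile per time interval, then average
    the per-interval values as one simple form of aggregation.

    samples: iterable of (timestamp_seconds, response_time) pairs.
    """
    buckets = defaultdict(list)
    for ts, rt in samples:
        buckets[int(ts // interval)].append(rt)  # group into 5-second buckets
    per_interval = []
    for values in buckets.values():
        data = sorted(values)
        index = min(int(len(data) * q), len(data) - 1)  # simple index lookup
        per_interval.append(data[index])
    return sum(per_interval) / len(per_interval)
```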
LoadRunner Cloud algorithm usage
For the Dashboard, LoadRunner Cloud uses the Average percentile over time algorithm.
For the Report:
Tenants created before May 2020 (version 2020.05):
- Tests created before version 2020.10 (prior to October 2020), by default use the Average percentile algorithm to calculate the percentile.
- Tests created with version 2020.10 or later (from October 2020), by default use the Optimal percentile (T-Digest) algorithm.
- You can toggle the Optimal percentile (T-Digest) option on and off as described in Optimal percentile (T-Digest).
Tenants created in May 2020 and later (from version 2020.05) by default use the Optimal percentile (T-Digest) algorithm to calculate percentiles in all tests.
Typically, percentiles are calculated on raw data, based on a simple process which stores all values in a sorted array. To find a percentile, you access the corresponding array component. For example, to retrieve the 50th percentile in a sorted array, you find the value of
my_array[count(my_array) * 0.5]. This approach is not scalable since the sorted array grows linearly with the number of values in the data set. Therefore, percentiles for large data sets are commonly calculated using algorithms that find approximate percentiles based on the aggregation of the raw data.
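The index-lookup approach described above can be written in a few lines of Python; `count(my_array)` from the text becomes `len(data)` here. This is only a sketch of the naive method, shown to illustrate why it does not scale.

```python
def naive_percentile(values, q):
    """Exact percentile via index lookup in a fully sorted copy.

    Keeping every raw value in memory is what makes this approach
    unscalable: the sorted array grows linearly with the dataset.
    """
    data = sorted(values)
    index = min(int(len(data) * q), len(data) - 1)  # e.g. len(data) * 0.5 for the median
    return data[index]
```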
LoadRunner Cloud uses the Optimal Percentile method, based on an algorithm known as T-Digest. This method was introduced by Ted Dunning in the Computing Accurate Quantiles using T-Digest abstract.
Differences in percentile calculations between LoadRunner family products
If you have similar tests running in LoadRunner Cloud and LRP or LRE, you may notice a difference in the response times, as shown by the percentile. These differences can result from the following:
- LRP and LRE calculate the percentiles based on raw data. To do a fair comparison, make sure that you run your test in LoadRunner Cloud with the Optimal T-Digest algorithm.
- Think time settings should be consistent across your test runs. If tests in LRP or LRE include think time while the same test in LoadRunner Cloud excludes it, you may see differences in the calculated percentiles.
Even if your test in LoadRunner Cloud runs with the Optimal percentile, you may encounter discrepancies for small sets of TRT (transaction response time) values. Typically this happens because T-Digest can output results outside of the actual dataset, by averaging values in the dataset, whereas other algorithms may output only values that appear in the dataset.
The following table shows the percentile calculation based on T-Digest using a small set of values:
| TRT values | Percentile | Value |
|---|---|---|
| 1,2,3,4,5,6,7,8,9,10 | 90th | 9.5 |
| 1,2,3,4,5,6,7,8,9,10 | 95th | 10 |
| 1,2,3,4,5,6,7,8,9,100 | 90th | 54.5 |
| 1,1,2,3 | 50th | 1.5 |
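One way to see where such values come from: with so few samples, each value effectively acts as its own centroid, and a quantile that lands exactly between two sorted values is resolved by averaging them. The sketch below is a loose illustration of that interpolation behavior, not the real T-Digest implementation.

```python
def tiny_set_percentile(values, q):
    """Illustrative centroid-style interpolation for tiny datasets.

    When q * n lands exactly between two sorted values, average them;
    this is how results outside the dataset (such as 9.5 for the 90th
    percentile of 1..10) can appear.
    """
    data = sorted(values)
    n = len(data)
    pos = q * n
    k = int(round(pos))
    if abs(pos - k) < 1e-9 and 0 < k < n:
        return (data[k - 1] + data[k]) / 2  # midpoint of the two neighbors
    return data[min(int(pos), n - 1)]
```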