Reports

The report interface provides statistics and performance data for a test run.

Report overview

The Report page displays information about your load test run. Open the report in one of the following ways:

  • Go to the Load tests page, select a test, and go to the Runs pane. Select a run and click Report.

  • Open the Results page, and click the Report button in the row of the run whose report you want to generate.

By default, the Summary and Transactions report sections are displayed. For details, see Report sections.

Back to top

Set the report time frame

The time frame for which report results are displayed is shown above the report.

Report time frame banner

You can change the time range using one of these options:

  • Drag, stretch, or shrink the time slider.

  • Change the start and end time counters.

Back to top

Report sections

You can choose which sections to display in the report by selecting the checkbox for the section. In most tables, you can sort the lists by any column.

The following table shows the sections available in the report.

Section Description
Summary

The report summary provides an overview of the test run including duration, status, and important performance statistics such as average throughput.

The following data is displayed in the Summary section:

  • Run ID: The run number.

  • Status: The status of the test.

    • Aborted. You stopped the test during initialization.

    • Stopped. You stopped the test and no SLAs were broken.

    • Failed. The test has ended and one or more SLAs were broken.

    • Halted. The test was stopped due to a security limitation.

    • Passed. The test has ended without any failed transactions.

    • System error. Unable to run the test due to a server error.

  • Duration: The duration of the test, including ramp up and tear down.

  • Run Mode: The test's run mode. For details, see Run mode.
  • Think Time: An indication of whether think time was Included or Excluded in the TRT calculation. For details, see Exclude think time from TRT calculation.
  • Errors: The total number of errors.

  • LG Alerts: The total number of load generator alerts.

  • Date & Time: The date and time the test ended.
  • Percentile Algorithm: The algorithm used to calculate the percentile: Optimal or Average. For details, see Percentiles.
  • Percentile goal: The percentile goal used for calculating compliance with the SLA. For details, see Create a load test.
  • Workload:

    • Average Throughput: The average amount of data received from the server per second.
    • Total Throughput: The total amount of data received during the test.
    • Average Hits Per Sec: The average number of hits (HTTP requests) on the Web server per second.
    • Total Hits Per Sec: The total number of hits (HTTP requests) on the Web server during the test.
  • Transactions:

    • Transactions: The total number of transactions that Passed or Failed.
    • TRT Breakers: The total number of SLAs that Passed or Failed.

  • Vusers:

    • Vuser type: The number of Vusers that ran the test. Each Vuser type is listed in a separate row.
    • Failed Vusers: The total number of Vusers that failed to run the test.
Top 10 Transactions

The following data is displayed for the 10 transactions with the highest number of passes:

  • Script: The script in which the transaction is included.

  • Transaction: The transaction name.

  • SLA Status: The status of the SLA: Passed, Failed, or N/A (the transaction was not monitored).
  • TRT (Transaction Response Time)

    • Avg: The average time it took for a transaction to complete.

    • Min: The minimum time it took for a transaction to complete.

    • Max: The maximum time it took for a transaction to complete.

    • STD: The standard deviation for transaction response time.

  • TRT Percentile

    • Threshold: The transaction time threshold defined in Create a load test.

    • % Breakers: The percentage of transactions that broke the SLA threshold at any time in the test run.

    • 90th (default) or Xth: The 90th (or Xth) percentile of the TRT (transaction response time) for the transaction.

      The column title is the percentile used in the trend calculations, with X defined in Create a load test for the test run.

      For tests that ran with the optimal T-Digest algorithm, click the dropdown arrow to modify the percentile goal. Choose or manually enter a value from 0.1 to 99.9, optionally specifying one decimal place. Click Apply. The column header shows the new value and indicates Custom. When you change the percentile, the trend and benchmark values change to reflect your new values. All other items in the report remain unchanged and are calculated according to the run's original SLA. For details about the percentile algorithm, see Percentiles.

    • Run #<run_number>: The current run's trend. A down arrow indicates a reduction in the transaction time, while an up arrow indicates an increase in the transaction time.

      For example:

      Run 1: 90th percentile is 5 seconds, 90th percentile trend is 0.

      Run 2: 90th percentile is 6 seconds, 90th percentile trend is -0.20 (a 20% increase). For details about how this value is calculated, see the trend calculation sketch at the end of the Report sections table.

      Note: The grid does not show the Trend/Benchmark columns in several instances:

      For a T-Digest run:

      • Trend is not shown if all previous runs of the same test used a different algorithm than the current run.
      • Benchmark is not shown if the benchmark run (when specified) used a different algorithm than the current run.

      For a non-T-Digest run:

      • Trend is not shown if all previous runs of the same test used a different algorithm or SLA than the current run.
      • Benchmark is not shown if the benchmark run (when specified) used a different algorithm or SLA than the current run.
    • Benchmark (#<run_number>): If a benchmark run was set for the test, the current run is compared to the benchmark run and the value represents the deviation from the benchmark value. A down arrow indicates that the transaction time was less than the benchmark value. An up arrow indicates that the transaction exceeded the benchmark value. Note: If think time was excluded in the run, make sure that it is consistent with the trend/benchmark run settings, to ensure accurate results.
  • Passed/Failed TRX

    • Passed: The number of transactions that ended successfully without errors.

    • Failed: The number of transactions that failed to run until the end.

Passed and Failed Transactions

Passed and Failed Transaction graph example

This graph shows the following data throughout the test duration:

  • Passed transactions per second
  • Running Vusers
  • Failed transactions per second

The average and maximum values for these metrics are displayed in a table under the graph.

Transactions

The following data is displayed for the transactions in your test's scripts:

  • Transaction: The transaction name.

  • SLA Status: The status of the SLA: Passed, Failed, or N/A (the transaction was not monitored).
  • TRT (Transaction Response Time)

    • Avg: The average time it took for a transaction to complete.

    • Min: The minimum time it took for a transaction to complete.

    • Max: The maximum time it took for a transaction to complete.

    • STD: The standard deviation for transaction response time.

  • TRT Percentile

    • Threshold: The transaction time threshold defined in Create a load test.

    • % Breakers: The percentage of transactions that broke the SLA threshold at any time in the test run.

    • 90th (default) or Xth: The 90th (or Xth) percentile of the TRT (transaction response time) for the transaction.

      The column title is the percentile used in the trend calculations, with X defined in Create a load test for the test run.

      For tests that ran with the optimal T-Digest algorithm, click the dropdown arrow to modify the percentile goal. Choose or manually enter a value from 0.1 to 99.9, optionally specifying one decimal place. Click Apply. The column header shows the new value and indicates Custom. When you change the percentile, the trend and benchmark values change to reflect your new values. All other items in the report remain unchanged and are calculated according to the run's original SLA. For details about the percentile algorithm, see Percentiles.

    • #<run_number>: The current run's trend. A green down arrow indicates a reduction in the transaction time, while a red up arrow indicates an increase in the transaction time.

      For example:

      Run 1: 90th percentile is 5 seconds, 90th percentile trend is 0.

      Run 2: 90th percentile is 6 seconds, 90th percentile trend is -0.20 (a 20% increase).

      Note: The grid does not show the Trend/Benchmark columns in several instances:

      For a T-Digest run:

      • Trend is not shown if all previous runs of the same test used a different algorithm than the current run.
      • Benchmark is not shown if the benchmark run (when specified) used a different algorithm than the current run.

      For a non-T-Digest run:

      • Trend is not shown if all previous runs of the same test used a different algorithm or SLA than the current run.
      • Benchmark is not shown if the benchmark run (when specified) used a different algorithm or SLA than the current run.
    • Benchmark (#<run_number>): If a benchmark run was set for the test, the current run is also compared to the benchmark run and the value represents the deviation from the benchmark value. A down arrow indicates that the transaction time was less than the benchmark value. An up arrow indicates that the transaction exceeded the benchmark value.
  • Passed/Failed TRX

    • Passed: The number of transactions that ended successfully without errors.

    • Failed: The number of transactions that failed to run until the end.

Errors per Second

Errors per second graph example

This graph shows the errors per second.

The average and maximum values for this metric are displayed in a table under the graph.

Errors Summary

A list of errors encountered by Vusers during the load test run.

The following data is displayed for errors:

  • Script: The name of the script.

  • Error ID: Error ID number.

  • Error message: Message description.

  • Error count: The number of errors that occurred during the run of the test.

Throughput and Hits per second

Throughput and Hits per second graph example

This graph shows the following data throughout the test duration:

  • Hits per second
  • Running Vusers
  • Throughput

The average and maximum values for these metrics are displayed in a table under the graph.

Scripts

This grid lists all the scripts that were included in the test definition, and their configuration.

Distribution

This table shows how the test Vusers were distributed over load generators during the test run.

Monitors

This table shows resource usage for server and application Monitors that were configured.

For each metric, the table shows its average and maximum value.

LG health monitoring

This table shows system health and resource usage for both on-premises and cloud-based load generators.

The system health metrics include CPU, Disk (on-premises only), and Memory.

HTTP Responses

The table lists the HTTP responses received during the test run and shows the total number received for each response code.

Note: The average number of HTTP responses is an estimate based on the selected time frame.

OPLG Scripts

The table lists which scripts ran on which on-premises load generators.

The OPLG Scripts table shows:

  • Load Generator: The name of the on-premises load generator.

  • Script: The name of the script.

  • Vusers: The number of Vusers that ran the script.

OPLG Errors

The table lists the top three errors for each script that occurred on on-premises load generators during the test run.

Note: This section is only available for scripts run on on-premises load generators version 3.3 or later.

The following data is displayed for errors:

  • Script: The name of the script.

  • Error ID: Error ID number.

  • Error message: Message description.

  • Error count: The number of errors that occurred during the run of the test.

Pause Scheduling

The table lists the pause and resume schedule actions that occurred during the test run. For information about pausing a test, see Pause scheduling up to.

The following data is displayed for pause scheduling:

  • Status: The status of the schedule—progressing or paused.

  • Start time: The start time of the status.

  • Duration: The duration of the status.

  • Triggered by: The user that triggered the status.

  • Message: Any relevant message or additional information.

This table is only visible when Pause Scheduling was enabled in the Run configurations.
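
The trend values shown in the Top 10 Transactions and Transactions sections express the current run's deviation from the previous run's percentile value. The following is a minimal sketch of one way to compute such a deviation; the percentile_trend helper and its formula are assumptions inferred from the worked example above (5 seconds to 6 seconds gives -0.20), not the product's documented implementation:

    # Hypothetical helper: deviation of the current run's percentile value from
    # the previous (or benchmark) run's value. The sign convention matches the
    # example above: a negative value means the transaction time increased.
    def percentile_trend(previous_value, current_value):
        return (previous_value - current_value) / previous_value

    print(percentile_trend(5, 6))   # -0.2  (a 20% increase, shown with an up arrow)
    print(percentile_trend(5, 4))   # 0.2   (a 20% reduction, shown with a down arrow)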

Back to top

Percentiles

A percentile is a statistical measure that marks the value below which a given percentage of the values in a group falls. In the context of test runs, if you check against the 90th percentile of the transaction response time, you are looking at the response time that 90% of the transactions in the run did not exceed.

The following algorithms are commonly used to calculate percentiles.

Algorithm Description
Average percentile over time

This method calculates the average percentile over time, not the percentile for the duration of the test. The percentile is calculated per interval (by default 5 seconds). The dashboard and report show an aggregate of the percentile goals.

Optimal percentile (T-Digest)

An optimal (not average) percentile, based on the T-Digest algorithm introduced by Ted Dunning as described in the T-Digest abstract. This algorithm is capable of calculating percentile goals for large raw data sets, making it ideal for calculating percentiles for the large data sets generated by OpenText Core Performance Engineering.
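
As an illustration of the Average percentile over time approach, the following sketch computes the goal percentile per interval and then averages the per-interval results. The interval length, the bucketing, and the final averaging step are assumptions made for this example; the product's actual aggregation may differ:

    # Sketch: average percentile over time, under the assumptions stated above.
    from collections import defaultdict

    INTERVAL_SECONDS = 5  # default interval mentioned above

    def interval_percentile(values, p):
        # Exact percentile within a single interval: sort and index into the array.
        data = sorted(values)
        index = min(int(len(data) * p / 100), len(data) - 1)
        return data[index]

    def average_percentile_over_time(samples, p):
        # samples: iterable of (timestamp_in_seconds, transaction_response_time)
        buckets = defaultdict(list)
        for timestamp, trt in samples:
            buckets[int(timestamp // INTERVAL_SECONDS)].append(trt)
        per_interval = [interval_percentile(values, p) for values in buckets.values()]
        return sum(per_interval) / len(per_interval)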

Algorithm usage

For the Report:

  • Tenants created before May 2020 (version 2020.05):

    • Tests created before version 2020.10 (prior to October 2020) use the Average percentile algorithm by default to calculate the percentile.
    • Tests created with version 2020.10 or later (from October 2020) use the Optimal percentile (T-Digest) algorithm by default.
    • You can turn on the Optimal percentile (T-Digest) switch.
  • Tenants created in May 2020 or later (from version 2020.05) use the Optimal percentile (T-Digest) algorithm by default to calculate percentiles in all tests.

Note: For the Dashboard, the Average percentile over time algorithm is used.

Percentile calculation

Typically, percentiles are calculated on raw data, based on a simple process which stores all values in a sorted array. To find a percentile, you access the corresponding array component. For example, to retrieve the 50th percentile in a sorted array, you find the value of my_array[count(my_array) * 0.5]. This approach is not scalable because the sorted array grows linearly with the number of values in the data set. Therefore, percentiles for large data sets are commonly calculated using algorithms that find approximate percentiles based on the aggregation of the raw data.
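
A minimal sketch of this naive approach, using hypothetical sample data (the exact_percentile helper and the response times are illustrative only):

    # Naive exact percentile: store every raw value, sort, and index into the array.
    # Memory grows linearly with the number of samples, which is why large data sets
    # are handled with approximation algorithms such as T-Digest instead.
    def exact_percentile(samples, p):
        my_array = sorted(samples)
        index = int(len(my_array) * p / 100)   # e.g. 50th -> my_array[count(my_array) * 0.5]
        return my_array[min(index, len(my_array) - 1)]

    response_times = [0.8, 1.2, 0.9, 3.4, 1.1, 2.0, 0.7, 1.6]  # sample TRT values, in seconds
    print(exact_percentile(response_times, 50))   # 1.2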

The Optimal percentile method is based on an algorithm known as T-Digest, introduced by Ted Dunning in the Computing Accurate Quantiles using T-Digest abstract.

Recalculate the percentile

After a test run, you can recalculate the percentile for the transaction response time.

In the Transactions or Top 10 Transactions sections, expand the Percentile dropdown, select a new percentile from 1 through 99, and click Apply. The values in the grid change accordingly. For details, see Report sections.

Differences in calculations between OpenText Performance Engineering products

If you run similar tests in OpenText Core Performance Engineering and in OpenText Professional Performance Engineering or OpenText Enterprise Performance Engineering, you may notice a difference in the response times, as represented by the percentile. These differences can be a result of the following:

  • OpenText Professional Performance Engineering and OpenText Enterprise Performance Engineering calculate percentiles based on raw data. To make a fair comparison, make sure that you run your OpenText Core Performance Engineering test with the Optimal T-Digest algorithm.
  • The think time setting should be consistent across your test runs. If tests in other products include think time, while the same test in OpenText Core Performance Engineering excludes think time, you may see differences in the calculated percentile.
  • Even if your test in OpenText Core Performance Engineering runs with the Optimal percentile, you may encounter discrepancies for small sets of TRT (transaction response time) values. Typically this happens because T-Digest can output results that are not part of the actual data set (it averages between values in the data set), whereas other algorithms may output only values taken from the data set.

    The following table shows the percentile calculation based on T-Digest using a small set of values.

    TRT values Percentile Value
    1,2,3,4,5,6,7,8,9,10 90th 9.5
    1,2,3,4,5,6,7,8,9,10 95th 10
    1,2,3,4,5,6,7,8,9,100 90th 54.5
    1,1,2,3 50th 1.5
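
    The results in this table can be reproduced with a simple midpoint rule: when the requested rank falls exactly on the boundary between two sorted samples, the reported value is the average of those two samples. The following sketch only illustrates that behavior for tiny data sets and integer percentiles; it is not the actual T-Digest implementation:

        # Illustration only: for the very small data sets shown above, the reported
        # T-Digest values match a midpoint rule between adjacent sorted samples.
        def small_set_percentile(values, p):
            data = sorted(values)
            n = len(data)
            rank_times_100 = p * n                     # rank expressed as p * n / 100
            if rank_times_100 % 100 == 0 and 0 < rank_times_100 // 100 < n:
                i = rank_times_100 // 100
                return (data[i - 1] + data[i]) / 2     # average the two straddling values
            i = -(-rank_times_100 // 100)              # ceiling of p * n / 100
            return data[min(max(i, 1), n) - 1]

        print(small_set_percentile(list(range(1, 11)), 90))          # 9.5
        print(small_set_percentile(list(range(1, 11)), 95))          # 10
        print(small_set_percentile(list(range(1, 10)) + [100], 90))  # 54.5
        print(small_set_percentile([1, 1, 2, 3], 50))                # 1.5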

Back to top

Add a dashboard graph custom snapshot

Save a snapshot of a customized graph in the dashboard and display it in the report for the run.

Add a snapshot from the Dashboard

  1. Go to the new dashboard for a specific run.
  2. Customize a graph with the data you want to view.
  3. Click the Add to report button in the toolbar of the graph to save the snapshot. The snapshot is added as a section in the report.

For details on working in the dashboard, see Dashboard.

Note: You can add a maximum of 10 snapshots to a report.

Configure a snapshot in the report

Use the following actions to configure a snapshot in the report.

Action Task
Rename a custom snapshot
  1. Double-click the snapshot name.
  2. Edit the text.
Exclude a snapshot from the report
  Deselect the custom snapshot in the sections list.
Remove a custom snapshot
  Click the X at the top of a custom snapshot in the report viewer.

Back to top

Add notes

When viewing a report, you can add notes to an entire report or to individual report sections.

To add a note to an entire report:

  1. In the Results tab, hover over the run whose report you want to edit.

  2. In the run's row, click the More options button.

  3. Click Add note. In the Add note dialog box, enter text and click Save.

  4. To edit a saved note, click the More options button > Edit note.

To add a note to a specific section in a report:

  1. From the Results tab, click the Report button to open a run's report.

  2. At the bottom of the section for which you want to add the note, click Add note.

  3. Add your note and press Enter. When you export or email the report, the note is included.

Back to top

Create report templates

The Reports window lets you save and load report templates. This capability lets you configure a report layout and save it as a template for future use. Using templates prevents you from having to reconfigure the same layout for each run report.

The report templates are stored as assets. To view the templates that are available for your tenant, go to Assets > Templates. For details, see Templates and Report template demos.

The template name is shown above the list of the report sections. If there are differences between your current configuration and the template, they are indicated by an asterisk and a notation above the ellipsis.

To save a layout as a template:

  1. Configure your report to show the sections that interest you.
  2. Click the Report templates button at the top of the sections list.
  3. In the Template management menu, select Save as.

To load a template for your report:

  1. Click the Report templates button at the top of the sections list.
  2. In the Template management menu, select Select template.

  3. Choose a template, and click Select.

    Select template dialog box

Template actions

Click the Template Management button to perform one of the following actions:

  • To discard the differences between your layout and the chosen template, and use only the template's layout, select Discard changes.

  • To overwrite the selected template with the current layout, select Save changes.

  • To save the current layout as a new template, select Save as.

Back to top

Update template with transaction changes

You can modify an existing report template to reflect transaction changes in your test scenario. This includes:

  • Removing empty metrics from the template. For example, transaction data from the original test that was used to build the template.

  • Adding additional transactions to a template. For example, if you created a snapshot for a dashboard graph for the current run, and you want to include this in the template.

  1. Go to the dashboard for the specific run.

  2. To delete the empty metrics from the original script, click Delete empty metrics above the metrics pane.

    Delete empty metrics

  3. Customize the graph with the data you want to view, and click the Add to report button in the graph's toolbar.

  4. Add sections for the new scenario to the template.

    1. Open the Report tab, select the sections that you want to apply to the template, and click Add to template.

      Add to template example

    2. In the Template Management menu (at the top of the Sections list), click the Report templates button, and select Select template.

    3. Choose a template and click Select to load a template for your report.

    4. In the Template Management menu, click the Report templates button and select Save changes to apply the current layout to the selected template.

Back to top

Exporting and sharing

For details on exporting report results in different formats and sending them by email, see Run results actions.

Back to top

See also: