Trend reports

LoadRunner Enterprise trend reports enable you to compare performance test run data over time, thereby giving you better visibility and control of your application's performance.

Overview

A trend report lets you view changes in performance from one performance test run to another, or across several test runs. By analyzing these changes, you can easily identify improvements or regressions in a measurement's performance.

You can define a trend report yourself, or you can configure automatic trending. Automatic trending makes the process simpler and more efficient by automating the collate, analyze, and publish steps, so that you do not have to wait for each step to finish before starting the next.

For details, see Define a trend report and Configure automatic trending.

Comparison methods to identify trends

By comparing the same measurement in more than one instance of a test run, you can identify whether its performance trend is improving or regressing.

For example, if you are tracking the trend of the transaction response time measurement, the trend report clearly shows whether this value increased (a performance regression) or decreased (a performance improvement) from one run to the next over several instances of a test run.

There are two methods of comparing measurements contained in a performance test run for identifying performance trends: Compare to Baseline and Compare to Previous.

  • Compare to Baseline. You select one performance test run in the trend report and define it as the baseline. All measurements in the report are then compared to the measurements contained in the baseline.

  • Compare to Previous. All measurements in a performance test run are compared to the measurements in the run that immediately precedes it in the report.

It is important to understand the difference between the two comparison methods. The following example illustrates how the same data can yield different results depending on the method you select.

As shown in the table below, the average Transaction Response Time measurement is being trended from four performance test runs: 3, 4, 5, and 6.

Transaction Response Time - Average (Compare to baseline)

Name     Type   run 3 [Base]   run 4             run 5             run 6
All      TRT    4.567          1.22 (-73.29%)    2.32 (-49.2%)     12.455 (+172.72%)
TRX_01   TRT    2.045          4.073 (+99.17%)   2.035 (-0.49%)    1.05 (-48.66%)
TRX_02   TRT    1.045          2.07 (+98.09%)    1.015 (-2.87%)    1.051 (+0.57%)
TRX_03   TRT    3.053          3.067 (+0.46%)    2.009 (-34.2%)    2.654 (-13.07%)
TRX_04   TRT    6.055          6.868 (+13.43%)   5.011 (-17.24%)   7.05 (+16.43%)

Performance test run (PT) 3 has been defined as the baseline (indicated by [Base] in the table). The average transaction response times contained in the other performance test runs are compared to PT 3 only.

In PT 3, the average transaction response time for TRX_01 was 2.045. The average transaction response time for the same transaction in PT 5 was 2.035, which represents a slightly faster response time and therefore a slight improvement in this measurement's performance. The percentage difference between the two figures is displayed in parentheses, in this case -0.49%.

However, if the Compare to Previous comparison method was selected, then the average transaction response time in PT 5 would be compared not to PT 3, but rather to PT 4 (because 4 precedes it in the table). The value for PT 4 is 4.073 while for PT 5 it's 2.035, a percentage difference of -50.04%.
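To make the arithmetic explicit, the following minimal sketch (Python, purely illustrative and not part of LoadRunner Enterprise) computes both percentage differences for the TRX_01 value in PT 5:

```python
# Minimal sketch (not a LoadRunner Enterprise API) showing how the two
# comparison methods produce different percentages for the same value.
def percent_diff(current, reference):
    """Percentage difference of `current` relative to `reference`."""
    return (current - reference) / reference * 100

# TRX_01 average transaction response times from the example above.
run_3, run_4, run_5 = 2.045, 4.073, 2.035

print(f"Compare to Baseline: {percent_diff(run_5, run_3):+.2f}%")  # -0.49%
print(f"Compare to Previous: {percent_diff(run_5, run_4):+.2f}%")  # -50.04%
```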

Transaction Response Time - Average (Compare to previous run)

Name     Type   run 3 [Base]   run 4             run 5             run 6
All      TRT    4.567          1.22 (-73.29%)    2.32 (+90.16%)    12.455 (+436.85%)
TRX_01   TRT    2.045          4.073 (+99.17%)   2.035 (-50.04%)   1.05 (-48.4%)
TRX_02   TRT    1.045          2.07 (+98.09%)    1.015 (-50.97%)   1.051 (+3.55%)
TRX_03   TRT    3.053          3.067 (+0.46%)    2.009 (-34.5%)    2.654 (+32.11%)
TRX_04   TRT    6.055          6.868 (+13.43%)   5.011 (-27.04%)   7.05 (+40.69%)

Using exactly the same data, the two comparison methods yield very different results for TRX_01 in PT 5: only a slight improvement with the Compare to Baseline method (-0.49%), but a much more significant improvement with the Compare to Previous method (-50.04%).
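The sketch below generalizes this idea: the same series of run averages produces different percentages depending on the comparison method. The trend function here is an illustrative assumption, not a LoadRunner Enterprise API; it reproduces the "All" row from the two tables above.

```python
# Illustrative sketch only: how the same series of averages yields different
# trend percentages under each comparison method. The function name and
# structure are assumptions for this example, not a LoadRunner Enterprise API.
def trend(averages, method="baseline"):
    """Return percent differences for the second run onward in `averages`.

    method="baseline" compares every run to averages[0];
    method="previous" compares every run to the run immediately before it.
    """
    deltas = []
    for i in range(1, len(averages)):
        reference = averages[0] if method == "baseline" else averages[i - 1]
        deltas.append((averages[i] - reference) / reference * 100)
    return deltas

# "All" transaction response time averages for runs 3, 4, 5, and 6 (see tables above).
all_trt = [4.567, 1.22, 2.32, 12.455]

print([f"{d:+.2f}%" for d in trend(all_trt, "baseline")])  # ['-73.29%', '-49.20%', '+172.72%']
print([f"{d:+.2f}%" for d in trend(all_trt, "previous")])  # ['-73.29%', '+90.16%', '+436.85%']
```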

For task details, see View and manage test runs in a trend report.

Back to top

Trend thresholds

To identify significant improvements or regressions in performance, you can define unique thresholds to track differentials between measurements being compared. If a differential exceeds a defined threshold, that value appears in a predetermined color, identifying it as an improvement, minor regression, or major regression.

For example, if you define an improvement threshold for comparing transaction response times as 50%, then any transaction response time that is 50% lower than that of the baseline or previous run (depending on the comparison method) is displayed in the color you defined for improvements.

In the example below, the following performance thresholds for the transaction response time (TRT) measurement have been defined:

  • Improvement. At least 90% decrease

  • Major Regression. At least 50% increase

These threshold definitions mean that any performance improvement or regression that exceeds these percentages is displayed in color, making it easier to identify.

In the following table, the Compare to Previous comparison method is used.

Transaction Response Time - Average (Compare to previous run)

Name                 Type   run 3 [Base]   run 4             run 5
Action_Transaction   TRT    0.002          0.94 (+46900%)    0 (-100%)
All                  TRT    0.002          0.311 (+15450%)   0 (-100%)

In the table above, the value of the TRT measurement for the Action_Transaction in test run 4 is 46900% higher than in test run 3 - a performance regression which far exceeds the defined threshold for major regressions. Therefore, the value appears in red, the default color for major regressions.

The corresponding value for test run 5 represents a 100% improvement on test run 4. Since this percentage exceeds the defined threshold for improvements, the value appears in green, the default color for improvements.
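As a rough illustration of this logic, a differential could be classified against the example thresholds as shown below. The function, category labels, and colors are assumptions for this sketch, not LoadRunner Enterprise APIs; a minor regression cutoff could be added in the same way.

```python
# Hedged sketch of the threshold logic described above. The thresholds match
# the example (90% decrease for improvements, 50% increase for major
# regressions); everything else is illustrative.
IMPROVEMENT_THRESHOLD = -90.0      # at least a 90% decrease
MAJOR_REGRESSION_THRESHOLD = 50.0  # at least a 50% increase

def classify(percent_diff):
    """Map a percentage differential to the highlight it would receive."""
    if percent_diff <= IMPROVEMENT_THRESHOLD:
        return "improvement (green)"
    if percent_diff >= MAJOR_REGRESSION_THRESHOLD:
        return "major regression (red)"
    return "within thresholds (no highlight)"

print(classify(46900.0))  # major regression (red)
print(classify(-100.0))   # improvement (green)
print(classify(-34.5))    # within thresholds (no highlight)
```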

For task details, see Define trend thresholds.

Back to top

Custom measurement mapping

The Custom Measurement Mapping feature lets you reconcile inconsistent transaction or monitor names between performance test runs, so that these measurements can be trended properly.

For example, you would typically use the Custom Measurement Mapping feature when a transaction is renamed between test runs, or when the same resource is monitored under different measurement or host names in different runs. In both cases, mapping the inconsistent names to a single custom name lets the measurements be trended together, as shown in the sketch below.
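The following minimal sketch (Python, purely illustrative; the names and mapping are hypothetical, not LoadRunner Enterprise APIs) shows the underlying idea: inconsistent measurement names are translated to a single canonical name before the runs are compared.

```python
# Illustrative only: unify inconsistent measurement names before trending.
# The mapping and measurement names here are hypothetical examples.
CUSTOM_MAPPING = {
    "login_trx": "Login",          # transaction renamed between runs
    "Login_Transaction": "Login",  # same transaction under its older name
}

def canonical_name(measurement_name):
    """Return the mapped (canonical) name used for trending."""
    return CUSTOM_MAPPING.get(measurement_name, measurement_name)

# Measurements from both runs now trend under the single name "Login".
print(canonical_name("login_trx"))          # Login
print(canonical_name("Login_Transaction"))  # Login
```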

For task details, see Define and customize mapped measurements.

For details on transaction names, see Transaction naming conventions.

Measurement acronyms

The following table lists all the measurement acronyms that might be used in the trend report:

Data Type                          Full Name                                   Initials
Vusers                             Running Vusers                              VU
Errors                             Errors                                      ERR
Transactions                       Transaction Response Time                   TRT
                                   Transaction Per Second                      TPS
                                   Transaction Summary                         TRS
Web Resources                      Hits per Second, Throughput, Connections    WEB
User Defined Data Points           User Defined Data Points                    UDP
System Resources                   Windows Resources                           WIN
                                   UNIX Resources                              UNX
                                   Server Resources                            SRVR
                                   SNMP                                        SNMP
                                   SiteScope                                   SiS
Web Server Resources               Apache                                      APA
                                   MS IIS                                      IIS
Web Application Server Resources   MS ASP                                      ASP
Database Server Resources          Oracle                                      ORA
                                   MS SQL                                      SQL
J2EE                               Server Request                              J2EE
.NET                               Server Request                              NET
Additional Components              COM+                                        COM
                                   .NET                                        NET
Application Deployment Solution    Citrix MetaFrame XP                         CTRX

Back to top

See also: