Trend reports
LoadRunner Enterprise trend reports enable you to compare performance test run data over time, thereby giving you better visibility and control of your application's performance.
Overview
A trend report lets you view changes in performance from one performance test run to another, or across several performance test runs. By analyzing these changes, you can easily identify improvements or regressions in a measurement's performance.
You can define a trend report, or you can configure automatic trending. Automatic trending makes the process simpler and more efficient by automating the collate, analyze, and publish steps, so that you do not have to wait for each step to finish before continuing with the next.
For details, see Define a trend report and Configure automatic trending.
Comparison methods to identify trends
By comparing the same measurement in more than one instance of a test run, you can identify whether its performance trend is improving or regressing.
For example, if you are interested in the performance trend of the transaction response time measurement, the trend report clearly displays whether, over several instances of a test run, this value has increased or decreased from one run to the next (a performance regression or improvement, respectively).
There are two methods of comparing measurements contained in a performance test run for identifying performance trends: Compare to Baseline and Compare to Previous.
| Comparison Method | Description |
|---|---|
| Compare to Baseline | You select one performance test run in the trend report and define it as the baseline. All measurements in the report are then compared to the measurements contained in the baseline. |
| Compare to Previous | All measurements in a performance test run are compared to the measurements in the performance test run that immediately precedes it in the report. |
It is important to understand the difference between the two comparison methods. The following example illustrates how the same data can yield different results depending on the method you select.
As shown in the table below, the average Transaction Response Time measurement is being trended from four performance test runs: 3, 4, 5, and 6.
Transaction Response Time, Average (Compare to Baseline)

| Name | Type | run 3 [Base] | run 4 | run 5 | run 6 |
|---|---|---|---|---|---|
| All | TRT | 4.567 | 1.22 (-73.29%) | 2.32 (-49.2%) | 12.455 (+172.72%) |
| TRX_01 | TRT | 2.045 | 4.073 (+99.17%) | 2.035 (-0.49%) | 1.05 (-48.66%) |
| TRX_02 | TRT | 1.045 | 2.07 (+98.09%) | 1.015 (-2.87%) | 1.051 (+0.57%) |
| TRX_03 | TRT | 3.053 | 3.067 (+0.46%) | 2.009 (-34.2%) | 2.654 (-13.07%) |
| TRX_04 | TRT | 6.055 | 6.868 (+13.43%) | 5.011 (-17.24%) | 7.05 (+16.43%) |
Performance test run (PT) 3 has been defined as the baseline (as indicated by the word Base in parentheses). The average transaction response times contained in the other performance test runs are compared to PT 3 only.
In PT 3, the average transaction response time for TRX_01 was 2.045. The average transaction response time for the same transaction in PT 5 was 2.035, which represents a slightly faster response time and therefore a slight improvement in the performance of this measurement. The percentage difference between the two values is displayed in parentheses, in this case -0.49%.
However, if the Compare to Previous comparison method were selected, the average transaction response time in PT 5 would be compared not to PT 3, but to PT 4 (the run that immediately precedes it in the table). The value for PT 4 is 4.073 and for PT 5 it is 2.035, a percentage difference of -50.04%.
Transaction Response Time, Average (Compare to Previous)

| Name | Type | run 3 [Base] | run 4 | run 5 | run 6 |
|---|---|---|---|---|---|
| All | TRT | 4.567 | 1.22 (-73.29%) | 2.32 (+90.16%) | 12.455 (+436.85%) |
| TRX_01 | TRT | 2.045 | 4.073 (+99.17%) | 2.035 (-50.04%) | 1.05 (-48.4%) |
| TRX_02 | TRT | 1.045 | 2.07 (+98.09%) | 1.015 (-50.97%) | 1.051 (+3.55%) |
| TRX_03 | TRT | 3.053 | 3.067 (+0.46%) | 2.009 (-34.5%) | 2.654 (+32.11%) |
| TRX_04 | TRT | 6.055 | 6.868 (+13.43%) | 5.011 (-27.04%) | 7.05 (+40.69%) |
Using exactly the same data, the two comparison methods yield different results: the Compare to Baseline method shows only a slight improvement (-0.49%), while the Compare to Previous method shows a much more significant one (-50.04%).
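The arithmetic behind the two comparison methods can be sketched as follows. This is illustrative code only, not part of LoadRunner Enterprise; the values are the TRX_01 averages from the tables above.

```python
# Illustrative sketch of the two comparison methods (not LoadRunner Enterprise code).

def pct_diff(value, reference):
    """Percentage difference of `value` relative to `reference`."""
    return (value - reference) / reference * 100

# Average TRT for TRX_01 across runs 3-6, taken from the tables above.
runs = [2.045, 4.073, 2.035, 1.05]

# Compare to Baseline: every later run is measured against run 3.
baseline = runs[0]
to_baseline = [pct_diff(v, baseline) for v in runs[1:]]

# Compare to Previous: every run is measured against the run just before it.
to_previous = [pct_diff(v, prev) for prev, v in zip(runs, runs[1:])]

print([f"{p:+.2f}%" for p in to_baseline])   # run 5 vs baseline: -0.49%
print([f"{p:+.2f}%" for p in to_previous])   # run 5 vs run 4: -50.04%
```

The same run 5 value yields -0.49% under one method and -50.04% under the other, which is exactly the divergence shown in the two tables.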
For task details, see View and manage test runs in a trend report.
Trend thresholds
To identify significant improvements or regressions in performance, you can define unique thresholds to track differentials between measurements being compared. If a differential exceeds a defined threshold, that value appears in a predetermined color, identifying it as an improvement, minor regression, or major regression.
For example, if you define an improvement threshold for comparing transaction response times as 50%, then any transaction response time that is 50% lower than that of the baseline or previous run (depending on the comparison method) is displayed in the color you defined for improvements.
In the example below, the following performance thresholds for the transaction response time (TRT) measurement have been defined:
- Improvement. At least 90% decrease
- Major Regression. At least 50% increase
These threshold definitions mean that any performance improvement or regression that exceeds these percentages is displayed in color, making it easier to identify.
In the following table, the Compare to Previous comparison method is used.
Transaction Response Time, Average (Compare to Previous)

| Name | Type | run 3 [Base] | run 4 | run 5 |
|---|---|---|---|---|
| Action_Transaction | TRT | 0.002 | 0.94 (+46900%) | 0 (-100%) |
| All | TRT | 0.002 | 0.311 (+15450%) | 0 (-100%) |
In the table above, the value of the TRT measurement for Action_Transaction in test run 4 is 46900% higher than in test run 3, a performance regression that far exceeds the defined threshold for major regressions. Therefore, the value appears in red, the default color for major regressions.
The corresponding value for test run 5 represents a 100% improvement on test run 4. Since this percentage exceeds the defined threshold for improvements, the value appears in green, the default color for improvements.
For task details, see Define trend thresholds.
Custom measurement mapping
The Custom Measurement Mapping feature lets you reconcile inconsistent transaction or monitor names between performance test runs, so that you can properly trend these measurements.
The following are two examples of when you would use the Custom Measurement Mapping feature:
You run a performance test that contains the transaction BuyBook. A while later you run the performance test again. However, in the time between the two performance test runs, the transaction name has been changed to TRX_01_BuyBook.
As a result of this inconsistent naming, you cannot obtain any trending information for this measurement, because LoadRunner Enterprise cannot recognize that the two transactions are the same, and compare them for trending purposes.
To overcome this problem, you map the two measurements (BuyBook and TRX_01_BuyBook) to a new third measurement which you create, for example Buy_Book_mapped. You add this new user-defined measurement to the trend report. LoadRunner Enterprise can then compare two instances of the Buy_Book_mapped transaction and give you meaningful trending information.
You can give the new transaction the same name as one of the current transactions. Additionally, you can configure the mapping so that all future instances of the transaction are automatically mapped to the new transaction name.
You want to compare your application's performance when it runs on different operating systems or when it runs on different Web/application servers.
You run the performance test once on a Windows platform, and then again on a Linux platform. You then want to compare the CPU utilization between the two runs. However, each platform provides a different name for this measurement. For example, % Processor Time (Processor_Total) in Windows, and CPU Utilization in Linux.
LoadRunner Enterprise cannot obtain trending information for this measurement because the measurement names are different.
To overcome this problem, you map the two measurements (% Processor Time (Processor_Total) and CPU Utilization) to a third measurement which you create, for example CPU_mapped. You then add this new user-defined measurement to the trend report. LoadRunner Enterprise can then compare the two instances of the CPU_mapped transaction and give you meaningful trending information.
You can give the new monitor the same name as one of the current monitors. Additionally, you can configure the mapping so that all future instances of the monitor are automatically mapped to the new monitor name.
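Conceptually, both examples amount to rewriting raw measurement names to a single user-defined name before comparison. The sketch below is illustrative only (the mapped names are the ones from the examples above), not how LoadRunner Enterprise implements the feature.

```python
# Illustrative sketch of custom measurement mapping (not LoadRunner Enterprise code).
# Raw measurement names from different runs or platforms are rewritten to a
# single user-defined name so the runs can be trended together.

MEASUREMENT_MAP = {
    "BuyBook": "Buy_Book_mapped",
    "TRX_01_BuyBook": "Buy_Book_mapped",
    "% Processor Time (Processor_Total)": "CPU_mapped",
    "CPU Utilization": "CPU_mapped",
}

def normalize(name):
    """Return the mapped name if one is defined, else the original name."""
    return MEASUREMENT_MAP.get(name, name)

# Measurements from different runs now resolve to a common name:
print(normalize("BuyBook"))          # -> Buy_Book_mapped
print(normalize("CPU Utilization"))  # -> CPU_mapped
print(normalize("TRX_02"))           # unmapped names pass through unchanged
```

Once both raw names resolve to the same mapped name, the report can compare the two instances as if they were a single measurement.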
For task details, see Define and customize mapped measurements.
For details on transaction names, see Transaction naming conventions.
Measurement acronyms
The following table lists all the measurement acronyms that might be used in the trend report:
| Data Type | Full Name | Initials |
|---|---|---|
| Vusers | Running Vusers | VU |
| Errors | Errors | ERR |
| Transactions | Transaction Response Time | TRT |
| | Transactions per Second | TPS |
| | Transaction Summary | TRS |
| Web Resources | Hits per Second, Throughput, Connections | WEB |
| User Defined Data Points | User Defined Data Points | UDP |
| System Resources | Windows Resources | WIN |
| | UNIX Resources | UNX |
| | Server Resources | SRVR |
| | SNMP | SNMP |
| | SiteScope | SiS |
| Web Server Resources | Apache | APA |
| | MS IIS | IIS |
| Web Application Server Resources | MS ASP | ASP |
| Database Server Resources | Oracle | ORA |
| | MS SQL | SQL |
| J2EE | Server Request | J2EE |
| .NET | Server Request | NET |
| Additional Components | COM+ | COM |
| | .NET | NET |
| Application Deployment Solution | Citrix MetaFrame XP | CTRX |