Dashboard metrics

This topic describes the commonly used metrics in the OpenText Core Performance Engineering Dashboard.

Common metrics

The metrics pane provides graphs for the following common client measurements:

Running Vusers
  Description: The number of running Vusers during the test.
  What this tells me about my test: The capacity of your test.

Failed Vusers
  Description: The number of Vusers that terminated unexpectedly during the test.
  What this tells me about my test: How many virtual users terminated unexpectedly. Typically, this value should be 0 or close to 0. A high value indicates a problem in the load test; in such cases, we recommend that you check the script and the environment.

Hits Per Second
  Description: The number of hits (HTTP requests) to the Web server per second.
  What this tells me about my test: The workload level on the server.

Throughput
  Description: The amount of data received from the server every second, in bytes.
  What this tells me about my test: How the volume of data affects performance. Compare this graph to the Transaction Response Time graph to see how throughput affected transaction performance.

Errors
  Description: The number of errors encountered by Vusers during the test run.
  What this tells me about my test: Performance problems. A large number of errors during a specific period of the test can indicate a performance problem with the application.

Errors per second
  Description: The number of errors encountered by Vusers per second during the test run.
  What this tells me about my test: The rate of errors during your test.

Connections
  Description: The number of open TCP/IP connections at each point in time in the load test scenario.
  What this tells me about my test: When additional connections are needed. For example, if the number of connections reaches a plateau and the transaction response time increases sharply, adding connections would probably cause a dramatic improvement in performance (a reduction in the transaction response time).

Transactions
  Description: Details about the test's transactions.

The Transactions node includes the following sub-nodes: Summary, TRT (Transaction Response Time) 90th percentile, TRT [AVG], TRT [MIN], TRT [MAX], Total transactions, Passed transactions, Failed transactions, Total transactions per second, Passed transactions per second, Failed transactions per second, and SLA breakers.

Tip: Choose one of the sub-nodes to display the values per transaction, script, or test.

For information about the transaction summary table, see Display transaction summary data.
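
Transaction response time is measured between explicit transaction markers in the Vuser script. The following minimal C Vuser sketch (the transaction name and URL are placeholders, and the exact functions depend on your protocol) shows where the TRT measurement starts and ends, and where think time, which can be excluded from the TRT calculation, occurs:

  Action()
  {
      /* TRT for "home_page" is measured from lr_start_transaction to
         lr_end_transaction. The dashboard's TRT [AVG], [MIN], [MAX],
         and 90th percentile values are computed over all samples of
         this transaction. */
      lr_start_transaction("home_page");

      web_url("home",
              "URL=https://example.com/",   /* placeholder URL */
              LAST);

      /* Simulated user pause. Depending on the test settings, this
         pause can be excluded from the TRT calculation. */
      lr_think_time(5);

      lr_end_transaction("home_page", LR_AUTO);
      return 0;
  }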

Tip: We recommend using the TruClient protocol to view Web page load metrics, end-to-end transaction response time including browser rendering time, and other client-side metrics. For details, see TruClient performance metrics in the VuGen Help Center.

Configuration summary bar

A summary of the load test run configuration is displayed in a bar above the metrics pane and data graphs.

Load test summary

For a running test

The following metrics are displayed in the configuration summary bar for a test that is currently running:

Elapsed time: The amount of time that the run has been active.
Running Vusers: The total number of virtual users (Vusers) either running in the test or in the initialization stage.
Throughput / sec: The amount of data received from the server every second.
TRX / sec: The number of transactions completed per second.
Hits / sec: The number of hits (HTTP requests) to the Web server per second.
Failed Vusers: The number of Vusers that have stopped running, or have stopped reporting measurements due to a server or script error.
Failed TRX: The number of transactions that did not complete successfully.
Passed TRX: The number of transactions that completed successfully.
Run Mode: The test's run mode. For details, see Run mode.
Think Time: An indicator showing whether think time was included in or excluded from the transaction response time calculations. For details, see Exclude think time from TRT calculation.
Errors and warnings: SLA warnings, anomalies, errors, and LG alerts.

For a completed test run

The following items are displayed in the configuration summary bar for a test that has finished running:

Date & Time: The date and time the test run started.
Duration: The duration of the test run, including ramp up and tear down.
Planned Vusers: The number of planned Vusers, by type, as configured in the load test.
Run mode: The test's run mode. For details, see Run mode.
Think Time: Whether think time was included in the measurements (Included/Excluded). For details, see Exclude think time from TRT calculation.
Errors and warnings: SLA warnings, anomalies, errors, and LG alerts.

User data points

The following metrics are available for user data points:

Data Points (Sum): The sum of the values recorded for user-defined data points throughout the load test scenario run.
Data Points (Average): The average of the values recorded for user-defined data points during the load test scenario run.
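
User-defined data points are emitted from the Vuser script itself. As a minimal C sketch (the metric name and value below are placeholders), each lr_user_data_point call records one sample, and the dashboard aggregates the samples into the Sum and Average graphs above:

  Action()
  {
      double queue_length;

      /* Obtain an application-specific value, for example by parsing
         a server response; a fixed placeholder is used here. */
      queue_length = 42.0;

      /* Record one sample of the user-defined metric
         "orders_queue_length" (an arbitrary example name). */
      lr_user_data_point("orders_queue_length", queue_length);

      return 0;
  }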

Application and server monitors

The dashboard provides graphs for application and server monitors that were configured to monitor the test run, such as SiteScope, New Relic, Dynatrace, and AppDynamics. The metrics are shown under the Monitors node in the metrics pane. For details, see Monitors.

Load generator health monitoring

The metrics tree's LG health monitoring node shows system health metrics for both on-premises and cloud-based load generators.

The system health metrics include CPU, Disk (on-premises only), and Memory.

You can customize which columns are displayed, as you can for other metrics. For example, you can show the minimum, maximum, and average values for each metric.

By default, health monitoring is enabled for on-premises load generators and disabled for cloud load generators.

To enable health monitoring for cloud load generators:

  1. Submit a service request to enable this capability.

  2. Select the Enable health monitoring of cloud load generators option in the load test's Test settings. For details, see Load generator settings.

Note:

  • For Windows-based on-premises load generators, you can monitor common Windows performance measurements, such as LogicalDisk\Free space or Memory\Page Faults/sec. The metrics are shown under the Monitors node. This capability will be deprecated in future releases.

  • The aggregation interval for system health metrics is 15 seconds, even if a smaller aggregation interval was configured for the test run. For details, see Results aggregation interval. To achieve better accuracy, the average of multiple samples in each interval is used, as illustrated in the sketch below.
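
The following conceptual C sketch shows such interval averaging. It is an assumption about the general technique, not the product's actual implementation:

  #define INTERVAL_SEC 15   /* fixed aggregation interval, in seconds */

  typedef struct { long ts; double value; } Sample;   /* illustrative type */

  /* Mean of all samples whose timestamp falls within
     [bucket_start, bucket_start + INTERVAL_SEC). */
  double interval_average(const Sample *s, int n, long bucket_start)
  {
      double sum = 0.0;
      int count = 0;
      int i;

      for (i = 0; i < n; i++) {
          if (s[i].ts >= bucket_start && s[i].ts < bucket_start + INTERVAL_SEC) {
              sum += s[i].value;
              count++;
          }
      }
      return count > 0 ? sum / count : 0.0;
  }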

Kafka metrics

For load tests that include Kafka protocol scripts, the following graphs are displayed:

Producer message rate: The number of messages that the producer published per second.
Consumer message rate: The number of messages that the consumer received per second.
Producer throughput: The amount of data (in bytes) that the producer published.
Consumer throughput: The amount of data (in bytes) that the consumer received.
Producer send duration: The time from when the producer sends a message to the broker until it receives the broker's confirmation.

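To make the producer send duration concrete, the following standalone C sketch uses the open-source librdkafka client (the broker address and topic name are placeholders; this illustrates the measurement concept and is not the product's Kafka protocol implementation). It timestamps each message when it is produced and computes the elapsed time when the broker's delivery confirmation arrives:

  #include <librdkafka/rdkafka.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <time.h>

  /* Delivery report callback: called once the broker confirms (or fails)
     the message. m->_private carries the opaque passed to rd_kafka_produce(). */
  static void dr_msg_cb(rd_kafka_t *rk, const rd_kafka_message_t *m, void *opaque)
  {
      struct timespec *sent = (struct timespec *)m->_private;
      struct timespec now;
      double ms;

      clock_gettime(CLOCK_MONOTONIC, &now);
      ms = (now.tv_sec - sent->tv_sec) * 1e3 +
           (now.tv_nsec - sent->tv_nsec) / 1e6;
      printf("producer send duration: %.2f ms (err=%d)\n", ms, (int)m->err);
      free(sent);
  }

  int main(void)
  {
      char errstr[512];
      rd_kafka_conf_t *conf = rd_kafka_conf_new();
      rd_kafka_t *rk;
      rd_kafka_topic_t *rkt;
      struct timespec *sent;

      rd_kafka_conf_set(conf, "bootstrap.servers", "localhost:9092",
                        errstr, sizeof(errstr));        /* placeholder broker */
      rd_kafka_conf_set_dr_msg_cb(conf, dr_msg_cb);

      rk  = rd_kafka_new(RD_KAFKA_PRODUCER, conf, errstr, sizeof(errstr));
      rkt = rd_kafka_topic_new(rk, "demo-topic", NULL);  /* placeholder topic */

      sent = malloc(sizeof *sent);
      clock_gettime(CLOCK_MONOTONIC, sent);

      /* Send one message; the send timestamp travels as the message opaque. */
      rd_kafka_produce(rkt, RD_KAFKA_PARTITION_UA, RD_KAFKA_MSG_F_COPY,
                       "hello", 5, NULL, 0, sent);

      rd_kafka_flush(rk, 10000);   /* waits for the delivery report callback */
      rd_kafka_topic_destroy(rkt);
      rd_kafka_destroy(rk);
      return 0;
  }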

Disruption events

For load tests that include disruption events, you can add disruption event data to a Dashboard graph. The disruption event data shows when an event was active and helps you see how other metrics were affected by it. For details, see Disruption events.
