Dashboard metrics
This topic describes commonly used metrics in the OpenText Core Performance Engineering Dashboard.
Common metrics
The metrics pane provides graphs for the following common client measurements.
For information about the transaction summary table, see Display transaction summary data.
Configuration summary bar
A summary of the load test run configuration is displayed in a bar above the metrics pane and data graphs.
For a running test
The following metrics are displayed in the configuration summary bar for a test that is currently running.
Metric name | Description |
---|---|
Elapsed time | The amount of time that the run has been active. |
Running Vusers | The total number of virtual users (Vusers) either running in the test or in the initialization stage. |
Throughput / sec | The amount of data received from the server every second. |
TRX / sec | The number of transactions completed per second. |
Hits / sec | The number of hits (HTTP requests) to the Web server per second. |
Failed Vusers | The number of Vusers that have stopped running or have stopped reporting measurements due to a server or script error. |
Failed TRX | The number of transactions that did not successfully complete. |
Passed TRX | The number of transactions that have completed successfully. |
Run Mode | The test's run mode. For details, see Run mode. |
Think Time | An indicator showing whether think time is included in or excluded from the transaction response time calculations (see the example after this table). For details, see Exclude think time from TRT calculation. |
Errors and warnings | SLA warnings, anomalies, errors, and LG alerts. |
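To see how these counters relate to a script, the following minimal sketch of a classic C-based Web HTTP/HTML Vuser Action may help (the transaction name and URL are placeholders, not taken from this documentation): the lr_start_transaction/lr_end_transaction pair produces the transaction counts, the HTTP requests issued by web_url count toward Hits / sec, and lr_think_time adds the think time that can be included in or excluded from the response time calculation.

```c
Action()
{
    lr_start_transaction("home_page");          /* counted in TRX / sec and in Passed/Failed TRX when it ends */

    web_url("home",                             /* each HTTP request sent by this step counts toward Hits / sec */
            "URL=http://www.example.com/",
            "Mode=HTML",
            LAST);

    lr_think_time(5);                           /* simulated user pause; included in or excluded from TRT per the test settings */

    lr_end_transaction("home_page", LR_AUTO);   /* pass/fail status determined automatically */

    return 0;
}
```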
For a completed test run
The following items are displayed in the configuration summary bar for a test that has finished running.
Metric name | Description |
---|---|
Date & Time | The date and time the test run started. |
Duration | The duration of the test run, including ramp up and tear down. |
Planned Vusers | The number of planned Vusers, by type, as configured in the load test. |
Run mode | The test's run mode. For details, see Run mode. |
Think Time | Whether or not think time was included in the measurements: Included/Excluded. For details, see Exclude think time from TRT calculation. |
Errors and warnings | SLA warnings, anomalies, errors, and LG alerts. |
User data points
The following metrics are available for user data points.
Metric name | Description |
---|---|
Data Points (Sum) | The sum of the values recorded for user-defined data points during the load test run. |
Data Points (Average) | The average of the values recorded for user-defined data points during the load test run. |
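User data points are values that the script itself reports while it runs. In a classic C-based Vuser script this is typically done with lr_user_data_point; the dashboard then aggregates all values reported under a name into the Sum and Average metrics above. A minimal sketch (the data point name and value are placeholders):

```c
Action()
{
    int cart_items = 3;   /* placeholder for a value produced by the business flow */

    /* Report a user-defined data point. All values reported under this name
       are aggregated into Data Points (Sum) and Data Points (Average). */
    lr_user_data_point("cart_items", cart_items);

    return 0;
}
```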
Application and server monitors
The dashboard provides graphs for application and server monitors that were configured to monitor the test run, such as SiteScope, New Relic, Dynatrace, and AppDynamics. The metrics are shown under the Monitors node in the metrics pane. For details, see Monitors.
Load generator health monitoring
The metrics tree's LG health monitoring node shows system health metrics for both on-premises and cloud-based load generators.
The system health metrics include CPU, Disk (on-premises only), and Memory.
You can customize the columns to display as you would with other metrics. For example, you can show the minimum, maximum, and average values for each metric.
By default, health monitoring is enabled for on-premises load generators and disabled for cloud load generators.
To enable health monitoring for cloud load generators:
- Submit a service request requesting this capability.
- Select the Enable health monitoring of cloud load generators option in the load test's Test settings. For details, see Load generators.
Note:
- For Windows-based on-premises load generators, you can monitor common Windows performance measurements, such as LogicalDisk\Free space or Memory\Page Faults/sec. The metrics are shown under the Monitors node. This capability will be deprecated in future releases.
- The aggregation interval for system health metrics is 15 seconds, even if a smaller aggregation interval was configured for the test run. For details, see Results aggregation interval. For better accuracy, each reported value is the average of multiple samples taken during the interval.
TruClient metrics
For load tests that include TruClient protocol scripts, the following client-side performance graphs are displayed.
Metric name | Description |
---|---|
DOM interactive | The average time per document, in milliseconds, until the browser has finished parsing the HTML and DOM construction is complete. |
Page DOMContentLoaded event duration | The average time per document, in milliseconds, it takes for the DOMContentLoaded event to occur (the point at which the document has been completely loaded and parsed, without waiting for stylesheets, images, and subframes to finish loading). |
Page load event duration | The average time per document, in milliseconds, it takes for the load event to occur (the point at which the whole page has loaded, including all dependent resources such as stylesheets and images). |
For details about the graphs, see Client-side measurements.
Note: The Client-Side Measurement Breakdown graph is not available in OpenText Core Performance Engineering. For a list of measurements supported in this graph, see TruClient performance metrics.
Kafka metrics
For load tests that include Kafka protocol scripts, the following graphs are displayed.
Metric name | Description |
---|---|
Producer message rate | The number of messages that the producer published per second. |
Consumer message rate | The number of messages that the consumer received per second. |
Producer throughput | The amount of data (in bytes) that the producer published. |
Consumer throughput | The amount of data (in bytes) that the consumer received. |
Producer send duration | The time between the producer sending a message to the broker and receiving a confirmation that it was delivered. |
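As an illustration of where the producer-side numbers come from, the following minimal sketch (not taken from the product) uses the librdkafka C client with a placeholder broker address and topic: the time between handing a message to the producer and receiving the broker's delivery report corresponds to the send duration, while the number of messages and payload bytes confirmed per second correspond to the message rate and throughput.

```c
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <librdkafka/rdkafka.h>

static struct timespec g_sent_at;   /* when the test message was handed to the producer */

/* Delivery report callback: runs once the broker confirms (or rejects) the message.
   The elapsed time measured here is what "Producer send duration" captures. */
static void dr_msg_cb(rd_kafka_t *rk, const rd_kafka_message_t *msg, void *opaque)
{
    struct timespec now;
    double ms;
    (void)rk; (void)opaque;

    clock_gettime(CLOCK_MONOTONIC, &now);
    ms = (now.tv_sec - g_sent_at.tv_sec) * 1000.0 +
         (now.tv_nsec - g_sent_at.tv_nsec) / 1.0e6;

    if (msg->err)
        fprintf(stderr, "delivery failed: %s\n", rd_kafka_err2str(msg->err));
    else
        printf("confirmed %zu bytes in %.1f ms\n", msg->len, ms);   /* bytes feed producer throughput */
}

int main(void)
{
    char errstr[512];
    const char *payload = "sample message";

    rd_kafka_conf_t *conf = rd_kafka_conf_new();
    rd_kafka_conf_set(conf, "bootstrap.servers", "localhost:9092", errstr, sizeof(errstr));
    rd_kafka_conf_set_dr_msg_cb(conf, dr_msg_cb);

    rd_kafka_t *producer = rd_kafka_new(RD_KAFKA_PRODUCER, conf, errstr, sizeof(errstr));
    if (!producer) {
        fprintf(stderr, "failed to create producer: %s\n", errstr);
        return 1;
    }

    clock_gettime(CLOCK_MONOTONIC, &g_sent_at);
    if (rd_kafka_producev(producer,
                          RD_KAFKA_V_TOPIC("demo-topic"),
                          RD_KAFKA_V_VALUE((void *)payload, strlen(payload)),
                          RD_KAFKA_V_END) != RD_KAFKA_RESP_ERR_NO_ERROR) {
        fprintf(stderr, "produce failed: %s\n", rd_kafka_err2str(rd_kafka_last_error()));
    }

    rd_kafka_flush(producer, 10000);   /* wait for the delivery report before exiting */
    rd_kafka_destroy(producer);
    return 0;
}
```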
Disruption events
For load tests that include disruption events, you can add disruption event data to a Dashboard graph. The disruption event data shows when an event was active and helps you see how other metrics were affected by it. For details, see Disruption events.
See also: