Dashboard

LoadRunner Cloud's new dashboard enables you to analyze the performance of a selected run for a load test.

Tip: Hover over a data point to display a tooltip that shows the metric name, the value, and the run ID.

Where do I find it?

Use one of the following methods:

  • Navigate to the Load Tests page, click on a test, and navigate to the Runs pane. Click on a run.

  • Open the Results page and click (more options) in the row of the selected test run. Click Dashboard.

Note: If the New Dashboard and Report toggle is not switched on, move the toggle to ON. For details about the old dashboard, see Former dashboard.

Back to top

Dashboard views

A dashboard view is contained in a tab. You can create up to 20 tabs for a load test run, with each tab showing a different dashboard.

For example, for run number 10 of a load test:

  • Tab 1 can show metrics A, B, and C and compare the selected run to another run (number 11).
  • Tab 2 can show metrics X, Y, and Z and compare the selected run to a different run (number 12).

The tabs are persistent per user and per load test. This means that once you create a dashboard for a load test run, any tabs you create (both the initial tab and any additional ones) apply to all runs of the same load test.

Tab controls are located at the bottom of the dashboard. To create and work with tabs, do one of the following:

Action Method
Create a tab
  • Right-click an existing tab and select New tab.

  • Click .

The new tab that is created contains metrics for the latest run of the selected load test. If there is an active run of the load test (that is, it is currently running) then that is considered the latest run.

Rename a tab Right-click a tab and select Rename.
Duplicate a tab Right-click a tab and select Duplicate. The name of the new, duplicated tab is the same as the original tab name with the addition of -1. For example, if the original tab name is MyTab, the duplicated tab name is MyTab-1.
Close a tab
  • Right-click a tab and select Close.
  • Click the X next to the tab name.

Note:

  • A closed tab cannot be re-opened.

  • There must be at least one tab open.

A dashboard view comprises two panes. The left pane shows which metrics you are seeing and provides various configuration options. The right pane shows the actual data in multiple graphs.

Tip: Click Transactions Summary at the top of the dashboard to open a new tab with a summary of the transactions. To return to the metric graphs, click the relevant tab at the bottom of the dashboard.

Metrics pane

The metrics pane displays a hierarchical tree of the metrics available for the selected load test run. You can display the following columns by clicking next to the run number at the top and selecting the column you want to display:

Column Description
Value The value of the metrics you select to view, for a specific time selected in the graph. When you move the time picker in the graph, the value changes accordingly.
Break The percentage of SLA breaks for the metric.
Anomaly

The sum of the metric's anomaly ranks and, in parentheses, the number of anomalies.

The rank denotes the likelihood that the metric caused the anomaly, and is determined by three criteria:

  • Which data point was the earliest to deviate from the baseline?
  • How big was the deviation?
  • How long was the deviation?

The rank ranges from 0 to 5.
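The rank calculation itself is internal to LoadRunner Cloud, but the way three separate criteria can combine into a single 0-5 rank can be illustrated with a minimal sketch. Everything here (the function, its parameter names, the caps, and the equal weighting) is a hypothetical illustration, not the product's actual algorithm:

```python
def anomaly_rank(first_deviation_s, deviation_pct, duration_s):
    """Hypothetical illustration of ranking an anomaly on a 0-5 scale.

    This is NOT LoadRunner Cloud's actual algorithm; it only sketches how
    the three documented criteria (how early, how big, and how long the
    deviation from the baseline was) could combine into one rank.
    """
    # Earlier deviations score higher (illustrative 10-minute window).
    earliness = max(0.0, 1.0 - first_deviation_s / 600.0)
    # Larger deviations score higher (capped at 100% above baseline).
    size = min(deviation_pct / 100.0, 1.0)
    # Longer deviations score higher (capped at 5 minutes).
    length = min(duration_s / 300.0, 1.0)
    # Average the three criteria and map onto the documented 0-5 range.
    return round(5 * (earliness + size + length) / 3)

# A metric that deviated immediately, by 80%, for 4 minutes:
print(anomaly_rank(0, 80, 240))  # → 4
```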

You can configure load test runs and metrics.

Select a load test run

To change the run displayed in the dashboard, navigate to the project node in the masthead to exit the dashboard. Open the Runs pane and select a different run.

Compare two load test runs

You can compare the selected run to another run of the same load test. The relevant metrics for the additional run are added to all the graphs in the dashboard.

To add a run for comparison, click and in the dialog box that opens do one of the following:

Action Description
Add a run

Add a non-benchmarked run for comparison.

  1. Select the radio button for the required run.
  2. Click Compare.
Add a benchmark run

Add the benchmarked run for the load test for comparison.

Click Compare to benchmark.

For details on setting a run as a benchmark, see Test details.

Note: If you change the benchmark to a different run of the load test, any dashboard that already contains a comparison to the benchmark is updated to reflect the new benchmark run. If you do not want this to happen, click Compare with run id instead of Compare to benchmark. This will add the currently benchmarked run for comparison, but it will not be updated in the event of a benchmark change.

Clear a run

Remove the run currently selected for comparison.

Click Clear Selection.

Click Apply.

When a run has been added for comparison, the run number or is displayed next to the Compare button. Click the run number or the Benchmark button to re-open the dialog box to change your selection.

Value, Break, and Anomaly columns for the added run are displayed in the metrics pane, showing the values for the selected metrics at the time point selected in the graph.

Once added, you can hide or show the added run by clicking .

Metrics

By default, the first tab in the dashboard for a load test includes the following metrics and a graph is displayed for each one:

  • Running Vusers

  • Hits per second

  • Throughput

  • Errors

When you create additional dashboard tabs for the load test, each new tab contains a single graph with no selected metrics.

Metrics that are selected for the dashboard are denoted by , if the metric is included in the currently active graph, or by , if the metric is included in another graph.

Metric names that are grayed out and written in italics (both in the metrics pane and in graph and table legends) indicate metrics that have been carried over from a previous run, but for which there is no data. (Dashboard layout and configuration are persistent between different runs of the same load test.) The lack of data for a metric may be permanent (for example, due to a change in a script, or because an on-premises load generator is no longer available), or temporary, because data has not yet been generated for the metric in the current run.

You can clear these empty metrics from the dashboard (both from the metrics pane and from all the graphs and tables in which they are included) as follows:

  • Clear a single empty metric

    Click the to the left of the metric name in the metrics pane.

  • Clear all empty metrics

    To clear all the empty metrics, do one of the following:

    • Right-click a metric name and from the menu displayed, select Clear empty metrics.

    • Click . If there are no empty metrics, this option is not displayed.

Metrics that breach a configured SLA, or have anomalies, are further denoted as follows:

Icon Description
An SLA was breached, but the breach did not cause the load test run to fail.
An SLA was breached and the breach caused the load test run to fail.
An anomaly was detected.

A metric that has both an SLA breach and an anomaly (or a parent metric that includes both an SLA breach and an anomaly for its children) is denoted by .

The percentage of SLA breaks for the metric is displayed in the Break column, and the anomaly rank and number of anomalies are displayed in the Anomaly column.

Use the following actions to configure the metrics you want to include in the dashboard.

Note:  

  • When you select or deselect metrics, they are only added to or removed from the currently active graph.
  • Transaction response time breakdown metrics are not added to the currently active graph. Instead, each such metric is displayed in its own graph. You cannot add other metrics to these graphs.
  • You can include a combined total of 1,000 metrics in all of the graphs in the dashboard.
Action Description
Select or deselect a metric for viewing

To select a metric, click the empty cell to the left of the metric name. The icon denotes that a metric is selected.

To deselect a metric, click the icon next to the metric name.

To hide all metrics, click Hide all above the metrics tree.

Select or deselect sub-metrics by type

Right-click a metric name to display a menu showing the number of sub-metrics by type. For example, click a script name to show the number of related transactions, locations, emulations, and so forth. Select a metric type and then click Show or Hide to show or hide all the metrics of that type under the parent metric.

Note: When hiding sub-metrics, you can also select All to hide all the metrics under the parent metric, regardless of their type.

Search for a metric Enter a string in the search box and click the magnifying glass icon. The metrics whose name contains the string are displayed in the tree.
Open or close a metric Click the arrow next to a metric name to open or close the tree for that specific metric.
Open all selected metrics Click Expand visible to open the tree for all selected metrics.
Resize value columns

Click next to the run number at the top of a Value column and select:

  • Autosize This Column (for one column)
  • Autosize All Columns (for all columns)
Highlight metrics

To highlight a metric, click on it. To highlight multiple metrics, use Ctrl+click or Shift+click.

The data lines for the selected metrics are also highlighted in the data graph.

The number of selected metrics is displayed at the bottom of the metrics pane.

Back to top

Data graphs

By default, four graphs are opened, one for each of the following metrics:

  • Running Vusers

  • Hits per second

  • Throughput

  • Errors

The currently active graph is denoted by a blue border. Click on a graph to make it the currently active one. When you select or deselect metrics, they are added to, or removed from, the currently active graph only. The graph's title comprises the names of the graph's selected metrics.

Note: Transaction response time breakdown metrics are not added to the currently active graph, but are added as a separate graph.

The graphs are line graphs, with a line for each selected metric in the graph. The lines are color coded and you can see the color key in the Value column in the metrics pane, and in the legend displayed at the bottom of the graph.

SLA breaches are shown as a red shaded area and the SLA threshold is denoted by a horizontal, dashed line.

Anomalies are shown as an orange line and when you click on an anomaly, an orange shaded area is displayed. The intensity of the orange color is determined by the rank of the anomaly (the higher the rank, the stronger the color).

The x-axis of the graph shows the load test run's time line and the y-axis shows the metric values. If you have selected multiple runs in the dashboard, by default the time line is:

  • that of the longest run, if the selected runs do not include an active run.

  • that of the active run, if the selected runs include an active run.

Customize all graph views

Use the following actions to customize the view for all graphs:

Action Description
Show/Hide Metrics tree

To show or hide the Metrics pane, click on the Show/Hide metrics tree toggle button .

Enable or disable tooltips You can enable or disable tooltips by toggling the Show points details button.
Select the graph scale

To change the scale of the graphs, select one of the following options from the drop-down menu at the top right of the dashboard:

  • No scale. No scaling is applied to the y-axis; actual values are shown.
  • Scale. Uses a linear scale for the y-axis and shows actual values.
  • Log scale. Uses a logarithmic scale for the y-axis. This can be helpful when the data covers a large range of values, as it reduces the range to a more manageable size.

Note: Graphs for transaction response time breakdown metrics are not scaled.
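To see why a logarithmic scale helps when the data covers a large range of values, consider a minimal sketch (the sample values are illustrative only):

```python
import math

# Illustrative only: metric values spanning four orders of magnitude.
values = [5, 500, 50_000]                 # e.g. response times in ms
logs = [math.log10(v) for v in values]    # what a log scale plots

# On a linear y-axis, the first two points are indistinguishable next to
# 50,000; on a log10 axis, the three points are evenly spaced.
print(logs)
```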

Set a time mode

By default, the time mode used for all the graphs included in a dashboard tab is the test time. That is, the starting time of zero is the beginning of the test and the time frame is the elapsed time from the start of the test.

You can change the time mode to clock time so that the actual, local clock time is displayed instead.

To change between Test time and Clock time, select the required option from the Test time dropdown.

Note:

  • The time mode is persistent for the tab.
  • If you take a snapshot of a graph, the selected time mode is indicated in the snapshot.
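The relationship between the two time modes can be sketched as follows (the start time and elapsed offset are example values only):

```python
from datetime import datetime, timedelta

# The same data point rendered in both time modes: "test time" is the
# elapsed time since the run started; "clock time" is the local
# wall-clock time, i.e. run start + elapsed time.
run_start = datetime(2024, 5, 1, 14, 30, 0)   # example start time
elapsed = timedelta(minutes=12, seconds=30)    # test-time position

clock_time = run_start + elapsed
print(clock_time.strftime("%H:%M:%S"))         # → 14:42:30
```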
Stop a test run

Note: This option is available for active runs only.

Click Stop Test to stop an active test run.

Customize individual graphs

To highlight a line in the graph, click on it. To highlight multiple lines, use Ctrl+click. The metrics for highlighted lines are also highlighted in the metrics pane.

Use the following actions to customize an individual graph's view:

Action Description
Show/Hide graph

You can hide or show the graph by toggling the Show/Hide graph button .

Show/Hide legend

You can hide or show the graph legend by toggling the Legend button.

When the legend is displayed, you can:

  • Click a column name to sort the legend table by that column.
  • Click the Options menu icon next to a column name to display a list of legend options.
Choose columns

Expand the Choose columns dropdown to select the columns to show in the table. Press Esc to close the column selection box.

Add to report

You can create a snapshot for a dashboard graph and include it in the load test run's report.

To create a snapshot and add it to the report, click in the toolbar of the graph you want to add. The snapshot is automatically added as a new section in the report. You can add a maximum of 10 snapshots to a report. For details, see Report.

Search

Enter text to search for a specific metric.

Add, remove, and reposition graphs

Use the following actions to add, remove, and reposition graphs:

Action Description

Split horizontal

Click to split the current graph into two. The left-hand graph contains the original graph and the right-hand graph is a new, empty graph.

To populate the new, empty graph, click it to make it active and then select the metrics you want to appear in it.

Split vertical

Click to split the current graph into two. The top graph contains the original graph and the bottom graph is a new, empty graph.

To populate the new, empty graph, click it to make it active and then select the metrics you want to appear in it.

Close a graph Click to close an open graph.
Reposition a graph Click and drag a graph to reposition it in the dashboard view.

Display transaction summary data

You can display transaction summary data in a table using the following options. The table is displayed in its own pane, either in the current tab or in a new tab.

Display Action Outcome
Selected transactions for a run in an existing tab In the metrics pane, select the transactions you want to display. A new pane opens in the existing tab that includes a table showing summary data for the selected transactions.
All transactions for a run in an existing tab In the metrics pane, right-click Summary and then select Show > Transactions. A new pane opens in the existing tab that includes a table showing summary data for all the transactions included in the run.
All transactions for a run in a new tab Click . A new tab is created with one pane that includes a table showing summary data for all the transactions included in the run.

Note: Comparisons to another run and time picker settings are not applicable to the Transactions Summary table.

Customize the time frame

Use the following actions to customize the time frame for the dashboard tab's graphs:

Action Description
View a specific time

Click within a graph to select a specific time in the run. The views of all the graphs are updated accordingly and the metric values displayed in the Value column in the metrics pane are also updated. The specific time selected is displayed in a time counter at the top left of the graph area.

Zoom to a specific frame

The time frame for which results are displayed is shown at the top of the graph. You can change the time range using one of these options:

  • Drag, stretch, or shrink the time slider.
  • Place the cursor in the graph area and drag it left or right to select a time range.
  • Manually change the start and end time counters located above the graphs.
Select a preset time frame

Note: This option is available for active runs only.

From the drop-down list to the right of the time slider, select a preset time frame to view. The time frame shows the most recent minutes of the run, according to the selected preset.

Available options are 5, 15, 30, and 60 minutes. The default time frame is five minutes.

The graph is refreshed automatically every five seconds.

If you use the time slider to select a time frame, Manual appears in the preset box and the graph is no longer automatically refreshed.
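The preset behaves like a sliding window over the most recent part of an active run. A minimal sketch, using assumed function and parameter names:

```python
def visible_window(elapsed_s, preset_min=5):
    """Hypothetical sketch of the 'last N minutes' preset window.

    Given the elapsed run time in seconds, the visible time frame is the
    most recent `preset_min` minutes, or the whole run if it is shorter.
    (Names and units here are assumptions for illustration only.)
    """
    start = max(0, elapsed_s - preset_min * 60)
    return start, elapsed_s

# 15 minutes into the run, with the default 5-minute preset:
print(visible_window(900))  # → (600, 900)
```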

Back to top

Configuration summary bar

A summary of the load test run configuration is displayed in a bar above the metrics pane and data graphs.

The following metrics are displayed in the configuration summary:

Metric Description
For a running test run:
Elapsed time The amount of time that the run has been active.
Running Vusers

The total number of Vusers either running in the test or in the initialization stage. (A virtual user, or Vuser, emulates user actions in your application; you define how many Vusers perform the business process recorded in your script during the load test.)

Throughput / sec The amount of data received from the server every second.
Transactions / sec The number of transactions completed per second.
Hits / sec The number of hits (HTTP requests) to the Web server per second.
Failed Vusers The number of Vusers that have stopped running or have stopped reporting measurements due to server or script error.
Failed transactions The number of transactions that did not successfully complete.
Passed transactions The number of transactions that have completed successfully.
Run Mode The test's run mode. For details, see Run mode.
Think Time An indicator showing whether think time was included or excluded from the transaction response time calculations. For details, see Exclude think time from TRT calculation.
For a completed test run:
Date & Time

The date and time the test run started.

Duration

The duration of the test run, including ramp up and tear down.

Planned Vusers The number of planned Vusers, by type, as configured in the load test.
Run mode The test's run mode. For details, see Run mode.

Back to top

Pause scheduling during a test run

You can configure a load test to enable you to pause the schedule while the test is running. This gives you more control over the test's workload.

You can pause the scheduling multiple times during a test run, but the total pause time cannot exceed the maximum time that you set in the General settings. To enable schedule pausing, select the load test's General settings > Pause scheduling up to check box. For details, see Pause scheduling.

  • To pause scheduling when viewing an active test run in the dashboard, click the Pause scheduling button on the toolbar. A counter displays the remaining time for which the schedule can be paused during the run.
  • To resume, click .

In the test run's report, the Pause Scheduling table displays the pause and resume instances that occurred during the run. For details, see Report.

Back to top

Change the Vuser load dynamically

You can dynamically increase or decrease the number of running Vusers during the test run. In order to modify the number of running Vusers, make sure you have the Add Vusers option selected in the test's General settings. For details, see Add Vusers.

To change the number of running Vusers during a test run:

  1. Click to open the Change Load dialog box.

  2. For any script displayed, change the number of current users to the new number you want. You cannot exceed the maximum number of users displayed.

    Example: For any of the scripts listed, you can enter any number between 0 and 20.

  3. Click Apply.

When you remove Vusers, you are actually suspending them. If you later decide to add more Vusers, the suspended Vusers are added first.
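The suspend-and-resume behavior described above can be sketched as follows (the function and parameter names are hypothetical; only the resume-first rule comes from this documentation):

```python
def change_load(running, suspended, target):
    """Hypothetical sketch of the documented resume-first behavior.

    When you reduce the load, removed Vusers are suspended rather than
    stopped; when you later increase it, suspended Vusers are resumed
    before any new Vusers are started.
    """
    if target < running:                 # decrease: suspend the excess
        suspended += running - target
        running = target
        started = 0
    else:                                # increase: resume suspended first
        needed = target - running
        resumed = min(needed, suspended)
        suspended -= resumed
        started = needed - resumed       # start new Vusers only if required
        running = target
    return running, suspended, started

# Drop from 20 to 12 Vusers, then raise the load back to 18:
state = change_load(20, 0, 12)          # → (12, 8, 0): 8 Vusers suspended
print(change_load(state[0], state[1], 18))  # → (18, 2, 0): all resumed
```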

If the script was created using VuGen, enable the Simulate a new user on each iteration option (Runtime Settings > Browser Emulation) to allow you to resume suspended Vusers during a test run.

Suspended Vusers will run a vuser_end() action at the conclusion of the test. This may cause unexpected behavior in the test results.

Note: You cannot change the load for scripts that contain rendezvous points.

Back to top

Alert bar

The Alert bar, located above the data graphs on the right-hand side, alerts you to problems found during the test run. For each problem category, the number of issues is displayed. The number of unread notifications, if any, is also displayed in a red badge.

To view the problems, click on a category name. A new window opens with a list of all the problems in the category. By default, the list is sorted by the problem start time, but you can sort the list by any of the metrics included in the list by clicking the relevant column header.

The following problem categories are displayed in the Alert bar:

Category Description Additional actions
SLA Warnings Lists transactions that broke the defined SLA threshold.
  • Add. This adds the metric to the currently active graph in the dashboard.
  • Add to new pane. This creates a new graph in the dashboard and adds the metric to it.

Note: When you add a problem's metric to the dashboard (either in an existing graph or in a new graph), the dashboard's time frame is automatically adjusted to that of the problem's time frame. This affects all the graphs included in the dashboard.

Anomalies

Lists measurements that significantly changed their behavior.

Click > to open an anomaly and view all the relevant metrics it contains.

LG Alerts

Lists scripts, for each load generator, that had any of the following problems:

  • Caused CPU or memory consumption to exceed 85% capacity for 60 seconds or more.
  • Caused the load generator's used disk space to reach 95% or higher.
  • Encountered connectivity problems.
Click to add this information as a new table on the dashboard.
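The thresholds above can be illustrated with a minimal sketch (the sampling model and all names are assumptions; only the 85%/60-second, 95% disk, and connectivity rules come from this documentation):

```python
def lg_alerts(cpu_pct_per_sec, disk_used_pct, connectivity_ok):
    """Hypothetical check mirroring the documented load generator alerts.

    cpu_pct_per_sec: assumed one CPU (or memory) sample per second.
    """
    alerts = []
    # CPU or memory above 85% capacity for 60 consecutive seconds or more.
    streak = 0
    for pct in cpu_pct_per_sec:
        streak = streak + 1 if pct > 85 else 0
        if streak >= 60:
            alerts.append("cpu")
            break
    # Used disk space at 95% or higher.
    if disk_used_pct >= 95:
        alerts.append("disk")
    # Connectivity problems.
    if not connectivity_ok:
        alerts.append("connectivity")
    return alerts

# 90 seconds at 90% CPU, 96% disk used, connectivity fine:
print(lg_alerts([90] * 90, 96, True))  # → ['cpu', 'disk']
```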
Streaming Alerts

Lists the number of script errors encountered during a load test that could not be sent to Splunk.

Note:

  • This category is displayed only for load tests that have been configured to stream script errors to Splunk.
  • You can only view the number of alerts and cannot click the number to display a list of the actual alerts.
 
Script errors

Shows the number of errors encountered while running the scripts in the test.

Known limitations:

  • The number of snapshots generated per test is limited to 20.
  • For each unique error ID, only one snapshot is generated.

Additional actions:

  • Highlight an error and click to copy the error message to your clipboard.
  • For TruClient scripts, click to view a snapshot of the error.
  • Click to add the general error information as a table on the dashboard.
  • Click to download all the error snapshots to a zip file. Each error snapshot is included as a separate file within the zip file.

Back to top

See also: