Dashboard actions and alerts

This topic describes the actions you can perform via the Dashboard during runtime, and the LG alerts displayed in the dashboard.

Pause scheduling during a test run

You can configure a load test to enable you to pause the schedule while the test is running. This gives you more control over the test's workload.

You can pause the scheduling multiple times during a test run, but the total pause time cannot exceed the maximum time that you set in the General settings. To enable schedule pausing, before running the test, select the Pause scheduling up to check box in the load test's General settings. For details, see Pause scheduling.

  • To pause scheduling when viewing an active test run in the dashboard, click the Pause scheduling button on the toolbar. A counter displays the remaining time that the schedule can be paused during the run.
  • To resume scheduling, click the button again.

In the test run's report, the Pause Scheduling table displays the pause and resume instances that occurred during the run. For details, see Report.

Back to top

Change the Vuser load dynamically

You can dynamically increase or decrease the number of running Vusers during the test run. To modify the number of running Vusers, make sure the Add Vusers option is selected in the test's General settings. For details, see Add Vusers.

To change the number of running Vusers during a test run:

  1. Click the Change Load button.

  2. In the Change Load dialog box, enter the number of users that you want to assign to the displayed scripts. You cannot exceed the maximum number of users displayed.

  3. Click Apply.

When you remove Vusers, you are actually suspending them. If you later decide to add more Vusers, the suspended Vusers are added first.

If the script was created using VuGen, enable the Simulate a new user on each iteration option (Runtime Settings > Browser Emulation) to allow you to resume suspended Vusers during a test run.

Suspended Vusers will run a vuser_end() action at the conclusion of the test. This may cause unexpected behavior in the test results.

Note: You cannot change the load for scripts that contain rendezvous points.

Back to top

Runtime alerts

The Alerts section, on the right-hand side of the summary bar, alerts you to problems found during the test run. These alerts are visible both during and after test runs. For each problem category, the number of issues is displayed. If there are unread notifications, their number is also displayed in a red badge.

To view the problems in a category, click the category name. A new window opens with a list of all the problems in the category. By default, the list is sorted by problem start time. You can sort the list by any of the metrics it includes by clicking the column header.

The following problem categories are displayed in the Alerts section:

SLA Warnings

Lists transactions that broke the defined SLA threshold.

Additional actions:

  • Add. Adds the metric to the currently active graph in the dashboard.
  • Add to new pane. Creates a new graph in the dashboard and adds the metric to it.

Note: When you add a problem's metric to the dashboard (either to an existing graph or to a new graph), the dashboard's time frame is automatically adjusted to the problem's time frame. This affects all the graphs included in the dashboard.

Anomalies

Lists measurements that significantly changed their behavior.

Additional actions:

  • Click > to open an anomaly and view all the relevant metrics it contains.

Script errors

Shows the number of errors encountered while running the scripts in the test.

Known limitations:

  • The number of snapshots generated per test is limited to 20.
  • For each unique error ID, only one snapshot is generated.

Additional actions:

  • Highlight an error and click Copy to place the error message on your clipboard.
  • For TruClient scripts, click Show snapshots to view a snapshot of the error.
  • Click the Add to Dashboard button to add the general error information as a table on the dashboard.
  • Click the Download all snapshots button to download all the error snapshots to a zip file. Each error snapshot is included as a separate file within the zip file.

LG Alerts

Lists scripts, for each load generator, that had any of the following problems:

  • Caused CPU or memory consumption to exceed 85% capacity for 60 seconds or more.
  • Caused the load generator's used disk space to reach 95% or higher.
  • Encountered connectivity problems.

Additional actions:

  • Click the Add to Dashboard button to add this information as a new table on the dashboard.

Streaming Alerts

Lists the number of script errors encountered during a load test that could not be sent to Splunk.

Note:

  • This category is displayed only for load tests that are configured to stream script errors to Splunk.
  • You can only view the number of alerts; you cannot click the number to display a list of the actual alerts.

Back to top

See also: