Dashboard actions and alerts
This topic describes the actions you can perform in the dashboard during a test run, and the alerts displayed in the dashboard.
You can configure a load test to enable you to pause the schedule while the test is running. This gives you more control over the test's workload.
You can pause the scheduling multiple times during a test run, but the total pause time cannot exceed the maximum time that you set in the General settings. To enable schedule pausing, before running the test, select the Pause scheduling up to check box in the load test's General settings. For details, see Pause scheduling.
- To pause scheduling when viewing an active test run in the dashboard, click the Pause scheduling button on the toolbar. A counter displays the remaining time that the schedule can be paused during the run.
- To resume, click the button again.
In the test run's report, the Pause Scheduling table displays the pause and resume instances that occurred during the run. For details, see Report.
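The pause budget described above can be sketched as a small model. This is only an illustration of the behavior, not the product's API; the `PauseBudget` class and its methods are hypothetical names:

```python
# Illustrative model (not the product API) of the pause-scheduling budget:
# the schedule may be paused repeatedly, but the total paused time may not
# exceed the maximum configured in the test's General settings.

class PauseBudget:
    def __init__(self, max_pause_seconds):
        self.remaining = max_pause_seconds  # counter shown in the dashboard
        self.paused_at = None

    def pause(self, now):
        # Pausing is refused once the budget is used up.
        if self.remaining <= 0:
            raise RuntimeError("pause budget exhausted")
        self.paused_at = now

    def resume(self, now):
        # Deduct the time spent paused from the remaining budget.
        elapsed = now - self.paused_at
        self.remaining = max(0, self.remaining - elapsed)
        self.paused_at = None
        return self.remaining


budget = PauseBudget(max_pause_seconds=300)  # e.g. 5 minutes allowed in total
budget.pause(now=100)
left = budget.resume(now=160)                # paused for 60 seconds
print(left)                                  # 240 seconds of pause time remain
```

Each pause/resume pair draws down the same shared budget, which is why the counter in the dashboard decreases across multiple pauses.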
You can dynamically increase or decrease the number of running Vusers during the test run. To modify the number of running Vusers, make sure the Add Vusers option is selected in the test's General settings. For details, see Add Vusers.
To change the number of running Vusers during a test run:
1. Click the Change Load button.
2. In the Change Load dialog box, enter the number of Vusers that you want to assign to the displayed scripts. You cannot exceed the maximum number of Vusers displayed.
When you remove Vusers, you are actually suspending them. If you later decide to add more Vusers, the suspended Vusers are added first.
If the script was created using VuGen, enable the Simulate a new user on each iteration option (Runtime Settings > Browser Emulation) to allow you to resume suspended Vusers during a test run.
Suspended Vusers run a vuser_end() action at the conclusion of the test. This may cause unexpected behavior in the test results.
Note: You cannot change the load for scripts that contain rendezvous points.
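The suspend-and-reuse behavior described above can be sketched as a small model. This is an illustration only, not the product's implementation; the `VuserPool` class and its method names are hypothetical:

```python
# Illustrative sketch (not the product's implementation) of how removed
# Vusers are suspended rather than stopped, and how suspended Vusers are
# resumed first when the load is increased again.

class VuserPool:
    def __init__(self):
        self.running = []    # ids of active Vusers
        self.suspended = []  # Vusers removed from the load, kept for reuse
        self._next_id = 0

    def change_load(self, target):
        # Decrease: move surplus Vusers to the suspended list.
        while len(self.running) > target:
            self.suspended.append(self.running.pop())
        # Increase: resume suspended Vusers before creating new ones.
        while len(self.running) < target:
            if self.suspended:
                self.running.append(self.suspended.pop())
            else:
                self.running.append(self._next_id)
                self._next_id += 1


pool = VuserPool()
pool.change_load(10)  # ramp up to 10 running Vusers
pool.change_load(6)   # 4 Vusers are suspended, not stopped
pool.change_load(8)   # 2 of the suspended Vusers are resumed first
print(len(pool.running), len(pool.suspended))  # 8 2
```

Because suspended Vusers are reused before new ones are created, increasing the load after a decrease brings back Vusers that already ran, which is why the Simulate a new user on each iteration runtime setting matters for resuming them cleanly.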
The Alerts section, on the right-hand side of the summary bar, alerts you to problems found during the test run. These alerts are visible both during and after the run. For each problem category, the number of issues is displayed; if there are unread notifications, their number is shown in a red badge.
To view the problems in a category, click the category name. A horizontal ellipsis indicates that a dropdown is available. A new window opens with a list of all the problems in the category. By default, the list is sorted by the problem start time; you can sort the list by any of its metrics by clicking the corresponding column header.
The following problem categories are displayed in the Alert bar:
| Category | Description |
|---|---|
| SLA Warnings | Lists transactions that broke the defined SLA threshold. |
| Anomalies | Lists measurements that significantly changed their behavior. Click > to open an anomaly and view all the relevant metrics it contains. |
| Errors | Shows the number of errors encountered while running the scripts in the test. |
| LG alerts | Lists scripts, for each load generator, that had problems. Click the Add to Dashboard button to add this information as a new table on the dashboard. |
| Splunk | Lists the number of errors encountered during a load test that could not be sent to Splunk. |

Note: When you add a problem's metric to the dashboard (either in an existing graph or in a new graph), the dashboard's time frame is automatically adjusted to that of the problem's time frame. This affects all the graphs included in the dashboard.