Before running tests
This section describes preparation tasks to complete before you run tests.
Set notification and On Failure rules
You can set test set notification rules and On Failure rules to define what action ALM takes when selected events occur during a test set run.
Note: This is available for Functional and Default test sets only.
To set On Failure rules for a test set:
- From the test set tree, select the target test set, and go to the Automation tab.
- In the On Automated Test Failure section, configure the following:
  - Rerun test. Controls whether ALM reruns an automated test if it fails. If selected, the following options are available:
    - Maximum test reruns. Specifies the number of times an automated test should be rerun upon failure.
    - Cleanup test before rerun. Runs a cleanup test before each test rerun. Click the down arrow to select a cleanup test.
  - On final failure. Specifies what ALM does on the final failure of any test in the test set. Includes the following options:
    - Do nothing.
    - Stop the test set.
    - Rerun the test set. Runs the test set again, up to the maximum number of times specified in the Maximum test set reruns box.
    Available for: Default test sets only.
  - Maximum test set reruns. The number of times a test set should be rerun on final failure of any test in the test set.
  - Settings per test. Opens the On Test Failure dialog box, displaying only the automated tests in the test set. It enables you to change the default failure rules for a test in the test set. These rules tell ALM what to do if an automated test in the test set fails.
To set the number of times specific automated tests should be rerun upon failure:
- Select one or more automated tests.
- Select a number in the Reruns box. Alternatively, select the number from the Number of Reruns column.
To select a cleanup test for specific automated tests:
- Select one or more automated tests.
- Click the Cleanup Test button and select a cleanup test. Alternatively, click the down arrow in the Cleanup Test Before Rerun column.
To copy settings from one automated test to another:
- Select a test and click Copy Settings.
- Select another test and click Paste Settings.
To set the settings of a specific automated test as the default failure rules:
- Select the automated test.
- Click Set to Default.
To clear the settings of a specific automated test:
- Select the automated test.
- Click Clear.
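The rerun rules amount to a simple retry loop. The following Python sketch models the behavior described above; it is illustrative only, and `run_test`, `run_cleanup`, and the status strings are hypothetical placeholders, not ALM APIs:

```python
def apply_rerun_rules(run_test, run_cleanup=None, max_test_reruns=0):
    """Model of the per-test On Failure rules: rerun a failing test up to
    max_test_reruns times, optionally running a cleanup test first."""
    status = run_test()                      # returns "Passed" or "Failed"
    reruns = 0
    while status == "Failed" and reruns < max_test_reruns:
        if run_cleanup is not None:
            run_cleanup()                    # "Cleanup test before rerun"
        status = run_test()
        reruns += 1
    return status    # a final "Failed" triggers the On final failure action

# Example: a test that fails twice, then passes on the second rerun.
results = iter(["Failed", "Failed", "Passed"])
print(apply_rerun_rules(lambda: next(results), max_test_reruns=2))  # Passed
```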
To set notification rules for a test set:
- From the test set tree, select the target test set, and go to the Automation tab.
- In the Notification section, configure the following:
  - Send email. Notifies a specified user if any of the selected events occur. Includes the following options:
    For Default test sets:
    - Any test finishes with status "Failed".
    - Environmental failure (network problems, hardware failure, etc.). Environmental failure could include, for example, function calls not returning, access violations, version incompatibility between application components, a missing DLL, or inadequate permissions.
    - All tests in the Automatic Runner that were run have finished.
    For Functional test sets:
    - Send email in the event of any test with status "Failed".
  - To. The user who should receive the email. Enter a valid email address or user name. Alternatively, click the To button to select users.
    For Functional test sets, the default email sender is the timeslot creator.
  - <Message area>. The text for the email ALM sends to the specified user.
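The trigger logic reduces to a per-event check against the options selected above. A minimal Python sketch, purely illustrative; the event names and the Default/Functional trigger sets are hypothetical stand-ins, not ALM identifiers:

```python
# Hypothetical event names, used only to illustrate the rules above.
DEFAULT_TRIGGERS = {"test_failed", "environmental_failure", "runner_finished"}
FUNCTIONAL_TRIGGERS = {"test_failed"}

def should_send_email(test_set_kind, event, selected_events):
    """Notify only if the event is supported for this kind of test set
    and was selected in the Notification section."""
    supported = DEFAULT_TRIGGERS if test_set_kind == "Default" else FUNCTIONAL_TRIGGERS
    return event in supported and event in selected_events

print(should_send_email("Default", "runner_finished", {"runner_finished"}))  # True
```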
To specify who receives an execution summary after test set execution:
- From the test set tree, select the target test set, and go to the Automation tab.
- In the Execution Summary section, configure the following:
  - Send summary of results after test set execution. Sends a summary report of test results in an email to specified users after test set execution is complete.
  - Select Columns. Opens the Select Columns dialog box, enabling you to specify which test run fields are displayed in the execution summary.
    Available for: Default test sets only.
  - To. Enter a valid email address or user name. Alternatively, click the To button to select users.
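As an illustration of what such a summary contains, the sketch below tallies final run statuses into a short report body; the record fields are hypothetical, not ALM's data model:

```python
from collections import Counter

def summarize_runs(runs):
    """Tally final run statuses into a short, email-ready summary."""
    counts = Counter(run["status"] for run in runs)
    lines = [f"{status}: {count}" for status, count in sorted(counts.items())]
    return "\n".join(lines)

runs = [{"status": "Passed"}, {"status": "Failed"}, {"status": "Passed"}]
print(summarize_runs(runs))   # Failed: 1, then Passed: 2
```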
Manage remote hosts for Default test sets
You can create a list of hosts for remote Default test set execution. You can also organize hosts into groups to be used for a specific project.
To manage remote hosts for Default test sets:
- In the Test Lab module, from the Test Sets menu, select Host Manager.
- To add a host to the Hosts list, click New Host. Provide the host name and description.
- To add hosts from the Network Neighborhood directory, from Add All on the Network, select one of the following options:
  - Add All on the Network. ALM scans the Network Neighborhood directory and inserts each host found into the Hosts list.
  - Synchronize Hosts in the Project with Hosts on Net. Synchronizes the hosts in the Hosts list with the hosts in the Network Neighborhood directory. ALM adds hosts found in the Network Neighborhood directory and deletes hosts that were not found there.
- To create a host group, click New Host Group. Provide the host group name and description.
- To add a host to a host group: from the Hosts grid, select the target host; from the Groups list, select the target host group; and click the Add Host to Host Group arrow.
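Host setup can also be scripted over ALM's OTA COM API on Windows. The following is a minimal sketch only, assuming pywin32 and assuming TDConnection exposes a HostFactory following the usual OTA AddItem/Post factory pattern; verify these names against your OTA API reference before relying on them:

```python
import win32com.client  # requires pywin32; OTA is Windows-only

# Connect to an ALM project over the OTA COM API.
td = win32com.client.Dispatch("TDApiOle80.TDConnection")
td.InitConnectionEx("http://almserver:8080/qcbin")   # your server URL
td.Login("alm_user", "alm_password")
td.Connect("MY_DOMAIN", "MY_PROJECT")

# Assumption: HostFactory/AddItem/Name/Description follow the standard
# OTA factory pattern; confirm against the OTA API reference.
host = td.HostFactory.AddItem(None)
host.Name = "lab-host-01"
host.Description = "Remote host for Default test set execution"
host.Post()

td.Disconnect()
td.Logout()
td.ReleaseConnection()
```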
Specify host criteria for test instances in Functional test sets
Note: This is not available for Default test sets.
You allocate a testing host for a test instance in a Functional test set by specifying criteria in a host reservation. You can reserve a particular host from the project host pool. Alternatively, you can provide ALM with criteria from which to dynamically select hosts from the host pool allocated to your project. Host criteria include host purpose, location, and attributes.
- Select a test instance.
- In the Test Instance Details dialog box, choose a set of criteria from the options listed in the Testing Host field.
For details about testing hosts in ALM, and managing testing hosts in Lab Management, see Manage lab resources.
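Conceptually, dynamic allocation filters the project's host pool by the requested criteria. A minimal illustrative sketch follows; the host-record fields (purpose, location, attributes) mirror the criteria named above but are hypothetical, not ALM's data model:

```python
def matching_hosts(pool, purpose=None, location=None, attributes=frozenset()):
    """Return every host in the pool that satisfies all requested criteria."""
    matches = []
    for host in pool:
        if purpose is not None and host["purpose"] != purpose:
            continue
        if location is not None and host["location"] != location:
            continue
        if not set(attributes) <= set(host["attributes"]):
            continue
        matches.append(host)
    return matches

pool = [
    {"purpose": "UFT", "location": "London", "attributes": ["Chrome"]},
    {"purpose": "UFT", "location": "Paris", "attributes": ["Chrome", "Java"]},
]
print(matching_hosts(pool, purpose="UFT", attributes={"Java"}))  # the Paris host
```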
Configure execution settings for a test instance
Note: This is not available for Performance test sets.
You can view or set the test parameter values for a manual or automated test instance. You can choose a remote host on which to run the manual or automated test instance. You can also view and edit the On Failure rules for an automated test instance.
- In the Execution Grid or the Execution Flow tab, right-click a test and select Test Instance Details.
- In the Test Instance Details dialog box, click the Execution Settings tab.
- To view or set actual parameter values for manual test instances, click the Parameters tab.
The Parameters tab shows all the test parameters that are used in the test steps, including parameters of called tests that have not yet been assigned actual values. Actual values that you assign for parameters in the test instance are used for all test runs of the test instance.
User interface elements are described below:
  - Copy Default Values. Uses a parameter's default value as its actual value. Select a parameter and click the Copy Default Values button.
  - Actual Value. The value that is used for the parameter during the test run. To add or modify the actual value, enter the value in the Actual Value column.
  - Default Value. The default value of the selected parameter.
  - Description. A description of the parameter.
  - Parameter Name. The parameter name.
- To set parameter values and other configuration options for automated test instances, click the Automated tab.
  - <Automated test configuration options>. You may be able to set additional configuration options, depending on the type of automated test you are running. For details, refer to the user guide for your automated test.
  - Parameter Value. Displays the value of each parameter. Under Value, you can edit parameter values. Your changes are implemented in the next test run.
- To view and edit the On Failure rules for an automated test instance, click the Run Event tab.
  Note: These settings override the default On Failure rules you set for the test set. For details about the default rules, see Set notification and On Failure rules.
  - Cleanup test before rerun. Specifies the cleanup test ALM runs before each rerun of the selected test.
  - Maximum test reruns. Specifies the number of times an automated test should be rerun upon failure.
  - On failure. Specifies the action for ALM to take when a test run fails. Options include:
    - Do nothing.
    - Stop the test set.
      Note: This option is not available for Functional test sets.
    - Rerun the test. Runs the test again, up to the maximum number of times specified in the Maximum test reruns box.
- (For BPT tests only) To view and edit the run-time values for each parameter of each iteration of a business process test instance, click the Test Iterations tab.
  - Add Iteration. Adds an iteration for the entity (component, group, test, or flow).
  - Delete Iteration. Deletes the selected iteration for the entity (component, group, test, or flow).
  - Select Iterations. Opens the Select Iterations dialog box, which enables you to select which of the defined iterations you want to run during the test run. You can specify one, all, or a range of iterations.
  - Import/Export. Enables you to import component parameter values for iterations from a .csv (comma-separated values) file, and to save component parameter values to a .csv file. See the sketch after this procedure.
  - <values>. Displays the actual value of the input parameter. You can modify the value by clicking the arrow in the relevant cell, which opens the Set Value dialog box. If no value is specified, the default value for that parameter is used. If no default value is specified, no value is displayed and the entity may not run correctly. Values are saved and compared as strings, but can be specified according to different value types. Test or flow parameters (parameters whose values are taken from the business process test or flow) are shown in { } brackets. This instructs ALM to treat the parameter as a parameter and not as a fixed value.
  - <parameter columns>. Displays the name of each parameter in the displayed iteration.
  - Iteration # columns/rows. Displays the current run-time values for each parameter in each iteration.
  - Parameter Description. Displays the description of the parameter and its original source entity (business component, test, or flow). Parameter descriptions are initially entered in the module in which they were created (the Business Components module or the Test Plan module), or in UFT One for automated components.
  - Value columns. Displays the run-time value for each parameter in the displayed iteration.
- To view dynamic data settings for business process and UFT One test configurations, click the Data tab.
  This tab is available for test configurations that access dynamic data. For details, see Associate dynamic data with test configurations. Some information in this tab is read-only.
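The Import/Export file is plain CSV with one column per parameter and one row per iteration. Below is a minimal Python sketch of preparing such a file; the column layout and parameter names here are hypothetical examples, so match your file to what ALM exports for your own test:

```python
import csv

# Hypothetical parameter names; each row supplies values for one iteration.
iterations = [
    {"Username": "alice", "Password": "secret1"},
    {"Username": "bob", "Password": "secret2"},
]

with open("iterations.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["Username", "Password"])
    writer.writeheader()          # parameter names become the header row
    writer.writerows(iterations)  # one line per iteration
```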
Schedule test runs
You can specify a date and time, and set conditions for executing a test instance in a test set.
Test run schedules overview
You can control the execution of test instances in a test set. Using the Execution Flow tab of a test set, you can specify a date and time, and set conditions for executing a test instance.
- Condition. A condition is based on the results of another specified test instance in the Execution Flow. By setting conditions, you can instruct the Test Lab module to postpone execution of the current test instance until the other specified test instance has either finished running or passed.
- Date and time. You can specify a test instance to run at any time or at a specified time.
- Order. Set the sequence in which to execute the test instances.
To schedule a test run:
- From the test set tree, select the target test set and click the Execution Flow tab.
- To specify the execution conditions of a test instance, right-click the test instance and select Test Run Schedule. In the Execution Condition tab, the Test runs only if grid lists all existing conditions of the test instance.
  To add a condition, click New Execution Condition and specify the following:
  - Test. Select the test instance on which you want the current test instance to be dependent.
  - is. Specify the execution condition. Available options include:
    - Finished. Executes the current test instance only if the specified test instance has finished executing.
    - Passed. Executes the current test instance only if the specified test instance has finished executing and has passed.
  - Comments. Comments regarding the condition.
  Alternatively, you can add a condition directly in the execution flow. Click a test icon (not the test name) and drag the arrow to another test. By default, the execution condition is set to Finished. To change the condition, double-click the condition arrow and select Passed.
- To specify the execution date and time of a test instance, right-click the test instance and select Test Run Schedule. In the Time Dependency tab, specify the following:
  Note: Alternatively, you can schedule the test run date and time by clicking the Add Time Dependency To Flow button, and linking the arrow from the icon to a test instance. Double-click the icon to set the time.
  - Run at any time. The test instance runs at a non-specific time.
  - Run at specified time. The test instance runs at a specific time. You can specify the following:
    - Date. Indicates the date for running the test instance.
    - Time. Indicates the time for running the test instance.
- To order test instances, select the test icons of the test instances, right-click any selected test instance, and select Order Test Instances. Use the up and down arrows to change the execution order of these test instances.
  Alternatively, you can use the Order Test Instances button in the Execution Grid tab to change the order.
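The condition semantics amount to a readiness check over dependency statuses. The following Python sketch models the Finished/Passed rules described above; the data shapes are hypothetical, purely for illustration:

```python
def ready_to_run(instance, conditions, statuses):
    """Return True when every execution condition of the instance holds.

    conditions maps an instance name to (dependency, requirement) pairs,
    where requirement is "Finished" or "Passed". statuses maps finished
    instances to their final status; still-running instances are absent.
    """
    for dependency, requirement in conditions.get(instance, []):
        final = statuses.get(dependency)
        if final is None:                    # dependency has not finished
            return False
        if requirement == "Passed" and final != "Passed":
            return False
    return True

conditions = {"login_test": [("setup_test", "Passed")]}
print(ready_to_run("login_test", conditions, {"setup_test": "Passed"}))  # True
print(ready_to_run("login_test", conditions, {}))                        # False
```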
View execution flow diagram
You can change the way the Execution Flow is displayed. This includes zooming in and out of the diagram, rearranging the tests in a hierarchical layout, refreshing the diagram, and displaying full test names in the diagram. You can also copy the diagram to the Clipboard.
- Zoom in and out. Click Zoom In, Zoom Out, or Fit in Window to change the magnification level of the Execution Flow.
- Arrange tests in a hierarchical layout. Click Arrange Layout to arrange the tests in the Execution Flow diagram in a hierarchical layout, enabling you to view relationships between different tests.
- Display full test names. To display full test names in the diagram, right-click the blank area of the diagram and select Display Full Test Names.
- Test run schedules legend. The legend shows how the test instances in a test set are scheduled to run.
Draft runs
If you set a test run as a draft run, the run results are ignored. This section provides details about draft runs.
Draft runs overview
Draft runs enable you to try tests while they are still in development, or after they have been modified. For example, you may want to test that each step description is formulated correctly, or try only a part of a large test script.
When you set a test instance as a draft run:
- The outcome of the run does not impact the execution status of the test, the status of the test instance, or the coverage status.
- ALM ignores draft runs when calculating the remaining number of test instances to run, and when displaying results in coverage, progress, and live analysis graphs.
Note: Setting a test run as a draft requires the appropriate user permissions. For details, see Manage user groups and permissions.
Set test runs as draft runs
You can set a test run as a draft in the following ways:
- Before a test run. You can mark a test that you are running manually as a draft run before performing the test. In the Manual Runner: Run Details page, set the value of the Draft Run field to Y.
- After a test run. You can mark any test run as a draft by modifying the Draft Run field for the run. For details on accessing run details, see Test Runs.
When you change the Draft Run value for a run, ALM recalculates all relevant statistics, and updates relevant status fields. Graphs display updated results.
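The effect on statistics can be pictured as filtering draft runs out before anything is calculated. A minimal Python sketch with hypothetical record fields (draft, status), purely for illustration:

```python
def pass_rate(runs):
    """Compute a pass rate the way the overview describes: draft runs
    are excluded before any statistic is calculated."""
    counted = [run for run in runs if not run.get("draft", False)]
    if not counted:
        return None                 # no non-draft runs to report on
    passed = sum(1 for run in counted if run["status"] == "Passed")
    return passed / len(counted)

runs = [{"status": "Failed", "draft": True},   # ignored: draft run
        {"status": "Passed"}]
print(pass_rate(runs))                         # 1.0
```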