Configure your parallel test runs using a UI
Use the Parallel Runner graphical user interface to configure your parallel test runs. You can also define conditions that synchronize between test runs.
UFT One's Parallel Runner UI provides a visual way to configure a parallel test run plan and create a JSON-formatted configuration file. You can specify multiple types of tests to run in parallel and define the environment, data table, and parameters to use for each test run.
You can open the Parallel Runner UI from the Windows Start menu or from UFT One's Tools menu.
Note: Close UFT One before you start your parallel test run using the Parallel Runner UI.
To configure your parallel test run:
Open the UFT One Parallel Runner UI.
Define test run information, including the ID, type, environment information, data table, and parameters for each test run.
Define conditions to synchronize between test runs.
For details, see Use conditions to synchronize your parallel run.
Click the icons in the bottom-left corner to define a path for saving the parallel test report and to specify a title for the report.
For information about how to start testing in parallel by referencing a .json file from a command line, see Step 2 in Manually create a .json file to define test run information.
Note: You can drag and drop an existing .json configuration file to the Parallel Runner UI to modify your test run configuration.
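For reference, a minimal configuration file of the kind the Parallel Runner UI creates might look like the following sketch. The exact schema depends on your UFT One version, and the key names shown here (`reportPath`, `parallelRuns`, `test`, `env`) are illustrative assumptions rather than a verified schema:

```json
{
  "reportPath": "C:\\ParallelReports",
  "parallelRuns": [
    {
      "test": "C:\\Tests\\GUIWebTest",
      "env": { "browser": "Chrome" }
    },
    {
      "test": "C:\\Tests\\GUIWebTest",
      "env": { "browser": "Firefox" }
    }
  ]
}
```

Once saved, a file like this can be dragged onto the Parallel Runner UI to modify the configuration, or referenced from a command line as described in the topic linked above.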
To start your parallel test run, click the Run button in the bottom-right corner.
Click the gear icon to choose whether to open the report during the test run and to define the interval at which the report refreshes.
To stop parallel execution, press Ctrl+C on the keyboard.
Any test with a status of Running or Pending (that is, any test that has not yet finished) stops immediately, and no run results are generated for these canceled tests.
Define the following information for each test run in the Parallel Runner UI:

| Column | Description |
|---|---|
| ID | The ID of a test. Click the ID column of each row and enter an ID. Each ID must be unique; IDs are used when you specify conditions for test runs. |
| Test Location | The location of a test. Click the Test Location column of each row, and then click the folder icon to select the test file. |
| Test Type | The test type: GUI-Web Test, GUI-Mobile Test, GUI-Java Test, or API Test. Click the Test Type column of each row and specify the type for each test. |
| Environment | The environment information for the test run. This is mandatory if you set Test Type to GUI-Web Test or GUI-Mobile Test. |
| Data Table | If your test uses a data table, click the Data Table column of each row, and then click the folder icon to select the corresponding data table. |
| Parameters | Click View Details to expand the Device Details section and define test parameters. The Number, Boolean, and String data types are supported. |
| Run Name | Specify a test run name for each test. |

Note: For API tests, you can only define the ID, Test Location, Test Type, and Run Name.
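These settings correspond to per-run entries in the .json configuration file that the UI generates. As a hedged sketch only (key names such as `id`, `dataTable`, and `parameters` are illustrative assumptions, not a verified schema), a single run entry with a Number, a Boolean, and a String parameter might look like:

```json
{
  "id": "Test1",
  "test": "C:\\Tests\\GUIWebTest",
  "env": { "browser": "Chrome" },
  "dataTable": "C:\\Tests\\Data\\Default.xlsx",
  "parameters": {
    "retryCount": 3,
    "isDebugRun": false,
    "userName": "tester1"
  }
}
```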
To synchronize and control your parallel run, you can create dependencies between test runs and control the start time of the runs.
You can create the following types of conditions:
- Simple conditions, which contain one or more wait statements.
- Combined conditions, which contain multiple conditions joined by an AND/OR operator that determines whether all, or only one, of the conditions must be met.
To configure conditions for a test run:
In a row, click View Details to expand the Conditions section.
Specify the condition syntax.
Select Global after Wait, and specify the time to wait before starting the test run.
Select a test ID after Wait, and specify either the status that the specified run must reach before this test run starts, or the time to wait before starting the test run.
Click the icon to add an operator. You can add one of the following operators:
- AND. All of the conditions following the operator must be met before the test run begins.
- OR. The test run begins as soon as one of the conditions following the operator is met.
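Putting the wait statements and operators together, the conditions for a run can be sketched as follows. These lines paraphrase the selections you make in the UI (and the Passed status name is an assumption); they are not literal syntax:

```
Wait Global 30                       ' start 30 seconds after the parallel run begins
Wait Test1 Passed                    ' start once the run with ID Test1 reaches the Passed status
Wait Test1 Passed OR Wait Global 60  ' combined: start as soon as either condition is met
```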