Run UFT One tests in parallel using a command line

Use a command line to trigger the ParallelRunner tool to run multiple UFT One tests in parallel.

Run tests in parallel

The ParallelRunner tool is located in the <UFT One installation folder>/bin directory.

Use a command line to trigger the tool for parallel testing:

  1. Close UFT One. ParallelRunner and UFT One cannot run at the same time.

  2. Run your tests in parallel from the command line interface, configuring your test runs using one of the following methods: define the test run information directly in the command line, or reference a .json configuration file that defines multiple test runs.

Tests run using ParallelRunner are run in Fast mode. For details, see Fast mode.



Define test run information via command line

Relevant for GUI Web and Mobile tests and API tests

When running tests from the command line, run each test separately with individual commands.

For GUI tests, each command runs the test multiple times if you provide different alternatives to run. You can provide alternatives for the following: 

  • Browsers
  • Devices
  • Data tables
  • Test parameters

To run tests from the command line, use the syntax defined in the sections below. Use semicolons (;) to separate the alternatives.

ParallelRunner runs the test as many times as needed to cover all combinations of the alternatives you provide.

For example: 

Run the test three times, once for each browser:

ParallelRunner -t "C:\GUIWebTest" -b "Chrome;IE;Firefox"

Run the test four times, once for each data table on each browser:

ParallelRunner -t "C:\GUIWebTest" -b "Chrome;Firefox" -dt ".\Datatable1.xlsx;.\Datatable2.xlsx"

Run the test eight times, once for each combination of browser, data table, and test parameter set:

ParallelRunner -t "C:\GUIWebTest" -b "Chrome;Firefox" -dt ".\Datatable1.xlsx;.\Datatable2.xlsx" -tp "param1:value1, param2:value2; param1:value3, param2:value4"
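The total number of runs is the product of the sizes of the alternative sets: the last command above yields 2 browsers × 2 data tables × 2 parameter sets = 8 runs. The following Python sketch (illustrative only, not part of ParallelRunner) shows how the combinations expand:

```python
from itertools import product

# Alternative sets from the last command above (illustrative values).
browsers = ["Chrome", "Firefox"]
data_tables = [r".\Datatable1.xlsx", r".\Datatable2.xlsx"]
param_sets = ["param1:value1,param2:value2", "param1:value3,param2:value4"]

# ParallelRunner runs the test once per combination of alternatives.
combinations = list(product(browsers, data_tables, param_sets))
print(len(combinations))  # 8
```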

Tip: Alternatively, reference a .json file with multiple tests configured in a single file. In this file, separately configure each test run. For details, see Manually create a .json file to define test run information.

Web test command line syntax

From the command line, run each web test using the following syntax:

ParallelRunner -t <test path> -b <browser> -r <results path>

For example:

ParallelRunner -t "C:\GUIWebTest_Demo" -b "Chrome;IE;Firefox" -r "C:\parallelexecution"

If you specify multiple browsers, the test is run multiple times, one run on each browser.

Mobile test command line syntax

From the command line, run each mobile test using the following syntax:

ParallelRunner -t <test path> -d <device details> -r <results path>

For example:

ParallelRunner -t "C:\GUITest_Demo" -d "deviceID:HT69K0201987;manufacturer:samsung,osType:Android,osVersion:>6.0;manufacturer:motorola,model:Nexus 6" -r "C:\parallelexecution"


If you specify multiple devices, the test is run multiple times, one run on each device.

Wait for an available device

You can instruct the ParallelRunner to wait for an appropriate device to become available instead of failing a test run immediately if no device is available. To do this, add the -we option to your ParallelRunner command.

API test command line syntax

From the command line, run each API test using the following syntax:

ParallelRunner -t <test path> -r <results path>

For example:

ParallelRunner -t "C:\APITest" -r "C:\API_TestResults"

Command line options

Use the following options when running tests directly from the command line.

  • Option names are not case sensitive.

  • Use a dash (-) followed by the option abbreviation, or a double-dash (--) followed by the option's full name.

-t

--test

String. Required. Defines the path to the folder containing the UFT One test you want to run.

If your path contains spaces, you must surround the path with double quotes (""). For example: -t ".\GUITest Demo"

Multiple tests are not supported in the same path.

-b

--browsers

String. Optional. Supported for web tests.

Define multiple browsers using semicolons (;). ParallelRunner runs the test multiple times, to test on all specified browsers.

Supported values include:

Chrome, Chrome_Headless, ChromiumEdge, Edge, Firefox, Firefox64, IE, IE64, Safari

Notes:

  • This value overrides any settings defined in the Record and Run Settings, and any browser parameterization configured in the test. The WebUtil.LaunchBrowser method is not affected.

  • We recommend using consistent casing for the browser name, to avoid separate columns in the run result report for the same browser type. For more details, see Sample ParallelRunner response and run results.

  • Safari is supported via testing on a remote Mac machine. For more details, see Connect to a remote Mac computer.

-d

--devices

String. Optional. Supported for mobile tests.

Define multiple devices using semicolons (;). ParallelRunner runs the test multiple times, to test on all specified devices.

Define devices using a specific device ID, or find devices by capabilities, using the following attributes in this string:

  • deviceID. Defines a device ID for a specific device in the Digital Lab device lab. This attribute cannot be used with other device capability attributes.

  • manufacturer. Defines a device manufacturer.

  • model. Defines a device model.

  • osType. Defines a device operating system.

    One of the following: Android, iOS, Windows Phone.

    If this value is not defined, UFT One uses the osType defined in the test's Record and Run settings as a default.

  • osVersion. Defines a device operating system version. One of the following:

    • Any. Leaves the version number undefined.

    • A specific version. For example: 10.0

    • A boolean operator and version value. For example: >10.0, <10.0, >=10.0, or <=10.0

Note: Do not add extra spaces around the commas, colons, or semicolons in this string. We recommend always surrounding the string in double quotes ("") to support any special characters.

For example:

-d "manufacturer:apple,osType:iOS,osVersion:>=11.0"
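If you build these device strings from a script, a small helper can keep the comma/colon/semicolon formatting consistent. This Python sketch is illustrative only and is not part of UFT One; the attribute names are the ones documented above:

```python
def format_devices(devices):
    # Build a ParallelRunner -d argument from a list of capability dicts,
    # e.g. {"manufacturer": "samsung", "osType": "Android", "osVersion": ">6.0"}.
    # No extra spaces are added around commas, colons, or semicolons,
    # as required by the -d syntax.
    return ";".join(
        ",".join(f"{name}:{value}" for name, value in device.items())
        for device in devices
    )

print(format_devices([
    {"deviceID": "HT69K0201987"},
    {"manufacturer": "samsung", "osType": "Android", "osVersion": ">6.0"},
]))
# deviceID:HT69K0201987;manufacturer:samsung,osType:Android,osVersion:>6.0
```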

-dt

--dataTables

String. Optional. Supported for web and mobile tests.

Defines the path to the data table you want to use for this test. UFT One uses the table you provide instead of the data table saved with the test.

Define multiple data tables using semicolons (;). ParallelRunner runs the test multiple times, to test with all specified data tables.

If your path contains spaces, you must surround the path with double quotes ("").

For example: -dt ".\Data Tables\Datatable1.xlsx;.\Data Tables\Datatable2.xlsx"

-o

--output

String. Optional. Determines the result output shown in the console during and after the test run.

Supported values include:

  • Normal. (Default) Run output is automatically updated in the console.

    For example, the run status changes from Pending to Running, and then to Pass or Fail.

  • Static. The report path is displayed at the beginning of the test run, and then the run status is shown when the test is complete.

  • Disable. No output is shown in the console at all.

-r

--reportPath

String. Optional. Defines the path where you want to save your test run results. By default, run results are saved at %Temp%\TempResults.

If your string contains spaces, you must surround the string with double quotes (""). For example: -r "%Temp%\Temp Results"

-rl

--report-auto-launch

Optional. Include in your command to automatically open the run results when the test run starts. Use the -rr (--report-auto-refresh) option to determine how often the results are refreshed.

For more details, see Sample ParallelRunner response and run results.

-rn

--reportName

String. Optional. Defines the report name, which is displayed in the browser tab when you open the parallel run's summary report. By default, the name is Parallel Report.

Customize this name to help distinguish between runs when you open multiple reports at once. UFT One also adds a timestamp to each report name for this purpose.

If your string contains spaces, you must surround the string with double quotes (""). For example: -rn "My Report Title"

-rr

--report-auto-refresh

Optional. Determines how often an open run results file is automatically refreshed, in seconds.

For example: -rr 5

Default, if no integer is provided: 5 seconds.

-tp

--testParameters

String. Optional. Supported for web and mobile tests.

Defines the values to use for the test parameters. Provide a comma-separated list of parameter name-value pairs.

For test parameters that you do not specify, UFT One uses the default values defined in the test itself.

For example: -tp "param1:value1,param2:value2"

Define multiple test parameter sets using semicolons (;). ParallelRunner runs the test multiple times, to test with all specified test parameter sets.

For example: -tp "param1:value1, param2:value2; param1:value3, param2:value4"

If your parameter value includes commas (,), semicolons (;), or colons (:), surround the value with a backslash followed by a double quote (\").

For example, if the value of TimeParam is 10:00;00, use:

-tp "TimeParam:\"10:00;00\""

Note: You cannot include double quotes (") inside a parameter value using the command line options. Instead, use a JSON configuration file for your ParallelRunner command.
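When building the -tp string programmatically, values containing separator characters must be wrapped as described above. The following Python helper is an illustrative sketch, not part of UFT One:

```python
def format_test_params(params):
    # Build one ParallelRunner -tp parameter set from a dict of
    # name/value pairs. Values containing commas, semicolons, or
    # colons are wrapped in backslash-double-quote as required.
    # Double quotes inside values are not supported on the command
    # line; use a .json configuration file for those.
    parts = []
    for name, value in params.items():
        if any(sep in value for sep in ",;:"):
            value = '\\"' + value + '\\"'
        parts.append(f"{name}:{value}")
    return ",".join(parts)

print(format_test_params({"TimeParam": "10:00;00"}))
# prints: TimeParam:\"10:00;00\"
```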

-we

--wait-for-environment

Optional. Supported for mobile tests.

Include in your command to instruct each mobile test run to wait for an appropriate device to become available instead of failing immediately if no device is available.


Manually create a .json file to define test run information

Relevant for GUI Web, Mobile, and Java tests, and for API tests

Manually create a .json file to specify multiple tests to run in parallel and define test information for each test run. Then reference the .json file directly from the command line.

To create a .json file and start parallel testing with the file:

  1. In a .json file, add separate lines for each test run. Use the .json options to describe the environment, data tables, and parameters to use for each run.

    Tip: You can use the ParallelRunner UI tool to automatically create a .json file. For details, see Configure your parallel test run.

  2. For GUI mobile testing, you can include the Digital Lab connection settings in the .json file together with the test information. For how to define the connection settings, see Sample .json file.

    The .json file takes precedence over the connection settings defined in Tools > Options > GUI Testing > Mobile in UFT One.

  3. Reference the .json file in the command line using the following command line option:

    -c

    String. Defines the path to a .json configuration file.

    Use the following syntax to start parallel testing:

    ParallelRunner -c <.json file path>

    For example:

    ParallelRunner -c parallelexecution.json

  4. Add additional command line options to configure your run results. For details, see -o, -rl, and -rr.

    These are the only command line options you can use together with a .json configuration file. Any other command line options are ignored.

Sample .json file

Refer to the following sample file. In this file, the last test is run on both a local desktop browser and a mobile device. See also Sample .json configuration file with synchronization.
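The sample file itself is not reproduced in this text version of the topic. The following minimal reconstruction is illustrative only: it is modeled on the synchronization sample later in this topic, and the "web"/"browser" nesting is an assumption — verify field placement against the option reference below. The last test is listed twice, once with a desktop browser and once with a mobile device:

```json
{
  "reportPath": "C:\\parallelexecution",
  "parallelRuns": [{
    "test": "C:\\Users\\qtp\\Documents\\UFT One\\GUITest_Mobile",
    "env": {
      "mobile": {
        "device": {
          "manufacturer": "samsung",
          "osType": "Android"
        }
      }
    }
  }, {
    "test": "C:\\Users\\qtp\\Documents\\UFT One\\GUIWebTest",
    "env": {
      "web": {
        "browser": "Chrome"
      }
    }
  }, {
    "test": "C:\\Users\\qtp\\Documents\\UFT One\\GUIWebTest",
    "env": {
      "mobile": {
        "device": {
          "osType": "Android"
        }
      }
    }
  }]
}
```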

Option reference for creating a .json file

Define the following options (case sensitive) in a .json file:

reportPath

String. Defines the path where you want to save your test run results.

By default, run results are saved at %Temp%\TempResults.

reportName

String. Defines the report name, which is displayed in the browser tab when you open the parallel run's summary report. By default, the report name is the same as the .json file name.

Customize this name to help distinguish between runs when you open multiple reports at once. UFT One also adds a timestamp to each report name for this purpose.

test

String. Defines the path to the folder containing the UFT One test you want to run.

reportSuffix

String. Defines a suffix to append to the end of your run results file name. For example, you may want to add a suffix that indicates the environment on which the test ran.

Example: "reportSuffix":"Samsung6_Chrome"

Default: "[N]". This indicates the index number of the current result instance, if there are multiple results for the same test in the same parallel run.

Notes: 

  • The run results file name consists of <test name>_<reportSuffix> and must be unique. Therefore, do not provide the same suffix for multiple runs of the same test.
  • When deciding on a suffix, keep in mind that if the resulting file name is longer than 22 characters, the file name will be truncated in the output that ParallelRunner provides in the command window.

browser

String. Defines the browser to use when running the web test.

Supported values include:

Chrome, Chrome Headless, ChromiumEdge, Edge, IE, IE64, Firefox, Firefox64, Safari

Notes:

  • This value overrides any settings defined in the Record and Run Settings, and any browser parameterization configured in the test. The WebUtil.LaunchBrowser method is not affected.
  • We recommend using consistent casing for the browser name, to avoid separate columns in the run result report for the same browser type. For more details, see Sample ParallelRunner response and run results.
  • Safari is supported via testing on a remote Mac machine. For more details, see Connect to a remote Mac computer.
  • UFT One 23.4 and earlier: Edge refers to Microsoft Edge Legacy.

    UFT One 24.2 and later: Edge refers to Chromium-based Microsoft Edge. The ChromiumEdge value is supported for backward compatibility only.

deviceID

String. Defines a device ID for a specific device in the Digital Lab device lab.

Note: When both deviceID and device capability attributes are provided, the device capability attributes will be ignored.

manufacturer

String. Defines a device manufacturer.

model

String. Defines a device model.

osVersion

String. Defines a device operating system version.

One of the following:

  • Any. Leaves the version number undefined.

  • A specific version. For example: 10.0

  • A boolean operator and version value. For example: >10.0, <10.0, >=10.0, or <=10.0

osType

Defines a device operating system.

One of the following:

  • Android
  • iOS
  • Windows Phone

dataTable

String. Supported for GUI Web, Mobile, and Java tests.

Defines the path to the data table you want to use for this test run. UFT One uses the table you provide instead of the data table saved with the test.

Place the dataTable definition in the section that describes the relevant test run.

testParameter

String. Supported for GUI Web, Mobile, and Java tests.

Defines the values to use for the test parameters in this test run. Provide a comma-separated list of parameter name-value pairs.

For example: 

"testParameter": {
    "para1": "val1",
    "para2": "val2"
}

Place the testParameter definition in the section that describes the relevant test run.

If your parameter value includes a backslash (\) or double quote ("), add a backslash before the character. For example: "para1":"val\"1" assigns the value val"1 to the parameter para1.

For test parameters that you do not specify, UFT One uses the default values defined in the test itself.

waitForEnvironment

Set this option to true to instruct each mobile test run to wait for an appropriate device to become available instead of failing immediately if no device is available.

Hostname / Port

The IP address and port number of your Digital Lab server.

Default port: 8080

AuthType

The authentication mode to use for connecting to Digital Lab.

Available values:

  • 1: Access key authentication. Authenticate using access keys you receive from Digital Lab.

  • 0: Basic authentication. Authenticate using user name and password.

Username / Password

The login credentials for the Digital Lab server.

ClientId / SecretKey / TenantId

The access keys received from Digital Lab.


Use conditions to synchronize your parallel run

To synchronize and control your parallel run, you can create dependencies between test runs and control the start time of the runs. (Supported only when using a .json file in the ParallelRunner command).

Using conditions, a test run can do one or more of the following: 

  • Wait for a number of seconds before beginning to run.
  • Run after a different test runs.
  • Begin only after a different test runs and reaches a specific status.

For each test run defined in the .json file, create conditions that control when the run starts. Build the conditions under the test section, using the elements described below.

You can create the following types of conditions: 

  • Simple conditions, which contain one or more wait statements.
  • Combined conditions, which contain multiple conditions and an AND/OR operator, which specifies whether all conditions must be met.

testRunName

The name of the test run. Used by other test runs in a waitForTestRun or waitForRunStatuses condition.
condition

The main element containing all of the wait conditions used to control this test's run.

Can contain wait options or nested conditions elements.

conditions

An element that contains an array of one or more simple conditions or additional combined conditions.

conditions elements can be contained in combined conditions.

operator

Indicates whether the conditions in the conditions element following the operator must all be met.

Possible values: 

  • AND. All of the conditions following the operator must be met before the test run begins.
  • OR. The test run begins as soon as one of the conditions following the operator is met.

waitSeconds

Instructs ParallelRunner to wait the specified number of seconds before beginning this test run.

waitForTestRun

Instructs ParallelRunner to run this test run after the specified run ends. Specify the run to wait for by using its testRunName.

You can combine the waitForTestRun option with other options:

  • Use waitSeconds to wait a number of seconds after the specified run ends.
  • Use waitForRunStatuses to wait until the specified test reaches a specific status.

For example: The following condition instructs ParallelRunner to start test run 1 ten seconds after run 3 reaches one of the specified statuses.

"testRunName": "1",
"condition": {
    "waitForTestRun": "3",
    "waitSeconds": 10,
    "waitForRunStatuses": ["passed", "failed", "warning", "canceled"]
}

Note: When combining multiple wait options, the implied operation is AND.

waitForRunStatuses

Instructs ParallelRunner to run this test run after the specified run reaches a specific status.

  • Specify the run to wait for by using its testRunName.

  • Provide a string of one or more test statuses to wait for. The condition is met once the specified test reaches one of the specified statuses.

    For example: "waitForRunStatuses": ["passed", "failed", "warning", "canceled"]

Sample .json configuration file with synchronization

The following is a sample .json configuration file that provides a synchronized parallel run:

{
  "reportPath": "C:\\parallelexecution",
  "reportName": "My Report Title",
  "waitForEnvironment": true,
  "parallelRuns": [{
    "test": "C:\\Users\\qtp\\Documents\\UFT One\\GUITest_Mobile",
    "testRunName": "1",
    "condition": { 
          "operator": "AND",
          "conditions": [{
            "waitSeconds": 5
            }, {
              "operator": "OR", 
              "conditions": [{
                "waitSeconds": 10,
                "waitForTestRun": "2"
              }, {
                "waitForTestRun": "3",
                "waitSeconds": 10,
                "waitForRunStatuses": ["failed", "warning", "canceled"]
              }
            ] 
          }
        ]  
      },
    "env": {
      "mobile": {
        "device": {
          "osType": "iOS"
        }
      }
    }
  }, {
    "test": "C:\\Users\\qtp\\Documents\\UFT One\\GUITest_Mobile",
    "testRunName": "2",
    "condition": {
      "waitForTestRun": "3",
      "waitSeconds": 10
    },
    "env": {
      "mobile": {
        "device": {
          "osType": "iOS"
        } 
      }
     }
  }, {
    "test": "C:\\Users\\qtp\\Documents\\UFT One\\GUITest_Mobile",
    "testRunName": "3",
    "env": {
      "mobile": {
        "device": {
          "deviceID": "63010fd84e136ce154ce466d79a2ebd357aa5ce2"
        }
      }
    }
  }]
}

When running ParallelRunner with this .json file, the following happens:

  • Test run 3 waits for device 63010fd84e136ce154ce466d79a2ebd357aa5ce2.

  • Test run 2 waits 10 seconds after test run 3 ends, and then waits for an available iOS device.

  • Test run 1 first waits for 5 seconds after the ParallelRunner command is run.

    Then it waits 10 seconds more after run 2 ends or run 3 reaches a status of failed, warning, or canceled.

    Then it waits until an iOS device is available.


Stop the parallel execution

To quit the test runs from the command line, press CTRL+C on the keyboard.

  • Any test with a status of Running or Pending, indicating that it has not yet finished, will stop immediately.

  • No run results are generated for these canceled tests.


Sample ParallelRunner response and run results

The following code shows a sample response displayed in the command line after running tests using ParallelRunner.

>ParallelRunner -t ".\GUITest_Demo" -d "manufacturer:samsung,osType:Android,osVersion:Any;deviceID:N2F4C15923006550" -r "C:\parallelexecution" -rn "My Report Name"

Report Folder: C:\parallelexecution\Res1

ProcessId, Test, Report, Status

3328, GUITest_Demo, GUITest_Demo_[1], Running -

3348, GUITest_Demo, GUITest_Demo_[2], Pass

In this example, when the run is complete, the following result files are saved in the C:\parallelexecution\Res1 directory:

  • GUITest_Demo_[1]
  • GUITest_Demo_[2]

The parallelrun_results.html file shows results for all tests run in parallel. The report name in the browser tab is My Report Name.

The first row in the table indicates any rules defined by the ParallelRunner commands, and subsequent rows indicate the result and the actual environment used during the test.

Tip: For each test run, click the HTML report icon to open the specific run results. This is supported only when UFT One is configured to generate HTML run results.


Note: The report provides a separate column for each browser name, using a case-sensitive comparison for browser names. Therefore, separate columns would be provided for IE and ie.

