Use Azure DevOps Server to trigger a parallel testing task
GUI Web and Mobile tests only
This topic describes how to trigger a parallel testing task to run GUI Web and GUI Mobile tests in parallel from Azure DevOps Server (formerly known as TFS).
Note: This topic describes working with UFT One Azure DevOps extension version 24.2.*. To benefit from the latest functionality, we recommend updating existing tasks to version 24.2.*. When using this extension version, make sure you have installed the corresponding UFT.zip file from the ADM-TFS-Extension GitHub repository.
Step 1: Create a pipeline
Note: If you are using TFS, skip this step.
In Azure DevOps Server:
- Create a build pipeline or a release pipeline, using the Empty job template.
Note: A build pipeline is the pipeline type created when you do not explicitly create a release pipeline.
- Select the agent pool that includes the agent on which you want to run your tests.
- In the pipeline variables, add a UFT_LAUNCHER variable. The variable's value should be the full path to the UFTWorking folder.
For more details, see the Microsoft Azure documentation.
Step 2: Add a task to your pipeline
Add a UFT One task and position the step correctly in the build order.
Note: If you are working with a release pipeline, add the task in the relevant stage, and then position the step in the deployment order.
- From the Task catalog, select the Test tab. A list of all available test tasks is displayed.
- From the Test tab, select the UFT One Parallel Test Run task and click Add. A new, empty task is added as part of your build pipeline.
Note: If you are working with a release pipeline, the task is added as part of your deployment process.
Step 3: Configure your build step
Provide the following information for your build step.
- Specify the display name.
By default, the Azure DevOps Server CI system uses a preset descriptor for the task. Edit this field to give your step a more meaningful name.
- Select a test type.
Depending on the selected test type, the controls and fields on the page vary.
- In the Tests field, enter the test, test batch file, or folder containing the tests to run. To run multiple tests, separate the test paths, names, or test result paths with commas (for example, c:\tests\test1,c:\tests\test2).
- For the GUI Web test type, select all browsers used in your tests.
- For the GUI Mobile test type, specify the following fields:
Devices. Specify the list of devices. Each line must contain a single device's information.
For each device, define the following fields: DeviceID, Manufacturer, Model, OSType, and OSVersion.
You can get the field values by running the Get Digital Lab Resources task (formerly named UFT Mobile Get Resources).
If a DeviceID value is provided for a device, the other fields are ignored.
Example:
DeviceID: "123456789", Manufacturer: "Samsung", Model: "SM-G920F", OSType: "ANDROID", OSVersion: "7.0"
DeviceID: "123456788", Manufacturer: "Samsung", Model: "SM-G920A", OSType: "ANDROID", OSVersion: "7.5"
Server. The address of your OpenText Functional Testing Lab server, in the format http[s]://<server name>:<port>.
Authentication type. Select the authentication mode to use for connecting to OpenText Functional Testing Lab:
- Basic authentication. Authenticate using a user name and password.
- Access key authentication. Authenticate using an access key you received from OpenText Functional Testing Lab.
User name and Password. If you selected Basic authentication, enter your login credentials for the OpenText Functional Testing Lab server.
Access key. If you selected Access key authentication, enter the access key you received from OpenText Functional Testing Lab.
Use proxy settings. Provide the following information if you choose to connect using a proxy:
Proxy server. The address of your proxy server, in the format <proxy server name>:<port>.
Use proxy credentials. Select if your proxy server requires authentication.
Proxy user name and password. The credentials used for authentication by the proxy server.
SSL enabled. Specify whether to enable SSL for the OpenText Functional Testing Lab connection.
Note: Select this option only when your OpenText Functional Testing Lab server is configured to communicate over SSL.
- Define the value format of the timestamp field used for reporting in the Extensions tab.
The default value is yyyy-MM-dd HH:mm:ss.
- Specify the amount of time (in seconds) to wait if there is a problem opening or running a test. If the field is left blank, there is no timeout.
- Select whether to generate a report which provides information on your tests' failed steps. After the run, the report is available in the Extensions tab.
Alternatively, specify a batch file that contains a list of tests and their parameters. A batch file enables you to:
- Specify multiple tests, or the same test several times with different parameters each time.
- Use the reportPath parameter to define a specific path for your test results.
The batch file should have an .mtbx extension and use the following syntax:
<Mtbx>
    <Test name="test1" path="c:\tests\APITest1">
        <Parameter name="A" value="abc" type="string"/>
        ...
    </Test>
    <Test name="test2" path="c:\tests\test2">
        <Parameter name="p1" value="123" type="int"/>
        <Parameter name="p4" value="123" type="float"/>
        ...
    </Test>
    <Test name="test3" path="c:\tests\APITest3" reportPath="c:\reports\APITest3"/>
</Mtbx>
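For example, to run the same test twice with different parameter values, list it as two Test entries pointing to the same path, each with a unique name. The following is a minimal sketch, using a hypothetical test path and parameter name:

<Mtbx>
    <Test name="LoginAsUser" path="c:\tests\LoginTest">
        <Parameter name="username" value="user1" type="string"/>
    </Test>
    <Test name="LoginAsAdmin" path="c:\tests\LoginTest">
        <Parameter name="username" value="admin" type="string"/>
    </Test>
</Mtbx>

Here, the name attribute distinguishes the two runs of the same test.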
Step 4: Upload test results to an Azure Storage location
Uploading the OpenText Functional Testing results to Azure Storage lets you access the results from the Azure DevOps portal after the test run.
Make sure you performed the steps described in Set up Azure Storage for your test results.
Then enter the following options in your UFT One Parallel Test Run build step:
Option | Value
---|---
Do you want to upload the UFT report to the storage account? | Select Yes.
Artifacts to upload | Select whether to upload only the HTML run results report, an archive of all the OpenText Functional Testing result files, or both.
Report file name | Accept the default file name, which is based on the pipeline name and build number, or enter a name of your choice.
Step 5: Configure the CI system control options
Configure the control options for your build step, including:
Option | Value
---|---
Enabled | Specify whether the step should be run as part of this build.
Continue on error | Instruct the CI system to stop or continue the build if there is an error in this step.
Step 6: Set your pipeline
Set your pipeline to run:
Build pipeline: Save and queue the pipeline.
Release pipeline: Create a release and deploy it.
When the pipeline runs, the tests run as part of the task you added.
Step 7: View parallel run results
After the test run, you can view the run results in the Extensions tab.
A visual report
In the Extensions tab of the run results, you can see a report including the following parts:
- The Run Summary shows the number of tests that ran and the percentage of each status.
- The UFT Report section shows the test name, test type, browser type/device info, and test run status for each test, as well as links to the OpenText Functional Testing report and archive, if they were uploaded to Azure Storage.
- If you selected the Generate 'Failed Tests' report option, the Failed Tests section shows a detailed breakdown of the failed steps in each test.
Note:
- If you are working with a release pipeline, these results are available at the Stage level.
- If you abort the job in the middle of the test run, the Extensions tab is sometimes not displayed. Even if the Extensions tab is available, the Failed Tests section is not shown, even if you selected the Generate 'Failed Tests' report option.
Retrieve run result files
- The detailed failure results are stored in JUnit format in the junit_report.xml file, located in the UFTWorking/res/Report_<build run number> folder. For an illustration of the JUnit format, see the sketch after this list.
- If the OpenText Functional Testing HTML report and the archive file were uploaded to Azure Storage, you can access them in the storage container.
- If you abort a job in the middle of the test run, the results of all tests executed before the job stopped are saved into a Results###.xml file in the %UFT_LAUNCHER%\res\Report_###\ folder.
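As a rough illustration of the JUnit format mentioned above, a junit_report.xml file follows the standard JUnit XML structure sketched below. This reflects the common JUnit convention only; the exact attributes and values the extension writes may differ, and the test names here are hypothetical:

<testsuites>
    <testsuite name="UFT Parallel Test Run" tests="2" failures="1">
        <!-- One testcase element per executed test; names are hypothetical -->
        <testcase name="GUIWebTest1" time="42.7"/>
        <testcase name="GUIMobileTest1" time="61.3">
            <!-- A failed test carries a failure element describing the failure -->
            <failure message="Step 'Login' failed"/>
        </testcase>
    </testsuite>
</testsuites>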
Note: In a parallel test run, tests with incorrect device or browser settings do not run, and the run results contain only the results of the tests that actually ran in the pipeline.