AI-based testing
Supported for web and mobile testing
This topic explains how to use Artificial Intelligence (AI) features in your OpenText Functional Testing for Developers tests to identify objects the way a person would. This enables you to run the same test on different platforms and versions, regardless of how the objects are implemented.
AI-based testing overview
Using AI, objects are identified visually, as humans do. Object descriptions can include the control type, associated text, and, if there are multiple identical objects, an ordinal position.
Some advantages of AI-based object identification are:
- Tests are technology agnostic, identifying objects visually, regardless of the technology details used behind the scenes.
- Tests are easier to maintain. An object that changes location, framework, or even shape is still identified, as long as it remains visually similar or its purpose remains clear.
Using AI is computationally intensive. We recommend using a powerful computer to benefit from the OpenText Functional Testing for Developers AI-based testing capabilities and achieve optimal performance. For recommended system requirements, see Support Matrix.
AI-based testing test objects and methods are supported in all the SDKs. For details, see the Java, .NET, and JavaScript SDK references. Describe AI test objects and then use them in your tests. You can use AI-based test objects and technology-based (property-based) test objects in the same test.
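For example, a single test can mix an AI-based step with a property-based step. The following Java sketch assumes the OpenText Functional Testing for Developers Java SDK is on the classpath, and that the application page contains an edit field named q and a search button; the page details and object descriptions are illustrative assumptions, not taken from a real application:

```java
import com.hp.lft.sdk.ai.AiObject;
import com.hp.lft.sdk.ai.AiObjectDescription;
import com.hp.lft.sdk.ai.AiTypes;
import com.hp.lft.sdk.web.Browser;
import com.hp.lft.sdk.web.BrowserFactory;
import com.hp.lft.sdk.web.BrowserType;
import com.hp.lft.sdk.web.EditField;
import com.hp.lft.sdk.web.EditFieldDescription;

public class MixedObjectsSketch {
    public static void main(String[] args) throws Exception {
        // SDK initialization and cleanup omitted for brevity.
        Browser browser = BrowserFactory.launch(BrowserType.CHROME);
        browser.navigate("https://example.com"); // hypothetical application

        // Property-based step: identify an edit field by its "name" property.
        browser.describe(EditField.class, new EditFieldDescription.Builder()
                .name("q").build()).setValue("laptop");

        // AI-based step: identify the search button visually, by type and text.
        browser.describe(AiObject.class,
                new AiObjectDescription(AiTypes.BUTTON, "Search")).click();
    }
}
```

Because the AI-based step describes the button visually, it keeps working even if the underlying control changes framework or markup, while the property-based step remains tied to the implementation.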
Run the AI Object Inspection interface to inspect your application, or mockup images of an application, and identify objects. Create the steps you want to test, and automatically generate code to add to your test.
You can open AI Object Inspection from the Windows Start menu or from the UFT Developer menu in your IDE (Eclipse or IntelliJ IDEA).
Prerequisites
To use the AI-based testing capabilities of OpenText Functional Testing for Developers, the following requirements must be met:
- OpenText Functional Testing for Developers is installed on a computer with a relevant OS and Windows feature installed. For details, see AI-based testing.
- The AI Object Detection feature is installed and enabled in the runtime engine settings, in the AI Object Detection tab. This setting is enabled by default.
- Prepare your test project and create a web or mobile test.
To start identifying AI objects to use in your test, continue with one of the following tasks:
Inspect your application for objects
Use AI Object Inspection to identify objects in your application that you can use in your test steps.
This section describes the prerequisites for inspection, the inspection process, and how you can provide feedback on the identification, helping to shape the future of AI-based testing.
Prerequisites
| Scenario | Prerequisites |
| --- | --- |
| Testing an application on a mobile device | |
| Testing a desktop web application | Supported browsers: Chrome, Edge, Firefox, Internet Explorer. Headless browsers are not supported. |
Identify all AI objects displayed in your application
- Open the AI Object Inspection window to inspect your application and detect all AI objects in it.
  To open the window, do one of the following:
  - From the Windows Start menu, select AI Inspection.
  - In your IDE, click the AI Inspection toolbar button.
- Click Select Application.
- Click a web application or the remote access window displaying your mobile application.
  Tip:
  Press ESC to return to the AI Object Inspection window without selecting an application.
  Press CTRL to use the mouse for other operations on the screen before selecting an application.
  The Live Application tab displays the current screen of the application, highlighting all of the detected objects.
- Decide what type of objects you want to see:
  - Visual Element: The objects detected visually.
  - Text: Areas of text in the application.
  You can choose to view one or the other, or both.
- After identifying the AI objects, you can Add AI-based steps to your tests.
Help design the future of AI-based testing in OpenText Functional Testing for Developers
After inspecting your application, you can provide feedback about the object identification. Send generic feedback to OpenText to help improve the identification in the future, or attach it to a support ticket to receive assistance.
Depending on what is displayed in your version of OpenText Functional Testing for Developers, click How is the detection? Help us improve or Send Feedback. For details, see the Send AI Object Identification feedback topic in the OpenText Functional Testing Help Center.
Inspect application mockups for objects
Use AI Object Mockup Inspection to inspect application mockups and identify objects to use in your test. This enables you to design and prepare your test even before your application is fully developed.
This section describes the prerequisites for inspection, the inspection process, and how you can provide feedback on the identification, helping to shape the future of AI-based testing.
Prerequisites
Instead of opening a web or mobile application, make sure you have a local folder that contains your mockup images in .jpg, .jpeg, or .png format.
Open the AI Object Inspection window to the Mockup Images tab
You can access the AI Object Inspection Mockup Images tab from within your IDE or from the Windows Start menu.
| Access point | Action |
| --- | --- |
| In your IDE | Click the AI Mockup Inspection toolbar button or select the option in the UFT Developer menu. |
| From the Windows Start menu | |
Inspect application mockups
After opening the AI Object Inspection window to the Mockup Images tab, perform the following steps:
- Select Web or Mobile as the inspection context and click Browse Folder to select a folder containing images.
  The AI Object Inspection window inspects the image that appears first in file name order and highlights all identified visual elements.
- Decide whether to show Visual Element, Text, or both: the objects that OpenText Functional Testing for Developers detected visually, areas of text in the application, or both.
  Tip: If necessary, you can select Web or Mobile to change the AI identification context type.
- Click the down arrow near the folder icon to perform one of the following:
  - Select a different folder.
  - Synchronize the current folder in case its contents changed.
  - Open the current folder in Windows Explorer.
- To show the image gallery and view all images in your folder, click the down arrow at the top of the pane. In the gallery, you can:
  - Browse between images in the folder.
  - Search for a specific image.
  - Display images in a Grid View or Line View.
  - Sort images by file name or modification time.
- After identifying the AI objects, you can Add AI-based steps to your tests.
Help design the future of AI-based testing in OpenText Functional Testing for Developers
After inspecting your application, you can provide feedback about the object identification. Send generic feedback to OpenText to help improve the identification in the future, or attach it to a support ticket to receive assistance.
Depending on what is displayed in your version of OpenText Functional Testing for Developers, click How is the detection? Help us improve or Send Feedback. For details, see the Send AI Object Identification feedback topic in the OpenText Functional Testing Help Center.
Add AI-based steps to your tests
You can add AI-based steps to your test using code generated in the AI Object Inspection window or manually, using the Java, .NET, or JavaScript SDKs.
To use the code generated by the AI Object Inspection window, first make sure the correct Clipboard code language is selected in the settings. Click the cogwheel at the top right of the AI Object Inspection window, and select Java, C#, or JavaScript.
To add AI-based steps in your tests:
AiObject test objects are identified within the context of a web Browser object or a mobile Device object. Therefore, they are always described as children of Browser or Device objects. Before describing an AiObject, make sure to describe its parent.
For example, before you can use this AIObject, you have to describe the parent browser object in your test.
browser.describe(AiObject.class, new AiObjectDescription.Builder()
.aiClass(com.hp.lft.sdk.ai.AiTypes.TWITTER)
.locator(new com.hp.lft.sdk.ai.Position(com.hp.lft.sdk.ai.Direction.FROM_RIGHT, 0)).build()).click();
Add steps from AI objects identified by AI Object Inspection
- Click an identified object in the AI Object Inspection window. This can be an object in the Mockup Images tab or the Live Application tab.
- In the dialog box that opens, modify the step action and edit the value field if relevant.
The step generated for each object includes any information used to identify it uniquely, such as associated text or the object's ordinal position on the screen, if multiple identical objects are displayed. For details, see Associating text with objects and Identifying objects by relative location.
To modify object identification details such as Text, Position, and Relation, click Edit, and make your changes in the Edit step pane on the right. You can define a position or a relation but not both.
To add a position:
Click the + button for Position, then select a direction and a number.
To add a relation:
Click the + button for Relation, then click an object in the application to use as an anchor. AI Object Inspection suggests a direction, which you can accept or modify.
You can click the eye button to highlight the anchor in the application and make sure the correct object is selected.
Note: Sometimes, an object can only be described uniquely by combining multiple properties, such as text combined with a position or relation.
You can continue to edit the properties in the Edit step pane, even if each property on its own does not uniquely describe the object. You can copy the step code only when the combination of properties provides a unique object description. If you move to another object before completing a unique object description, your edits are discarded, and the original description is used.
- Click Copy to copy the step, then paste the copied code into your test.
Avoid adding steps with objects that were incorrectly identified. For example, if a button is identified as a text box, or a check mark is identified as a button, such objects may be identified inconsistently and fail in subsequent test runs.
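To illustrate the kind of code these edits produce, the sketch below combines associated text with an ordinal position, following the builder pattern shown earlier in this topic. It assumes a previously described browser object and that the builder exposes a text(...) setter alongside locator(...); the button text and index are hypothetical:

```java
// Hedged sketch: click the second "Login" button counting from the left.
// Assumes `browser` was already described, as shown earlier in this topic.
browser.describe(AiObject.class, new AiObjectDescription.Builder()
        .aiClass(com.hp.lft.sdk.ai.AiTypes.BUTTON)
        .text("Login") // associated text (assumed builder method)
        .locator(new com.hp.lft.sdk.ai.Position(com.hp.lft.sdk.ai.Direction.FROM_LEFT, 1))
        .build()).click();
```

In practice, copy the generated code from the AI Object Inspection window rather than hand-writing the combination, so the description is guaranteed to be unique.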
Run the added step on your application
If you added the step from an object identified on a live application, you can click the Run on application button to run the step. This tests that your step is correct, and also advances the application to the next state, re-inspecting the application for the next step.
Note: For the step to run on the correct application, the application you are testing must be the most recent application that was in focus.
You can configure a delay that gives the application time to load after the step runs, before it is re-inspected. Click the dot-menu icon next to the Run on application button and configure the delay.
Add verification steps to your test (optional)
You can add verification steps to check the existence of an object using code generated in the AI Object Inspection window or manually, using the Java, .NET, or JavaScript SDKs.
To add verification steps from the AI Object Inspection window
- Click an identified object in the AI Object Inspection window. This can be an object in the Mockup Images tab or the Live Application tab.
- Select Verify as the action and select Exists or Does Not Exist to verify the existence of the object.
Example verification step:
com.hp.lft.report.Reporter.startReportingContext("Verify property: exists", com.hp.lft.report.ReportContextInfo.verificationMode());
Verify.isTrue(browser.describe(AiObject.class, new AiObjectDescription(com.hp.lft.sdk.ai.AiTypes.BUTTON, "ADD TO CART")).exists(), "Verification", "Verify property: exists");
com.hp.lft.report.Reporter.endReportingContext();
- Click Copy to copy the step, then paste the step into your test.
The checkpoint passes if the application is in the expected state. Otherwise, a step failure is reported in the run results.
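Mirroring the generated Exists example above, a Does Not Exist check can be sketched as follows. It assumes Verify.isFalse is available alongside Verify.isTrue in the SDK's verification class; the button text is illustrative:

```java
// Hedged sketch: verify that a "DELETE" button is absent from the page.
com.hp.lft.report.Reporter.startReportingContext("Verify property: does not exist",
        com.hp.lft.report.ReportContextInfo.verificationMode());
Verify.isFalse(browser.describe(AiObject.class,
        new AiObjectDescription(com.hp.lft.sdk.ai.AiTypes.BUTTON, "DELETE")).exists(),
        "Verification", "Verify property: does not exist");
com.hp.lft.report.Reporter.endReportingContext();
```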
Inspect the next application page/screen
When you finish creating test steps for one page or screen in your application and you want to continue on another one, follow the steps below:
To inspect the next application page/screen
- Browse to the desired location in the application.
- In the AI Object Inspection window, click Re-inspect to load the new application page or screen and re-inspect it.
  - If you don't re-inspect and try to run a step from the AI Object Inspection window, the step runs based on the previous page's inspection, which may result in an error or perform the operation on the wrong object in the new page.
  - If multiple remote access windows or browser windows are open, the inspection session interacts with only one of them.
  - If you need to perform steps on the application to prepare it for inspection, you can use Delayed re-inspect:
    Click the down arrow near Delayed re-inspect and set the required delay.
    Click Go to begin the countdown, open the application, and perform steps such as hovering or menu clicks to bring the application to the state you want to inspect.
    When the delay timer expires, the application is re-inspected.
- Add steps from the new page or screen to your test.
Run AI-based tests
After inspecting your application and creating test steps, run your AI-based test as you would run any other OpenText Functional Testing for Developers test. See Run tests.
You can run the same test on different operating systems and versions because it is not based on implementation details.
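As a sketch, an AI-based step fits into an ordinary JUnit-based test class like any other step. The class below follows the common Java SDK project template; the UnitTestClassBase base class, the setup it performs, and the page details are assumptions for illustration:

```java
import com.hp.lft.sdk.ai.AiObject;
import com.hp.lft.sdk.ai.AiObjectDescription;
import com.hp.lft.sdk.ai.AiTypes;
import com.hp.lft.sdk.web.Browser;
import com.hp.lft.sdk.web.BrowserFactory;
import com.hp.lft.sdk.web.BrowserType;
import org.junit.Test;

// UnitTestClassBase is assumed to come from the generated project template,
// which typically handles SDK initialization and cleanup.
public class AiBasedTest extends UnitTestClassBase {

    @Test
    public void clickCartButton() throws Exception {
        Browser browser = BrowserFactory.launch(BrowserType.CHROME);
        browser.navigate("https://example.com/shop"); // hypothetical application

        // AI-based step: the button is identified visually, so the same
        // test can run on other platforms and application versions.
        browser.describe(AiObject.class,
                new AiObjectDescription(AiTypes.BUTTON, "ADD TO CART")).click();
    }
}
```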
Troubleshooting
AI text identification requires the Windows mediaserver.exe service to be running. Otherwise, the following may occur:
- AI Object Inspection cannot find objects by Text.
- An error message indicates that an error occurred when calling the Media Server OCR service.
Solution:
Open the Windows Services Manager and make sure the mediaserver.exe service is running. If it is not, start the service manually.
See also:
- Improve AI-based test object identification
- AI testing: Supported control types (in the OpenText Functional Testing Help Center)
- The Java, .NET, and JavaScript SDK references
- A sample AI-based Java test