Autonomous Tester
Requires an Aviator license
Autonomous Tester runs manual tests at scale on the OpenText cloud, and provides detailed run result assets.
Overview
Add your manual tests to a test suite and configure them to run by Autonomous Tester. You can then run the suite on demand, or add it to a scheduled run.
Autonomous Tester runs manual tests, using Aviator to interpret the natural-language steps and determine how to perform them on the application. This enables you to continue using manual tests without creating scripts or converting them to automated tests.
The run results provide detailed information about the run, including step outcomes and agent actions, as well as screen captures, and a screen recording of the whole run. This detailed information provides test run evidence, as well as the foundation for in‑depth analysis.
Note: You can use Cloudflare tunneling to access and test applications hosted on your local or private networks. For details, see To test local or private web applications.
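The product documentation above does not spell out the tunneling commands. As a rough, general illustration of Cloudflare quick tunnels (the local URL and port below are placeholder assumptions, not values from this product), exposing a locally hosted web app can look like:

```shell
# Assumption: the cloudflared CLI is installed on the machine
# hosting the application under test.
# A "quick tunnel" publishes a local web app through a temporary
# public trycloudflare.com URL, without requiring a Cloudflare account.
cloudflared tunnel --url http://localhost:8080
# cloudflared prints the generated public URL on startup, e.g.:
#   https://<random-subdomain>.trycloudflare.com
```

The printed public URL is what you would point the test at in place of the private address. See the Cloudflare Tunnel documentation and the linked topic above for the supported setup.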
Prepare tests for Autonomous Tester
Prepare your suite and tests in the Execution module so they can run autonomously.
To prepare tests for autonomous runs:
1. Create or open a test suite and add your manual tests.

   - For details on working with manual tests and test suites, see the Quality module in OpenText Core Software Delivery Platform Help Center.
   - For best practices when authoring manual tests for autonomous runs, see Writing tests for autonomous runs.

2. (Optional) In the test suite Details pane, select Use Knowledge Source. This instructs the Autonomous Tester to use relevant knowledge sources defined in the system when necessary.

   Note: Knowledge sources are used during the run only if the information in the test is insufficient and additional context is required.

   For more details on using knowledge sources in your tests, see Leveraging knowledge sources (early access).

3. In the test suite Planning tab, select one or more tests and click Assign to Autonomous Tester.

   Note: If you do not select any tests, the assignment affects all of the tests in this suite.
4. In the assignment dialog box that opens:

   1. If multiple Autonomous Tester runners are available, select the runner you want to use.

      If no runner is available, or if you want additional runners with other names, your space administrator can create new runners. For details, see Create an autonomous testing runner.

   2. Click Configure.

      The following run-related fields are updated in the selected tests:

      | Field | Value |
      | --- | --- |
      | Execution status | Ready |
      | Run mode | Automatically |
      | Test runner | Autonomous Tester (or another Autonomous Tester you select) |

   3. Click Run suite if you want to trigger the suite run now, or click Close to close the dialog box.
Run, track, and analyze autonomous test runs
After setting up your manual tests to run autonomously, you can run the suite by triggering it manually (Run Suite), planning a run (Plan Suite Run), or adding it to a scheduled run.
For details on running test suites, see the OpenText Core Software Delivery Platform Help Center.
To track and analyze the test runs:
- During and after a run, you can track run progress in a test suite, in the Suite Runs tab, or in the Execution module, in the Scheduled Runs tab.

- After the run ends, the manual run includes detailed run evidence and results.

  In the Details tab, you can download screen captures and a screen recording of the entire run.

  In the Report tab:

  - The run is labeled Ran by Autonomous Tester and displays each step's status, a step summary, and agent-generated details (AI Reasoning).

    Tip: The Run by field in the Details tab reflects the user who triggered the run, not the Autonomous Tester.

  - If the test run used knowledge sources, a list of those Knowledge sources is available.

    In addition, the steps display any Additional guidelines that were gleaned from the sources used for the run. For details, see Leveraging knowledge sources (early access).

- Autonomous run results are incorporated into test and suite run results like any other test run, enabling you to use all of the system's run analysis options.

  For example, you can analyze autonomous manual test results with Aviator at the test run and suite run levels.

  For details, see Monitor test execution.
Leveraging knowledge sources (early access)
Using knowledge sources in Autonomous Tester is currently provided as an early-access feature.
Knowledge sources provide reusable context that helps Autonomous Tester resolve vague or incomplete manual test steps during a run: for example, login instructions, environment-specific details, and product-specific guidance used across many tests.
For details on configuring knowledge sources in your system, see the OpenText Core Software Delivery Platform Help Center.
When you configure a knowledge source, you can specify the context in which it will be used. For example, you can specify purpose, entity type, and user.
When the Autonomous Tester needs additional information to carry out a test step, it consults the relevant knowledge sources. Up to 7 knowledge sources can be used.
The test run results specify how the knowledge sources helped the run:
- A drop-down list of the Knowledge sources used.
- Additional guidelines added to the steps in the report, where necessary.
Consulting knowledge sources can be time-consuming. We therefore recommend using knowledge sources only during the test design phase. After you run the tests, check the results for additional guidelines adopted from the knowledge sources, and add those guidelines to your test steps so that knowledge sources are no longer needed during the run. Then, deselect the Use Knowledge Source option before running your tests regularly.
License usage
Each workspace is allowed up to 200 Autonomous Tester runs per month.
You can track the number of runs already used this month in the Current License Usage widget in the Execution Overview tab.