Run manual and Gherkin tests
Run tests within ALM Octane using the Manual Runner.
Depending on the type of test, the Manual Runner runs tests differently:
When running a manual test, view the test steps and add details on the steps. For each validation step, assign a run status. After the test run, ALM Octane compiles an overall run status.
When running a Gherkin test, view the scenarios. Assign a run status to each scenario and to the test as a whole.
Note: The run statuses you assign when running manual tests (including Gherkin tests and test suites) are called native statuses. When analyzing test run results in the Dashboard or Overview tabs, ALM Octane categorizes the native statuses into the summary statuses Passed, Failed, Requires attention, Skipped, and Planned for clarity of analysis.
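The categorization described in the note amounts to a simple lookup from native to summary statuses. A minimal sketch in Python; the native status names below are illustrative placeholders, not an official list from your workspace:

```python
# Map hypothetical native run statuses to ALM Octane's five summary statuses.
# Only the five summary statuses are named in the documentation; the native
# names and the "Blocked" mapping are assumptions for illustration.
NATIVE_TO_SUMMARY = {
    "Passed": "Passed",
    "Failed": "Failed",
    "Blocked": "Requires attention",
    "Skipped": "Skipped",
    "Planned": "Planned",
}

def summarize(native_statuses):
    """Count runs per summary status, as a dashboard widget might."""
    counts = {}
    for status in native_statuses:
        summary = NATIVE_TO_SUMMARY.get(status, "Requires attention")
        counts[summary] = counts.get(summary, 0) + 1
    return counts
```

For example, `summarize(["Passed", "Blocked", "Passed"])` returns `{"Passed": 2, "Requires attention": 1}`.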
Run manual tests or Gherkin tests from ALM Octane.
Tip: You can also run manual tests using Sprinter, as described in Run and edit manual tests in Sprinter.
To run tests:
In the Backlog or in the Quality module, select the tests to run.
In the toolbar, click Run.
The Run tests dialog box opens, indicating how many tests you are about to run. Specify the test run details:
A name for the manual run entity created when running a test.
By default, the run name is the test name. Provide a different name if necessary.
The product release version to which the test run will be associated.
The release is set by default according to the following hierarchy:
- The release in the context filter, provided a single release is selected, and the release is active. If this does not apply:
- The release in the sidebar filter, provided a single release is selected, and the release is active. If this does not apply:
- The last selected release, provided the release is active. If this does not apply:
- The current default release.
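The fallback order above is a first-match chain. A sketch of that logic, with an illustrative data shape (each candidate source is reduced to name/active pairs; the helper signature is an assumption, not an ALM Octane API):

```python
def default_release(context_filter_releases, sidebar_filter_releases,
                    last_selected, current_default):
    """Pick the default release for a run, following the documented fallback order.

    context_filter_releases and sidebar_filter_releases are lists of
    (name, is_active) tuples; last_selected is one such tuple or None.
    """
    for candidates in (context_filter_releases, sidebar_filter_releases):
        # Applies only when exactly one release is selected and it is active.
        if len(candidates) == 1 and candidates[0][1]:
            return candidates[0][0]
    if last_selected and last_selected[1]:  # last selected release, if active
        return last_selected[0]
    return current_default
```

For example, with no context-filter release and a single active release "R2" in the sidebar filter, `default_release([], [("R2", True)], ("R1", False), "R3")` returns `"R2"`.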
(Optional) When you select a milestone, the run contributes to the quality status of that milestone. A unique last run is created and assigned to the milestone for coverage and tracking purposes, similar to releases.
(Optional) When you select a sprint, it means that the run is planned to be executed in the time frame of the sprint. Sprints do not have their own unique run results, so filtering Last Runs by Sprint will only return runs that were not overridden in a later sprint.
To see full run status and progress reports based on sprints, use the Test run history (manual) item type instead of using Last Runs in the Dashboard widgets.
(Optional) If you are working with programs, you can select a program to associate with the test run. For details, see Programs (Enterprise Edition).
When you run a test associated with a program, you can select a release connected to the program, or a release that is not associated with any program.
The program is set by default according to the following hierarchy:
- Business rule population. If this does not apply:
- The program in the context filter, provided a single program is selected or only one program exists. If this does not apply:
- The program populated in the test.
Note: After you run tests, the Tests tab shows the tests that are connected to the program selected in the context filter, or with runs that ran on this program.
The environments on which to run the test.
By default, ALM Octane uses the last selected environments.
Select other environments as necessary.
Select Use a version from another release to run a test using a script version from a different release. If there is more than one version from the selected release, ALM Octane uses the latest version.
Any tests linked to a manual test via an Add Call Step use the same specified version.
Select Draft run if you are still designing the test but want to run existing steps. After the run is complete, ALM Octane ignores the results.
Click Let's Run! to run the tests.
The Manual Runner opens.
Tip: During the test run, click the Run Details button to add run details.
Perform the actions described in each step or scenario, and record your results in the What actually happened? field:
Where relevant, assign statuses to the steps.
The test's status is updated automatically according to the statuses you assigned to the steps. You can override this status manually.
Modifying a run's test type:
By default, a run inherits its Test Type value from the test or suite's test type.
You can modify this value for the run. For example, if a test's type is End to End but you execute it as part of a regression cycle, you can set the run's test type to Regression.
Note: The ability to modify a run's test type is defined by the site parameter EDITABLE_TEST_TYPE_FOR_RUN (default true).
If you encounter problems when running a test, open defects or link existing defects.
To add defects during a test run:
In the Manual Runner toolbar, click the Test Defects button. If no defects were assigned to this run, clicking Test Defects opens the Add run defect dialog box.
If you want to associate the defect with a specific step, hover over the step and click Add Defect to open the Add run step defect dialog box.
In the Add run (step) defect dialog box, fill in the relevant details for the defect. Use the buttons on the ribbon to add more information to the defect:
|Button||Description|
|Link to defect||Select the related existing defect(s), and click Select.|
|Attach||Browse for attachments. For details, see Add attachments to a run or step.|
|Copy steps to description||Click to add the test run steps to the defect's Description field. This helps you reproduce the problem when handling the defect. In a manual test, if the defect is connected to a run, all the run's steps are copied. If the defect is connected to a run step, the steps preceding the failed step are copied. If there are parameters in the test, only the relevant iteration is copied. In a Gherkin test, the relevant scenario is copied.|
|Customize fields||Select the fields to show. A colored circle indicates that the fields have been customized. For details, see Customize forms.|
Several fields in the new defect form are automatically populated with values from the run and test, such as the application module and environment. For more information, see Auto-populated content. When you are done populating the fields, click Add.
Click Test Defects to view a list of the defects for this run and test.
It is sometimes useful to attach documentation to the run.
To attach items to a run or a step in a run:
Do one of the following, based on where you need to attach information:
To attach to a test run: In the toolbar, click the Run Attachments button .
To attach to a single step: In the step, next to the What actually happened? field, hover and click the Add attachments button.
In the dialog, attach the necessary files.
Tip: Drag and drop from a folder or paste an image into the Manual Runner.
ALM Octane updates the Manual Runner to show the number of attached files.
When you create a new defect from within the Manual Runner, ALM Octane automatically populates several fields from the run and test, such as the application module and environment.
Included in the auto-populated fields are Backlog items, Requirement, and Feature according to these guidelines:
- Backlog items reflecting all open user stories (where the meta phase is not set to Done) linked to the test.
- All requirements directly linked to the test.
- A feature linked to the test, provided that the test is linked directly or indirectly to a single open feature (where the meta phase is not set to Done).
You can accept the automatically populated values or modify them as needed.
Tip: Site administrators can disable automatic population for the Backlog items, Requirements, and Feature fields by setting the site parameter LINK_COVERAGE_ENTITIES_TO_RUN_DEFECT to false.
Some of the auto-populated fields are hidden by default. To show them, customize the view. The customized fields appear in the More section. For details, see Customize forms.
After performing all steps, at the bottom of the ALM Octane Manual Runner window, click .
Note: If you have not marked all steps or validation steps, the run status is still listed as In Progress. Finish entering and updating step details to change the status.
In the Runs tab of the test, click the link for your specific manual run and view the results in the Report tab.
The report displays step-by-step results, including the result of all validation steps, and any attachments included in the run.
Note: The report displays up to 1000 test steps. You can use the Rest API to retrieve more records. For details, see Create a manual test run report.
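Retrieving more than 1000 records via the REST API means paging through a collection. A hedged sketch of the paging loop, assuming ALM Octane's standard `limit`/`offset` query parameters and a caller-supplied fetch function; the exact entity and endpoint names (for example, a run's `run_steps`) should be taken from the REST API reference:

```python
def fetch_all(fetch_page, page_size=200):
    """Collect all records from a paged collection endpoint.

    fetch_page(offset, limit) is expected to return a dict shaped like an
    ALM Octane collection response: {"total_count": N, "data": [...]}.
    """
    records, offset = [], 0
    while True:
        page = fetch_page(offset, page_size)
        records.extend(page["data"])
        offset += page_size
        if offset >= page["total_count"] or not page["data"]:
            break
    return records

# A fake backend standing in for a real HTTP call such as
# GET .../runs/<id>/run_steps?offset=<offset>&limit=<limit> (path illustrative).
def fake_fetch(offset, limit, _all=list(range(450))):
    return {"total_count": len(_all), "data": _all[offset:offset + limit]}
```

With the fake backend, `fetch_all(fake_fetch)` returns all 450 records across three pages.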
You can view the Last runs for each manual or Gherkin test. The information from the last run is the most relevant and can be used to analyze the quality of the areas covered by the test.
To view last runs:
In the Tests tab of Requirements, Features, User Stories, or Application Modules, display the Last runs field.
The Last runs field contains a color bar where each color represents the statuses of the last runs.
Hover over the bar to view a summary of the test run statuses:
- Click the View link to open the details of the last runs in a popup window.
The Last runs field aggregates the last runs of the test in each unique combination of: Release, Milestone, Program, and Environment.
For example, if a test was run on Chrome in Milestone A and Milestone B, and on Firefox in Milestone A and Milestone B, the test's Last Runs field will represent the last run for each combination, four runs in total.
The executed run that is considered last is the run that started latest.
A planned run is counted as a separate last run, even if its Release, Milestone, Program, and Environment match those of an executed run. The planned run is distinguished by its 'Planned' status.
The planned run that is considered last is the planned run that was created latest.
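The aggregation rules above can be sketched as grouping runs by the four-field key and keeping, per key, the executed run that started latest and, separately, the planned run created latest. Field names in this sketch are illustrative, not ALM Octane API fields:

```python
from collections import defaultdict

def last_runs(runs):
    """Compute the set of last runs per unique
    (release, milestone, program, environment) combination."""
    latest = defaultdict(dict)  # key -> {"executed": run, "planned": run}
    for run in runs:
        key = (run["release"], run["milestone"], run["program"], run["environment"])
        if run["status"] == "Planned":
            cur = latest[key].get("planned")
            if cur is None or run["created"] > cur["created"]:
                latest[key]["planned"] = run
        else:
            cur = latest[key].get("executed")
            if cur is None or run["started"] > cur["started"]:
                latest[key]["executed"] = run
    # Each executed-last and planned-last counts as a separate last run.
    return [r for slots in latest.values() for r in slots.values()]
```

With the Chrome/Firefox and Milestone A/B example above (two runs per combination), this yields four last runs, one per combination.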
You can filter the last runs according to these test run attributes: Release, Milestone, Program, Environment, Status, Run By. The Last Runs field will then include last runs from the selected filter criteria.
Depending on which Tests tab you are in, apply test run filters as follows:
|View||How to apply Last Run filters|
|Requirements > Tests, Backlog > Tests, Team Backlog > Tests, Quality > Tests||Use the Run filters in the right-hand Filter pane.|
|Feature > Tests, User Story > Tests||Using the filter bar, select the Last Runs field, and cross-filter by any of these fields: Release, Milestone, Program, Environment, Status, Run By.|
|Quality > Tests||If you select a program in the context filter, you will see Last Runs only for the selected program.|
The Dashboard module has several widgets based on the last run, such as Last run status by browser and Last run status by application module. For details, see The dashboard.
When you plan a manual test run or test suite run (for manual tests), by default the test steps executed are the ones that existed at the time of the planning. If the test steps changed, the new steps will be ignored in the upcoming runs.
Using the Run with latest version field, you can instruct ALM Octane to use the latest version of the test.
To use the latest test versions in manual tests:
- In the manual test's page (MT), click the Details tab.
- Click Plan Run.
- In the Plan Run dialog box, select Yes in the Run with latest version field. Click Plan.
Open the test's Runs tab and display the Run with latest version column to see the setting for each manual run.
When a test is executed and no longer in Planning status, the Run with latest version field is locked.
To use the latest test versions for a test suite:
- In the test suite's page (TS), click the Planning tab.
- Click Plan Suite Run.
- In the Plan Suite Run dialog box, select Yes in the Run with latest version field. This sets the default for all manual tests added to the test suite. Click Plan.
- Open the test's Suite Runs tab, and display the column Run with latest version to show the setting for each of the suite runs.
The Runs tab in the Quality module provides a unified grid of runs across all tests, both planned and executed. Testing managers can filter the grid, and do bulk updates on runs across different tests.
Caution: By default, the grid is available only to workspace admins. If filters are not properly defined and match a huge number of runs, use of the Runs tab may cause performance problems for all users. We therefore recommend granting access to this tab to only a limited number of users.