Plan and run manual and Gherkin tests

This topic describes how to plan test runs, and run them within ALM Octane using the Manual Runner.

Plan a test run

Planning a test run is optional. It enables you to organize your tests and run them without any further need to configure them. If you do not plan a test, ALM Octane prompts you for the same details when you begin the run.

Tip: A common practice is to add tests to a test suite and run the entire test suite. You can add the same test to a suite multiple times, using different environments or execution parameters. For details, see Plan and run test suites.

To plan test runs:

  1. Go to Tests in the Quality or Backlog module.
  2. Select the check box for one or multiple tests in the grid. You can include both manual and Gherkin tests.
  3. Click Plan Run.
  4. Provide values for the fields in the Plan Run dialog box. You must select a release. You can also choose whether to run with the latest script version. For details, see Run with latest version.
  5. Click Plan. ALM Octane stores your configuration, allowing you to run the test in the future without having to configure it. For details, see Run tests.

Back to top

How ALM Octane runs tests

Depending on the test type, the Manual Runner runs tests differently:

  • When running a manual test, you view the test steps and add details for each step. For each validation step, you assign a run status. After the test run, ALM Octane compiles an overall status for the run.
  • When running a Gherkin test, you view the scenarios. For each scenario, you assign a run status, and then assign an overall status for the test.

Note: The run statuses you assign when running manual tests (including Gherkin tests and test suites) are called native statuses. When analyzing test run results in the Dashboard or Overview tabs, ALM Octane categorizes the native statuses into the summary statuses Passed, Failed, Requires attention, Skipped, and Planned for clarity of analysis.
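
The exact categorization is defined by ALM Octane. Purely as an illustration of the idea, a mapping along the following lines could be used when post-processing run data; the native status names here are examples, not the definitive list.

    # Illustrative only: a hypothetical mapping of native run statuses to the
    # summary statuses used for analysis (Passed, Failed, Requires attention,
    # Skipped, Planned). The real categorization is defined by ALM Octane.
    NATIVE_TO_SUMMARY = {
        "Passed": "Passed",
        "Failed": "Failed",
        "Blocked": "Requires attention",  # assumption: blocked runs need follow-up
        "Skipped": "Skipped",
        "Planned": "Planned",
    }

    def summarize(native_status: str) -> str:
        """Map a native status to its summary category (default: Requires attention)."""
        return NATIVE_TO_SUMMARY.get(native_status, "Requires attention")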

Back to top

Run tests

Run manual tests or Gherkin tests from ALM Octane.

Tip: You can also run manual tests using Sprinter, as described in Run and edit manual tests in Sprinter.

To run tests:

  1. In the Backlog or in the Quality module, select the tests to run.

  2. In the toolbar, click Run.

    The Run tests dialog box opens, indicating how many tests you are about to run. If you selected planned tests, the dialog box indicates this as well. Specify the test run details:

    Field Details
    Run name

    A name for the manual run entity created when running a test.

    By default, the run name is the test name. Provide a different name if necessary.

    Release

    The product release version to which the test run will be associated.

    The release is set by default according to the following hierarchy:

    • The release in the context filter, provided a single release is selected, and the release is active. If this does not apply:
    • The release in the sidebar filter, provided a single release is selected, and the release is active. If this does not apply:
    • The last selected release, provided the release is active. If this does not apply:
    • The current default release.
    Milestone

    (Optional) Selecting a milestone means that the run contributes to the quality status of the milestone. A unique last run is created and assigned to the milestone for coverage and tracking purposes, similar to a release.

    Sprint

    (Optional) When you select a sprint, it means that the run is planned to be executed in the time frame of the sprint. Sprints do not have their own unique run results, so filtering Last Runs by Sprint will only return runs that were not overridden in a later sprint.

    To see full run status and progress reports based on sprints, use the Test run history (manual) item type instead of using Last Runs in the Dashboard widgets.

    Backlog Coverage

    (Optional) The backlog items that the run will cover. For details, see Test specific backlog items.

    Program

    (Optional) If you are working with programs, you can select a program to associate with the test run. For details, see Programs (Enterprise Edition).

    When you run a test associated with a program, you can select a release connected to the program, or a release that is not associated with any program.

    The program is set by default according to the following hierarchy:

    • Business rule population. If this does not apply:
    • The program in the context filter, provided a single program is selected or only one program exists. If this does not apply:
    • The program populated in the test.

    Note: After you run tests, the Tests tab shows the tests that are connected to the program selected in the context filter, or with runs that ran on this program.

    Environment

    The environments on which to run the test.

    By default, ALM Octane uses the last selected environments.

    Select other environments as necessary.

    Script version

    Select Use a version from another release to run a test using a script version from a different release. If there is more than one version from the selected release, ALM Octane uses the latest version.

    Any tests linked to a manual test via an Add Call Step use the same specified version.

    Draft run

    Select Draft run if you are still designing the test but want to run existing steps. After the run is complete, ALM Octane ignores the results.

  3. Click Let's Run! to run the tests.

    The Manual Runner opens.

    Tip: During the test run, click the Run Details button to add run details.

    Using the buttons in the upper-right corner, you can switch between the following Manual Runner views:

    View Description
    Vertical view Displays the test steps as a list. The notes you make appear below each step.
    Side-by-side view Displays the test steps in the left column. The notes you make appear in the right column.
  4. Perform the actions described in each step or scenario. For each step or scenario, record your results in the What actually happened? field.

  5. Where relevant, assign statuses to the steps.

    The status of the test is automatically updated according to the status you assigned to the steps. You can override this status manually.
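
ALM Octane computes the overall run status internally. Purely to illustrate the rollup idea, the sketch below applies a hypothetical "worst status wins" rule; it is not necessarily the product's actual logic, and you can always override the result manually.

    # Hypothetical "worst status wins" rollup from step statuses to a run status.
    # Illustrative only -- ALM Octane computes the real run status internally.
    SEVERITY = {"Failed": 3, "Blocked": 2, "Skipped": 1, "Passed": 0}

    def rollup(step_statuses: list[str]) -> str:
        """Return the most severe status among the assigned step statuses."""
        assigned = [s for s in step_statuses if s in SEVERITY]
        if not assigned:
            return "Planned"  # nothing marked yet
        return max(assigned, key=SEVERITY.get)

    # Example: one failed validation step makes the whole run Failed.
    print(rollup(["Passed", "Passed", "Failed"]))  # -> Failed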

Modify a run's test type

By default, a run inherits its Test Type value from the test or suite's test type.

You can modify this value for the run, so that, for example, if a test is End to End and you execute it as part of a regression cycle, you can choose Regression as the run's test type.

Note: The ability to modify a run's test type is defined by the site parameter EDITABLE_TEST_TYPE_FOR_RUN (default true).

Back to top

Report defects during a run

If you encounter problems when running a test, open defects or link existing defects.

To add defects during a test run:

  1. In the Manual Runner toolbar, click the Test Defects button. If no defects were assigned to this run, clicking Test Defects opens the Add run defect dialog box.

    If you want to associate the defect with a specific step, hover over it and click Add Defect to open the Add run step defect dialog box.

  2. In the Add run (step) defect dialog box, fill in the relevant details for the defect. Use the buttons on the ribbon to add more information to the defect:

    Button Details
    Link to defect Select the related existing defect(s), and click Select.
    Attach Browse for attachments. For details, see Add attachments to a run or step.
    Copy steps to description

    Click to add the test run steps to the defect's Description field. This helps you reproduce the problem when handling the defect.

    • In a manual test, if the defect is connected to a run, all the run's steps are copied. If the defect is connected to a run step, the steps preceding the failed step are copied. If there are parameters in the test, only the relevant iteration is copied.

    • In a Gherkin test, the relevant scenario is copied.

    Customize fields Select the fields to show. A colored circle indicates that the fields have been customized. For details, see Customize forms.
  3. Several fields in the new defect form are automatically populated with values from the run and test, such as Application module, Environment, Milestone, and Sprint.

    Note: If not set, the Release field of a new defect is automatically populated with the Detected in release value when the defect is saved. This behavior is governed by an out-of-the-box business rule that can be deactivated by the space admin. For details, see Design business rules.

    For more information, see Auto-populated content. When you are done populating the fields, click Add.

  4. Click Test Defects to view a list of the defects for this run and test.

Back to top

Add attachments to a run or step

It is sometimes useful to attach documentation to the run.

To attach items to a run or a step in a run:

  1. Do one of the following, based on where you need to attach information:

    • To attach to a test run: In the toolbar, click the Run Attachments button.

    • To attach to a single step: In the step, next to the What actually happened? field, hover and click the Add attachments button.

    In the dialog, attach the necessary files.

    Tip: Drag and drop from a folder or paste an image into the Manual Runner.

ALM Octane updates the Manual Runner to show the number of attached files.

Back to top

Edit run step descriptions during execution

You can edit run step descriptions during execution, and choose whether to apply changes to the test as well.

To edit descriptions during test execution:

  1. In the toolbar, click the Edit Description button.

  2. Edit the description as needed, and then continue with the test execution.

  3. When you click Stop or move between test runs in a suite, a dialog enables you to save the edited description for future runs, or apply it only to the current run.

If a test contains parameters, or a step is called from a different test, you cannot save the edited description to the test. In this case, an alert icon to the right of the description indicates that the edited description applies to the current run only.

Similarly, you can edit a run step description during Gherkin test execution, but the modification cannot be saved to the test. It will only apply to the current run.

Back to top

Auto-populated content

When you create a new defect from within the Manual Runner, ALM Octane automatically populates several fields from the run and test, such as the Application module, Environment, Milestone, and Sprint.

Included in the auto-populated fields are Backlog items, Requirement, and Feature according to these guidelines:

  • Backlog items reflecting all open user stories (where the meta phase is not set to Done) linked to the test.
  • All requirements directly linked to the test.
  • A feature linked to the test, provided that the test is linked directly or indirectly to a single open feature (where the meta phase is not set to Done).

You can accept the automatically populated values or modify them as needed.

Note: Site admins can disable automatic population for:

  • Backlog items, Requirements, and Feature fields by setting the site parameter LINK_COVERAGE_ENTITIES_TO_RUN_DEFECT to false.

  • Milestone and Sprint fields by setting the site parameter COPY_SPRINT_AND_MILESTONE_FROM_RUN_TO_DEFECT to false.

For details, see Configuration parameters.
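
If you prefer scripting over the Settings UI, a site admin could change such a parameter through the REST API along the lines of the sketch below. The /admin/params route, payload shape, and session handling are assumptions for illustration only; verify the actual endpoint in the REST API and Configuration parameters documentation for your version.

    # Hypothetical sketch: update a site parameter over the REST API.
    # The /admin/params route and payload shape are assumptions -- confirm the
    # documented endpoint for your ALM Octane version before relying on this.
    import requests

    BASE = "https://myoctane.example.com"  # assumption: your ALM Octane server URL
    session = requests.Session()           # authenticate the session first (omitted)

    resp = session.put(
        f"{BASE}/admin/params",
        json={"name": "LINK_COVERAGE_ENTITIES_TO_RUN_DEFECT", "value": "false"},
    )
    resp.raise_for_status()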

Some of the auto-populated fields are hidden by default. To show them, customize the view. The customized fields appear in the More section. For details, see Customize forms.

Back to top

End the run and view its results

After performing all steps, at the bottom of the ALM Octane Manual Runner window, click the Stop button.

Run timer: If your workspace is set up to automatically measure the duration of a test run, the timer stops when you click the Stop button or close the Manual Runner window.

If you exit the system without stopping the run, the test run duration will not be measured correctly.

If you have not marked all steps or validation steps, the run status is still listed as In Progress. Finish entering and updating step details to change the status.

In the Runs tab of the test, click the link for your specific manual run and view the results in the Report tab.

The report displays step-by-step results, including the result of all validation steps, and any attachments included in the run.

Note: The report displays up to 1000 test steps. You can use the REST API to retrieve more records. For details, see Create a manual test run report.
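
For example, a script could page through the report data in chunks using the REST API's limit and offset query parameters, roughly as sketched below. The run_steps resource name and the query syntax are assumptions for illustration; follow the Create a manual test run report topic for the documented request.

    # Hypothetical sketch: page through a run's steps via the REST API using
    # limit/offset. Resource name, query syntax, and authentication are
    # assumptions -- see "Create a manual test run report" for the documented API.
    import requests

    BASE = "https://myoctane.example.com/api/shared_spaces/1001/workspaces/1002"
    RUN_ID = 12345                 # the manual run whose report you need
    session = requests.Session()   # authenticate the session first (omitted)

    steps, offset, page_size = [], 0, 200
    while True:
        resp = session.get(
            f"{BASE}/run_steps",
            params={"query": f'"run={{id={RUN_ID}}}"', "offset": offset, "limit": page_size},
        )
        resp.raise_for_status()
        data = resp.json().get("data", [])
        steps.extend(data)
        if len(data) < page_size:
            break
        offset += page_size

    print(f"Retrieved {len(steps)} run steps")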

Back to top

Last runs

You can view the Last runs for each manual or Gherkin test. The information from the last run is the most relevant and can be used to analyze the quality of the areas covered by the test.

To view last runs:

  1. In the Tests tab of Requirements, Features, User Stories, or Application Modules, display the Last runs field.

    The Last runs field contains a color bar where each color represents the statuses of the last runs.

  2. Hover over the bar to view a summary of the test run statuses.

  3. Click the View link to open the details of the last runs in a popup window.

The Last runs field aggregates the last runs of the test in each unique combination of: Release, Milestone, Program, and Environment.

Example: If a test was run on Chrome in Milestone A and Milestone B, and on Firefox in Milestone A and Milestone B, the test's Last Runs field will represent the last run for each combination, four runs in total.

The executed run that is considered last is the run that started latest.

A planned run is counted as a separate last run, even if its Release, Milestone, Program, and Environment match those of an executed run. The planned run is distinguished by its 'Planned' status.

The planned run that is considered last is the planned run that was created latest.
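
If you export run data through the REST API and analyze it outside ALM Octane, the aggregation described above can be approximated as in the sketch below. The field names are assumptions for an exported run record; planned runs, which are tracked separately by creation time, are left out for brevity.

    # Illustrative sketch of the "last run" aggregation: group executed runs by
    # the unique combination of Release, Milestone, Program, and Environment,
    # and keep the run that started latest in each group. Field names are
    # assumptions for a run record exported from ALM Octane.
    def last_runs(runs: list[dict]) -> dict[tuple, dict]:
        latest: dict[tuple, dict] = {}
        for run in runs:
            key = (run.get("release"), run.get("milestone"),
                   run.get("program"), run.get("environment"))
            current = latest.get(key)
            if current is None or run["started"] > current["started"]:
                latest[key] = run
        return latest

    # Example: runs on Chrome and Firefox across Milestone A and Milestone B
    # produce four unique combinations, and therefore four last runs.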

You can filter the last runs according to these test run attributes: Release, Milestone, Program, Environment, Status, Run By. The Last Runs field then includes only the last runs that match the selected filter criteria.

Depending on which Tests tab you are in, apply test run filters as follows:

View How to apply Last Run filters
Requirements > Tests, Backlog > Tests, Team Backlog > Tests, Quality > Tests Use the Run filters in the right-hand Filter pane.
Feature > Tests, User Story > Tests Using the filter bar, select the Last Runs field, and cross-filter by any of these fields: Release, Milestone, Program, Environment, Status, Run By.
Quality > Tests If you select a program in the context filter, you will see Last Runs only for the selected program.

The Dashboard module has several widgets based on the last run, such as Last run status by browser and Last run status by application module. For details, see ALM Octane dashboard.

Back to top

Run with latest version

When you plan a manual test run or a test suite run (for manual tests), by default the test steps executed are the ones that existed at the time of planning. If the test steps change afterward, the changes are ignored in the upcoming runs.

Using the Run with latest version field, you can instruct ALM Octane to use the latest version of the test.

To use the latest test versions in manual tests:

  1. In the manual test's page (MT), click the Details tab. To apply the settings to multiple tests, go to the Tests tab, for example in the Quality module, and select multiple tests.
  2. Click Plan Run.
  3. In the Plan Run dialog box, select Yes in the Run with latest version field. Click Plan.
  4. Open the test's Runs tab, and display the Run with latest version column to see the setting for each of the manual runs.

    When a test is executed and no longer in Planning status, the Run with latest version field is locked.

To use the latest test versions for a test suite:

  1. In the test suite's page (TS), click the Planning tab. To apply settings to multiple test suites, go to the Tests tab, for example in the Quality module, and select multiple test suites.
  2. Click Plan Suite Run.
  3. In the Plan Suite Run dialog box, select Yes in the Run with latest version field. This sets the default for all manual tests added to the test suite. Click Plan.
  4. Open the test suite's Suite Runs tab, and display the Run with latest version column to see the setting for each of the suite runs.

Back to top

High-level management of runs

The Runs tab in the Quality module provides a unified grid of runs across all tests, both planned and executed. Testing managers can filter the grid, and do bulk updates on runs across different tests.

Caution: By default, the grid is available only to workspace admins. If filters are not properly defined and result in a huge number of matching runs, usage of the Runs tab may lead to performance problems for all users. We therefore recommend allowing access to this tab to only a limited number of users.

Back to top

See also: