Run manual and Gherkin tests

This topic describes how to plan test runs and run them using the Manual Runner.

Plan a test run

Planning a test run enables you to organize your tests and run them later without further configuration. If you do not plan a test run, you are prompted for the same details when you begin the run.

Tip: A common practice is to add tests to a test suite and run the entire test suite. You can add the same test to a suite multiple times, using different environments and run parameters. For details, see Plan and run test suites.

To plan test runs:

  1. In the Quality or Backlog module, open the Tests tab.
  2. Select one or more tests in the grid. You can include both manual and Gherkin tests.
  3. Click the Plan Run button.
  4. Provide values for the fields in the Plan Run dialog box. You must select a release. You can also choose the release version. For details, see Run with latest version.
  5. Click Plan. Your configuration is stored, allowing you to run the test in the future without having to configure it. For details, see Run tests.
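
Tip: If you plan runs in bulk from a script rather than from the UI, you can supply the same details through the REST API. The following Python sketch uses the requests library; the manual_runs endpoint, the entity references, and all IDs are assumptions to verify against your REST API reference.

    import requests

    BASE = "https://myserver/api/shared_spaces/1001/workspaces/1002"  # hypothetical IDs

    payload = {
        "data": [{
            "name": "Login test - planned run",           # run name (the UI defaults to the test name)
            "test": {"type": "test", "id": "2001"},        # hypothetical test ID
            "release": {"type": "release", "id": "3001"},  # a release is mandatory
        }]
    }

    # Authentication (session cookies or API keys) is omitted; it depends on your setup.
    response = requests.post(f"{BASE}/manual_runs", json=payload)
    response.raise_for_status()
    print(response.json())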


How tests are run

Depending on the test type, the Manual Runner runs tests differently.

Test type Description
Manual When running a manual test, view the test steps and add details for each step. For each validation step, assign a run status. After the test run, a compiled status is created for the run.
Gherkin When running a Gherkin test, view the scenarios. Assign a run status to each scenario, and then assign an overall run status to the test.

Note: The run statuses you assign when running manual tests (including Gherkin tests and test suites) are called native statuses. When analyzing test run results in the Dashboard or Overview tabs, the native statuses are grouped into the summary statuses Passed, Failed, Requires Attention, Skipped, and Planned to simplify analysis. For details, see Test run statuses.


Run tests

You run manual tests or Gherkin tests from the backlog or in the Quality module.

Tip: You can also run manual tests using:

  • Sprinter, the tool for designing and running manual tests. For details, see Run manual tests in Sprinter.

  • A custom manual runner, such as one created with Python in GitLab. For details, see the Idea Exchange.

To run tests:

  1. In the Quality or Backlog module, select the tests to run.

  2. In the toolbar, click the Run button.

  3. The Run tests dialog box opens, indicating how many tests you are about to run. If any of the selected tests were planned, the dialog box indicates this as well.

    Specify the test run details.

    Field Details
    Run name

    A name for the manual run entity created when running a test.

    By default, the run name is the test name. Provide a different name if necessary.

    Release

    The product release with which the test run is associated.

    The release is set by default according to the following hierarchy (illustrated in the sketch after this procedure):

    • The release in the context filter, provided a single release is selected, and the release is active. If this does not apply:
    • The release in the sidebar filter, provided a single release is selected, and the release is active. If this does not apply:
    • The last selected release, provided the release is active. If this does not apply:
    • The current default release.
    Milestone

    (Optional) Selecting a milestone means that the run contributes to the quality status of the milestone. A unique last run is created and assigned to the milestone for coverage and tracking purposes, similar to a release.

    Sprint

    (Optional) Selecting a sprint means that the run is planned to take place in the time frame of the sprint. Sprints do not have their own unique run results, so filtering Last Runs by Sprint returns only runs that were not overridden in a later sprint.

    To see full run status and progress reports based on sprints, use the Test run history (manual) item type instead of using Last Runs in the Dashboard widgets.

    Backlog Coverage

    (Optional) The backlog items that the run covers. For details, see Test specific backlog items.

    Program

    (Optional) If you are working with programs, you can select a program to associate with the test run. For details, see Programs.

    When you run a test associated with a program, you can select a release connected to the program, or a release that is not associated with any program.

    The program is set by default according to the following hierarchy:

    • Business rule population. If this does not apply:

    • The program in the context filter, provided a single program is selected or only one program exists. If this does not apply:

    • The program populated in the test.

    Note: After you run tests, the Tests tab shows the tests that are connected to the program selected in the context filter, or with runs that ran on this program.

    Environment tags

    The environment tags to associate with the test run.

    By default, the last selected environment tags are used.

    Select other environment tags as necessary.

    Script version

    Select Use a version from another release to run a test using a script version from a different release. Any tests linked to a manual test from an Add Call Step use the same specified version.

    Note:

    • If the selected release contains multiple versions, the latest version in that release is used.

    • If there is no version in the selected release, the absolute latest version is used.

    Data Set

    Select the data set to use when running the test. When you run a test with a data set, the test is run in multiple iterations, with each iteration using different values from the data set in place of predefined parameters. For details, see Use parameters in tests.

    When running manual tests with multiple iterations, you can generate iteration-level reports, so that each iteration is counted separately as a run. For details, see Generate iteration-level reports.

    Draft run

    Select this option if you are still designing the test but want to run the existing steps. After the run is complete, the draft run's results are ignored.

  4. Click Let's Run! to run the tests.

    The Manual Runner opens.

    Tip: During the test run, click the Run Details button to add run details.

    You can switch between the Manual Runner views.

    View Description
    Vertical view Displays the test steps as a list, with your notes below each step.
    Side-by-side view Displays the test steps in one column and your notes in the adjacent column.
  5. Perform the actions described in each step or scenario. For each step or scenario, record your results in the What actually happened? field.

  6. Where relevant, open the Run Status list and assign statuses to the steps.

    The status of the test is automatically updated according to the status you assigned to the steps. You can override this status manually.

  7. If you stop a test in the middle and then click Run again, the existing run continues; previous statuses are not reset. To start from scratch, duplicate the run and run the duplicate instead.
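
The default-release hierarchy described in the Release field above amounts to a fallback chain. The following sketch is illustrative only: the types and the function are hypothetical, and only the precedence order is taken from this topic.

    from dataclasses import dataclass

    @dataclass
    class Release:
        name: str
        is_active: bool

    def default_release(context_releases, sidebar_releases, last_selected, current_default):
        """Return the release to preselect in the Run tests dialog box."""
        # A filter selection applies only if it contains exactly one active release.
        for selection in (context_releases, sidebar_releases):
            if len(selection) == 1 and selection[0].is_active:
                return selection[0]
        # Otherwise, reuse the last selected release, provided it is still active.
        if last_selected is not None and last_selected.is_active:
            return last_selected
        # Finally, fall back to the current default release.
        return current_default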

Modify a run's test type

By default, a run inherits its Test Type value from the test type of the test or suite.

You can modify this value for the run, so that, for example, if a test is End to End and you execute it as part of a regression cycle, you can choose Regression as the run's test type.

Note: The ability to modify a run's test type is defined by the site parameter EDITABLE_TEST_TYPE_FOR_RUN (default true). For details, see Configuration parameters.
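
If you update runs from a script instead of the UI, overriding the test type is an ordinary entity update. The sketch below is hedged: the endpoint, entity name, and the test_type field representation are assumptions, so verify them against your REST API reference.

    import requests

    BASE = "https://myserver/api/shared_spaces/1001/workspaces/1002"  # hypothetical IDs

    # Hypothetical field representation; list fields are typically set by
    # referencing a list node ID. Check your API reference for the actual value.
    body = {"test_type": {"type": "list_node", "id": "list_node.test_type.regression"}}

    # Requires the EDITABLE_TEST_TYPE_FOR_RUN site parameter to be true (the default).
    response = requests.put(f"{BASE}/manual_runs/4001", json=body)  # 4001: hypothetical run ID
    response.raise_for_status()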


Report defects during a run

If you encounter problems when running a test, open defects or link existing defects.

To add defects during a test run:

  1. In the Manual Runner toolbar, click the Test Defects button. If no defects were assigned to this run, clicking Test Defects opens the Add run defect dialog box.

    To associate the defect with a specific step, hover over a step and click the Add Defect button.

  2. In the Add run defect or Add run step defect dialog box, enter the relevant details for the defect.

    Button Details
    Link to defect Select the related existing defects, and click Select.
    Attach Browse for attachments. For details, see Add attachments to a run or step.
    Copy steps and attachments

    Click to add the test run steps and attachments to the defect's Description field. This helps you reproduce the problem when handling the defect.

    • In a manual test, if the defect is connected to a run, all the run's steps are copied. If the defect is connected to a run step, the steps preceding the failed step are copied. If there are parameters in the test, only the relevant iteration is copied.

    • In a Gherkin test, the relevant scenario is copied.

    Customize fields Select the fields to show. A colored circle indicates that the fields have been customized. For details, see Forms and templates.
  3. Several fields in the new defect form are automatically populated with values from the run and test, such as Application module, Environment tags, Milestone, and Sprint. For details, see Auto-populated fields.

  4. Click the Test Defects button to view a list of the defects for this run and test.


Add attachments to a run or step

It is sometimes useful to attach documentation to the run.

To attach items to a run or a step in a run:

  1. Do one of the following, based on where you need to attach information.

    Attach to Description

    Test run

    In the toolbar, click the Run Attachments button.

    Single step

    In the step, click the Add attachments button.

  2. In the dialog, attach the necessary files.

    Tip: Drag and drop from a folder, or paste an image into the Manual Runner.

    The Manual Runner is updated to show the number of attached files.
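
Attachments can also be added from scripts. The sketch below assumes a multipart attachments endpoint that takes an entity part and a content part, and an owner_run reference field; all of these are assumptions to check against your REST API reference.

    import json
    import requests

    BASE = "https://myserver/api/shared_spaces/1001/workspaces/1002"  # hypothetical IDs

    # Hypothetical entity description: an attachment owned by a run.
    entity = {"name": "screenshot.png", "owner_run": {"type": "run", "id": "4001"}}

    with open("screenshot.png", "rb") as f:
        files = {
            "entity": (None, json.dumps(entity), "application/json"),
            "content": ("screenshot.png", f, "image/png"),
        }
        response = requests.post(f"{BASE}/attachments", files=files)
    response.raise_for_status()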


Edit run step descriptions during execution

You can edit run step descriptions during the test run, and choose whether to apply changes to the test as well.

To edit descriptions during test execution:

  1. In the toolbar, click the Edit Description button.

  2. Edit the description as needed, and then continue with the test execution.

  3. When you click Stop or move between test runs in a suite, a dialog box enables you to save the edited descriptions to future runs, or apply them only to the current run.

If a test contains parameters, or a step is called from a different test, you cannot save the edited description to the test. In this case, an alert icon indicates that the edited description applies to the current run only.

Similarly, you can edit a run step description during Gherkin test execution, but the modification cannot be saved to the test. It is only applied to the current run.


End the run and view its results

After performing all steps in the test, stop the run and view its results.

To end the run and view results:

  1. In the Manual Runner window, click the Stop run button.

    Note: If your workspace is set up to automatically measure the duration of a test run, the run timer stops when you click the stop button or close the Manual Runner window.

    If you exit the system without stopping the run, the test run duration is not measured correctly.

  2. If you have not marked all steps or validation steps, the run status is still listed as In Progress. Finish entering and updating step details to change the status.

  3. In the test's Runs tab, click the link for your specific run and view the results in the Report tab.

    The report displays step-by-step results, including the results of all validation steps, and any attachments included in the run.

    The report displays up to 1000 test steps. Use the REST API to retrieve more records. For details, see Create a manual test run report.
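
    Because REST queries are paged, you can fetch the records beyond the display limit in a loop. A minimal Python sketch, assuming a steps sub-resource and the common limit/offset query parameters; verify both in Create a manual test run report.

        import requests

        BASE = "https://myserver/api/shared_spaces/1001/workspaces/1002"  # hypothetical IDs
        RUN_ID = "4001"  # hypothetical run ID

        steps, offset, limit = [], 0, 1000
        while True:
            resp = requests.get(
                f"{BASE}/manual_runs/{RUN_ID}/steps",  # hypothetical sub-resource
                params={"limit": limit, "offset": offset},
            )
            resp.raise_for_status()
            page = resp.json().get("data", [])
            steps.extend(page)
            if len(page) < limit:
                break  # last page reached
            offset += limit

        print(f"Retrieved {len(steps)} steps")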


Run timer

You can measure the duration of manual or Gherkin test runs automatically. The timer starts when the Manual Runner window is launched, and stops when the run ends or the Manual Runner window is closed.

The duration is stored in the test run's Duration field.

The automatic timing option is controlled by the site or shared space parameter AUTOMATIC_TIMING_OF_MANUAL_RUN_DURATIONS_TOGGLE. For details, see Configuration parameters.


Edit a completed run

Depending on your role, you have different options for editing completed manual test runs.

Roles Details
Roles with the Modify Completed Run permission

You can edit a completed run at any stage.

Roles without the Modify Completed Run permission

You can edit a completed run while the Manual Runner is still open. One of the following conditions needs to be met:

  • After completing the run, you remain signed in to the system, and the Manual Runner is open.
  • After completing the run, you have signed out and then signed back in with the same credentials. The Manual Runner remains open.

    In this case, you can edit the run within 30 minutes after it was completed. Your admin can change this period by modifying the CONTINUE_RUN_WITHOUT_PERMISSION_GRACE_PERIOD parameter. For details, see Configuration parameters.

Note:

  • The parameter applies only when you sign out after completing the run. If you remain signed in, and the Manual Runner is still open, you can modify a completed run regardless of the grace period.

  • If the Manual Runner is closed, you cannot reopen it to modify the run, regardless of the grace period.
  • You cannot modify a completed run from the Runs grid or from an individual run's document view. The run fields become read-only.


Last runs

You can view the last runs for each manual or Gherkin test. The information from the last run is the most relevant and can be used to analyze the quality of the areas covered by the test.

To view last runs:

  1. In the Tests tab of Requirements, Features, User Stories, or Application Modules, display the Last runs field.

    The Last runs field contains a color bar where each color represents the statuses of the last runs.

  2. Hover over the bar to view a summary of the test run statuses.

  3. Click the View runs link to open the details of the last runs.

    The Last runs field aggregates the last runs of the test in each unique combination of: Release, Milestone, Program, and Environment tags.

    Example: If a test was run on Chrome in Milestone A and Milestone B, and on Firefox in Milestone A and Milestone B, the test's Last Runs field represents the last run for each combination; four runs in total.
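
    The aggregation amounts to grouping runs by the four-field key and keeping the most recently started run in each group. The following sketch is illustrative; the field names are hypothetical.

        # Group runs by the unique combination of Release, Milestone, Program, and
        # Environment tags, keeping the run that started most recently per group.
        def last_runs(runs):
            latest = {}
            for run in runs:
                key = (run["release"], run["milestone"], run["program"],
                       frozenset(run["environment_tags"]))
                if key not in latest or run["started"] > latest[key]["started"]:
                    latest[key] = run
            return list(latest.values())

    In the Chrome/Firefox example above, this yields four entries: one per browser and milestone combination.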

Definition of last and planned run

The following are definitions of last and planned run.

Terminology Definition
Last run

The run that started most recently.

Planned run

A planned run is counted as a separate last run, even if its Release, Milestone, Program, and Environment tags match those of a completed run.

The planned run is distinguished by its Planned status.

Last planned run

The planned run that was created most recently.

Filtering last runs

You can filter the last runs according to these test run attributes: Release, Milestone, Program, Environment tags, Status, Run By. The Last Runs field then includes last runs from the selected filter criteria.

Depending on which Tests tab you are in, apply test run filters as in the table below.

View How to apply Last Run filters

Requirements > Tests

Backlog > Tests

Team Backlog > Tests

Quality > Tests

Use the Run filters in the Quick Filter sidebar.

Feature > Tests

User Story > Tests

  1. Click the Advanced Filter button in the toolbar.

  2. Select the Last Runs field, and cross-filter by any of these fields: Release, Milestone, Program, Environment tags, Status, or Run By.

Quality > Tests If you select a program in the context filter, you can see Last Runs only for the selected program.

The Dashboard module has several widgets based on the last run, such as Last run status by browser and Last run status by application module. For details, see Dashboard.

Exceptions to the calculation of last runs

In some cases, Last runs ignores field filters that were applied to the runs. This applies to fields in the Tests tab's grids in the Quality and Requirements modules, and to the related widgets.

This exception occurs when an OR condition combines cross-field filters on runs with conditions on other fields, so that only some of the OR branches include such a cross filter.

Examples:

Cross filter Filter applied / not applied
(Last runs.Release: ...) OR (Last runs.Status: ...) Runs filters are applied.
(Last runs.Release: ...) OR (Author ...) Runs filters are not applied. This results in a larger set of runs.
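
The rule behind these examples: the runs cross filters are honored only when every branch of the OR is itself a runs cross filter. A toy sketch of that rule (the tagging scheme is invented for illustration):

    # Cross filters on runs inside an OR are applied only when every branch
    # of the OR is a runs cross filter ("run"); any other branch ("other"),
    # such as a condition on Author, causes the runs filters to be dropped.
    def runs_filters_applied(or_branches):
        return all(kind == "run" for kind in or_branches)

    print(runs_filters_applied(["run", "run"]))    # (Release) OR (Status): applied
    print(runs_filters_applied(["run", "other"]))  # (Release) OR (Author): dropped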


Run with latest version

When you plan a manual test run or a test suite run (for manual tests), by default the run uses the test steps as they existed at planning time. If the test steps change later, the changes are ignored in the upcoming runs.

Using the Run with latest version field, you can instead run the latest version of the test.

To use the latest test versions in manual tests:

  1. In the manual test's page (MT), open the Details tab. To apply the settings to multiple tests, go to the Tests tab, for example in the Quality module, and select multiple tests.
  2. Click the Plan Run button.
  3. In the Plan Run dialog box, set the Run with latest version field to Yes. Click Plan.
  4. Open the test's Runs tab, and show the Run with latest version column to view the setting for each of the manual runs.


    When a test is run and is no longer in Planned status, the Run with latest version field is locked.

To use the latest test versions for a test suite:

  1. In the test suite's page (TS), open the Planning tab. To apply settings to multiple test suites, go to the Tests tab, for example in the Quality module, and select multiple test suites.
  2. Click the Plan Suite Run button.
  3. In the Plan Suite Run dialog box, set the Run with latest version field to Yes. This sets the default for all manual tests added to the test suite. Click Plan.
  4. Open the test suite's Suite Runs tab, and show the Run with latest version column to view the setting for each of the suite runs.


Unified test runs grid

The Runs tab in the Quality module provides a unified grid of runs across all tests, both planned and executed. Testing managers can filter the grid and perform bulk updates on runs across different tests.

Note: By default, the Runs tab is accessible to workspace administrators only. For details on allowing access to other roles, see Roles and permissions.

Display limits

The Runs grid shows the first 2,000 runs that match the filter, according to the sort-by field. If the filter matches a larger number of runs, refine it to make sure that all the runs you are interested in are included.

For grids with over 2,000 runs:

  • You can sort the grid according to the following fields: ID, Started, Creation time, and Latest pipeline run.

  • If you sort by any other field, the grid does not display any results.

  • No results are displayed if the grid is grouped.


Test run statuses

Test runs have two status fields:

  • Native status. The test run's actual status. Can be any of the following values: Passed, Failed, Blocked, Skipped, In Progress, Planned

  • Status. A reduced list of test run statuses, used for widgets and coverage reporting. Can be any of the following values: Passed, Failed, Requires Attention, Skipped, Planned

The Status value is derived automatically from the Native status value. The two fields are mapped as follows.

"Native status" value Mapped "Status" value
Passed Passed
Failed Failed
Blocked Requires Attention
Skipped Skipped
In Progress Requires Attention
Planned Planned
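
If you post-process run data exported from the API, you can reproduce this mapping with a simple lookup, as in this sketch:

    # The "Native status" -> "Status" mapping from the table above.
    SUMMARY_STATUS = {
        "Passed": "Passed",
        "Failed": "Failed",
        "Blocked": "Requires Attention",
        "Skipped": "Skipped",
        "In Progress": "Requires Attention",
        "Planned": "Planned",
    }

    assert SUMMARY_STATUS["In Progress"] == "Requires Attention"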


Generate iteration-level reports

You can set up dashboard widgets to report on manual runs at the iteration level, so that each iteration with parameters is counted separately as a run. Iteration-level reporting is available for the following widgets.

Widget Runs reported
Summary graphs Test run history (manual), Last test runs, Backlog's last test runs
Traceability graph Backlog's last test runs, Test's last test runs

Gherkin tests: Iteration-level reporting is not available for Gherkin tests, because a Gherkin test can have multiple scenario outlines with different sets of iterations. For iteration-level reporting, use BDD specifications instead. For details, see Create and run BDD scenarios.

To enable iteration-level reporting:

  • When configuring a supported widget, select the Count iterations as separate runs check box.

The status of an iteration is updated automatically based on the status you assign to the steps in that iteration when running the test. If you do not assign a status to the steps, the iteration status remains as Planned. For details, see Run tests.

Version compatibility

  • Runs created in versions earlier than 16.1.200 have an iteration count of 1.

  • Planned runs created in versions earlier than 23.4.24 have an iteration count of 1.


Content field

Manual tests, Gherkin tests, and test suites include a field called Content. The meaning of this field depends on its context.

Test entity Content field indicates
Manual test or manual test run Number of steps (regardless of examples or iterations)
Gherkin test or BDD scenario Number of scenarios and scenario outlines
Test suite Number of tests
Suite run Number of runs


See also: