Plan and run manual tests
This topic describes how to plan and run manual test runs using the Manual Runner.
In this topic:
- Plan a test run
- Run tests
- Report defects during a run
- Add attachments to a run or step
- Edit run step descriptions
- End the run and view its results
- Test run statuses
Plan a test run
Planning a test run enables you to set up a list of tests to run in a given release and sprint. You can define the test's configurations, such as the script version, environment, and data set. Planned tests are saved as test runs in the Planned phase.
You can add additional configurations to the test run before it is executed.
To plan test runs:
- In the Quality or Backlog module, open the Tests tab.
- Select one or more tests in the grid.
- Click the Plan Run button.
- In the Plan Run dialog box, fill in the test run details, including Release.
  - Script version: By default, the test run uses the test script that is set as the Latest version at the time the test run is created.
  - To use a version of the script assigned to a release, select a release in the Version from Release field. The latest of the versions assigned to the selected release is used.
  - By default, the test run uses the selected script version as it existed at the time the test run was created. To use the script version as it exists at the time of execution, set the Run with latest version field to Yes.
- Fill in the following additional details for the planned test runs:
  - Run name: A name for the manual run created when running a test. By default, the run name is the test name. Provide a different name if necessary.
  - Release: The release with which the test run is associated.
    The release is set by default according to the following hierarchy:
    - The release in the context filter, provided a single release is selected and the release is active.
    - The release in the sidebar filter, provided a single release is selected and the release is active.
    - The last selected release, provided the release is active.
    - The current default release.
  - Milestone: When you select a milestone, the run contributes to the quality status of that milestone. A unique last run is created and assigned to the milestone for coverage and tracking purposes, similar to a release.
  - Sprint: When you select a sprint, the run is planned to be run in the time frame of that sprint. Sprints do not have their own unique run results, so filtering Last Runs by Sprint returns only runs that were not overridden in a later sprint.
    To see full run status and progress reports based on sprints, use the Test run history (manual) item type in the Dashboard widgets instead of Last Runs.
  - Backlog Coverage: The backlog items that the run covers. For details, see Test specific backlog items.
  - Program: If you work with programs, you can select a program to associate with the test run. For details, see Programs.
    When you run a test associated with a program, you can select a release connected to the program, or a release that is not associated with any program.
    The program is set by default according to the following hierarchy:
    - Business rule population.
    - The program in the context filter, provided a single program is selected or only one program exists.
    - The program populated in the test.
    Note: After you run tests, the Tests tab shows the tests that are connected to the program selected in the context filter, or with runs that ran on this program.
  - Environment tags: The environment tags to associate with the test run. By default, the last selected environment tags are used. Select other environment tags as necessary.
  - Data Set: Select the data set to use when running the test. When you run a test with a data set, the test is run in multiple iterations, with each iteration using different values from the data set in place of predefined parameters. For details, see Use parameters in tests. (A conceptual sketch of how iterations expand appears after this list.)
    When running manual tests with multiple iterations, you can generate iteration-level reports, so that each iteration is counted separately as a run. For details, see Plan and run manual tests.
  - Draft run: Select this option if you are still designing the test but want to run its existing steps. After the run is complete, results from the draft run are ignored.
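To picture how a data set drives iterations, here is a minimal conceptual sketch in Python. The `<param>` placeholder syntax, step texts, and data values are hypothetical illustrations, not the product's actual parameter syntax; see Use parameters in tests for that.

```python
# Conceptual sketch only: how a data set expands a parameterized test into
# iterations. The <param> placeholder syntax and all names are hypothetical.

steps = [
    "Log in as <username>",
    "Open the <module> module",
]

data_set = [
    {"username": "alice", "module": "Quality"},
    {"username": "bob", "module": "Backlog"},
]

def expand(step: str, row: dict) -> str:
    """Substitute each <param> placeholder with the row's value."""
    for name, value in row.items():
        step = step.replace(f"<{name}>", value)
    return step

# Each data set row produces one iteration; with iteration-level reports,
# each iteration is counted separately as a run.
for i, row in enumerate(data_set, start=1):
    print(f"Iteration {i}:")
    for step in steps:
        print("  ", expand(step, row))
```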
Run tests
You run manual tests from the backlog or in the Quality module.
Tip: You can also run manual tests using:
- Sprinter, the tool for designing and running manual tests. For details, see Run manual tests in Sprinter.
- A custom manual runner, such as one created with Python in GitLab. For details, see the Idea Exchange.
To run tests:
- In the Quality or Backlog module, select the tests to run.
- In the toolbar, click the Run button.
- The Run tests dialog box opens, indicating how many tests you are about to run. Fill in any test configuration details that were not set at the planning stage.
- Click Let's Run to run the tests. The Manual Runner opens.
  You can switch between the Manual Runner views:
  - Vertical view: Displays the test steps as a list. The notes you make appear below each step.
  - Side-by-side view: Displays the test steps in the first column, with your notes in the next column.
- Perform the steps as described in each step or scenario. For each step or scenario, record your results in the What actually happened? field.
- Where relevant, open the Run Status list and assign statuses to the steps.
  The status of the test is automatically updated according to the statuses you assigned to the steps. You can override this status manually. (A conceptual sketch of this roll-up appears after this procedure.)
- If you stop a test in the middle and then click Run again, the existing run continues; previous statuses are not reset. If you need to start from scratch, duplicate the run and run the new run instead.
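The roll-up from step statuses to a run status can be pictured roughly as follows. The precedence shown here is an illustrative assumption (a failure outweighs a blocked step, and unmarked steps keep the run In Progress, consistent with the note in End the run and view its results), not the product's documented algorithm.

```python
# Illustrative sketch only: one plausible precedence for rolling step
# statuses up to a run status. The exact rules are an ASSUMPTION.

def roll_up(step_statuses):
    """Derive a run status from a list of step statuses (None = unmarked)."""
    if not step_statuses or None in step_statuses:
        return "In Progress"      # unmarked steps keep the run in progress
    if "Failed" in step_statuses:
        return "Failed"
    if "Blocked" in step_statuses:
        return "Blocked"
    if "Passed" in step_statuses:
        return "Passed"
    return "Skipped"              # every step was skipped

print(roll_up(["Passed", None, "Passed"]))  # In Progress
print(roll_up(["Passed", "Failed"]))        # Failed
```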
Report defects during a run
If you encounter problems when running a test, open defects or link existing defects.
To add defects during a test run:
- In the Manual Runner toolbar, click the Test Defects button. If no defects were assigned to this run, clicking Test Defects opens the Add run defect dialog box.
  To associate the defect with a specific step, hover over the step and click the Add Defect button.
- In the Add run step defect dialog box, enter the relevant details for the defect:
  - Link to defect: Select the related existing defects, and click Select.
  - Attach: Browse for attachments. For details, see Add attachments to a run or step.
  - Copy steps and attachments: Click to add the test run steps and attachments to the defect's Description field. This helps you reproduce the problem when handling the defect.
    - In a manual test, if the defect is connected to a run, all the run's steps are copied. If the defect is connected to a run step, the steps preceding the failed step are copied. If there are parameters in the test, only the relevant iteration is copied.
    - In a Gherkin test, the relevant scenario is copied.
  - Customize fields: Select the fields to show. A colored circle indicates that the fields have been customized. For details, see Forms and templates.
  Several fields in the new defect form are automatically populated with values from the run and test, such as Application module, Environment tags, Milestone, and Sprint. For details, see Auto-populated fields.
- Click the Test Defects button to view a list of the defects for this run and test.
Add attachments to a run or step
It is sometimes useful to attach documentation to the run.
To attach items to a run or a step in a run:
- Do one of the following, based on where you need to attach the information:
  - Test run: In the toolbar, click the Run Attachments button.
  - Single step: In the step, click the Add attachments button.
- In the dialog box, attach the necessary files.
  Tip: Drag and drop files from a folder, or paste an image into the Manual Runner.
  The Manual Runner is updated to show the number of attached files.
Edit run step descriptions
You can edit run step descriptions during the test run, and choose whether to apply changes to the test as well.
To edit descriptions during a test run:
- In the toolbar, click the Edit Description button.
- Edit the description as needed, and then continue with the test execution.
- When you click Stop or move between test runs in a suite, a dialog box lets you save the edited description to future runs, or apply it to the current run only.
  If a test contains parameters, or a step is called from a different test, you cannot save the edited description to the test. In this case, an alert icon indicates that the edited description applies to the current run only.
  Similarly, you can edit a run step description during Gherkin test execution, but the modification cannot be saved to the test. It applies only to the current run.
End the run and view its results
After performing all steps in the test, stop the run and view its results. You can also link runs to existing defects after running a test.
To end the run and view results:
- In the Manual Runner window, click the Stop run button.
  Note: If your workspace is set up to automatically measure the duration of a test run, the run timer stops when you click the Stop run button or close the Manual Runner window. If you exit the system without stopping the run, the test run duration is not measured correctly.
- If you have not marked all steps or validation steps, the run status is still listed as In Progress. Finish entering and updating step details to change the status.
- In the test's Runs tab, click the link for your specific run and view the results in the Report tab.
  The report displays step-by-step results, including the results of all validation steps, and any attachments included in the run.
  The report displays up to 1000 test steps. Use the REST API to retrieve more records (a sketch follows below). For details, see Create a manual test run report.
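For runs with more than 1000 steps, retrieval over the REST API is essentially a paging loop. This is a minimal sketch: the base URL, resource path, page size, and authentication header are assumptions for illustration only; see Create a manual test run report for the documented endpoint and fields.

```python
# Minimal sketch: page through run step records over REST.
# The endpoint, resource path, and auth header are ASSUMPTIONS for
# illustration; consult "Create a manual test run report" for the real API.

import requests

BASE = "https://myserver/api/shared_spaces/1001/workspaces/1002"  # hypothetical
HEADERS = {"Authorization": "Bearer <token>"}                     # hypothetical auth

def fetch_all_run_steps(run_id: int, page_size: int = 1000):
    """Collect every step record for a run, one page at a time."""
    steps, offset = [], 0
    while True:
        resp = requests.get(
            f"{BASE}/runs/{run_id}/steps",  # hypothetical resource path
            headers=HEADERS,
            params={"limit": page_size, "offset": offset},
        )
        resp.raise_for_status()
        batch = resp.json().get("data", [])
        steps.extend(batch)
        if len(batch) < page_size:          # last (short) page reached
            return steps
        offset += page_size

steps = fetch_all_run_steps(42)
print(f"Retrieved {len(steps)} steps")
```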
Test run statuses
Test runs have two status fields:
- Native status: The test run's actual status. Possible values: Passed, Failed, Blocked, Skipped, In Progress, Planned.
- Status: A reduced list of test run statuses, used for widgets and coverage reporting. Possible values: Passed, Failed, Requires Attention, Skipped, Planned.
The Status value is derived automatically from the Native status value. The two fields are mapped as follows.
"Native status" value | Mapped "Status" value |
---|---|
Passed | Passed |
Failed | Failed |
Blocked | Requires Attention |
Skipped | Skipped |
In Progress | Requires Attention |
Planned | Planned |
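If you post-process exported run data, the table above translates directly into a lookup. A minimal sketch:

```python
# The Native status -> Status mapping from the table above, as a lookup.
NATIVE_TO_STATUS = {
    "Passed": "Passed",
    "Failed": "Failed",
    "Blocked": "Requires Attention",
    "Skipped": "Skipped",
    "In Progress": "Requires Attention",
    "Planned": "Planned",
}

print(NATIVE_TO_STATUS["Blocked"])  # Requires Attention
```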