Analyze test failures
In a specific pipeline run, use the Tests tab to learn more about automated tests that were part of the run. The following section describes how to use the test analysis tools to pinpoint the source of test failures.
In the Pipelines > Tests tab, you can see how many of your tests ran successfully, how many were skipped or unstable, and how many failed.
You can learn more about the failures, what may have caused them, and what application modules they affect.
After analyzing failures, you might decide to edit an automated test, roll back a committed change, create a defect for an area in your application, or even extend your project's timeline or reduce the planned content.
In the Tests tab, you can do the following and more:
Filter the display to include only the test runs that interest you.
Find SCM commits related to failing test runs.
Make sure all failures are being investigated by specific users (Assigned to field).
Hold committers accountable for the effects of their changes (Related users widget).
Identify problematic test runs, which have not been consistently successful over time.
- Open the Pipelines module and select the Pipelines tab.
- Select a pipeline in the toolbar. Use the right and left arrows to scroll through the tabs.
- Hover over the Run box to select a run from the dropdown, or use the right and left arrows to navigate between the runs.
- Click the Tests tab to see the tests in the selected pipeline run.
To benefit from all of the analysis abilities ALM Octane offers, you must:
- Assign items to application modules
- Create test assignment rules
- Use commit messages to associate SCM changes with ALM Octane stories. For details, see Track commits to your SCM system.
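To give a sense of what such a commit message looks like, here is a minimal sketch. The work item IDs and the exact patterns (such as "user story #&lt;ID&gt;" or "defect #&lt;ID&gt;") are illustrative assumptions; see Track commits to your SCM system for the patterns your space is configured to recognize.

```shell
#!/bin/sh
# Sketch: committing with a message that references ALM Octane work items,
# so the SCM integration can link the change to a story or defect.
# The IDs and patterns below are hypothetical examples.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git -c user.name=dev -c user.email=dev@example.com commit -q --allow-empty \
  -m "defect #2045: fix NPE in login validation, user story #1012: add retry"
# The commit subject now carries the IDs that ALM Octane scans for:
git log -1 --pretty=%s
```

When the pipeline run picks up this commit, ALM Octane can relate the change to the referenced backlog items and to any test failures in the run.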
The Tests tab lists all of the test runs that ran as part of this pipeline run.
Analyze the test run details
For each run you can find information such as:
Test name, class, and package.
The build number in which the test ran.
The ID and name of the build job that ran the test.
Failure age: The number of builds for which the test run has been failing consecutively.
Tip: If a test has been failing consecutively, it is probably better to analyze its first failure. To navigate to that failure, right-click on the test and select Go to pipeline run where the test first failed.
The test run history, in the Past status column. The blue dot beneath the run indicates the one being viewed.
A tooltip on Past status shows the 5 latest runs of this test, across all pipelines. To view all the latest runs of a test across pipelines, select the test and click Show latest runs on all pipelines in the Preview pane.
Any tags you added to the test or test run in ALM Octane.
Add columns to the grid to see information that is not displayed by default.
You can navigate between runs to help troubleshoot problems, using the Run selector in the upper right corner. Note that the Overview tab will always show the latest run.
Apply a filter
Filter the display to include only the test runs that interest you:
Click the Assigned to me button to filter tests already assigned to you.
Use the Filter panel on the right to quickly focus on specific issues that are of interest to you, such as failed test runs or regression problems.
Use the toolbar filter options:
| What can I do? | How? |
|---|---|
| Separately address unstable, skipped, or failed test runs. | Filter by Status. |
| View only runs of tests that you own. | Click My tests. |
| View test runs whose failures are related to changes committed by selected users. | Filter the test runs by Related committers. |
| Separate newly failing tests from ones that failed in previous pipeline runs as well. | Filter by New Failure. |
Tip: Click Show filter pane to see which properties are included in the filter. This helps you understand which failures are being displayed and which are hidden.
Use clustering to analyze test run failures
Group test runs by cluster to bring together failed test runs that failed for a similar reason. ALM Octane clusters failed test runs as similar failures based on the test package hierarchy, stack trace, exception, the job that ran the test, and the run in which the test started failing.
To see your tests grouped by clusters, click Show clusters in the Tests tab. To clear this grouping, click the Group By button and clear all groups.
The number of clusters is a better indication of how many problems need fixing than the number of failed test runs. When analyzing failing tests in your pipeline run, start with the largest clusters: fixing one failed run in such a cluster is likely to fix all failures in the cluster and significantly reduce the number of failures in the next run.
Admins can define the number of tests to be used as a threshold for test clustering analysis using the CLUSTERING_MAX_TESTS_THRESHOLD site configuration parameter. If the number of test failures specified in this parameter is exceeded during a pipeline run, ALM Octane does not analyze clustering. For details, see CLUSTERING_MAX_TESTS_THRESHOLD.
In each pipeline run, clusters are named C1, C2, ... Cn, starting with the cluster with the highest number of failed test runs. Test runs from different pipeline runs with the same cluster name are not related to each other.
When assigning a failed run to someone for handling, you can assign all of the failures in the same cluster to the same person.
For more details on analyzing failures in the Test Runs tab, see Expand your test failure analysis.
Assign tests to application modules
You can assign automated tests to application modules to view the test results in context. You can then use these results to analyze the progress and quality of your release and product.
Select one or more tests in the Tests tab, and click Assign to Application Module. Note that this assigns the tests themselves, not the test runs.
Historic automated runs store the owner and application module information from the time of the initial injection. Assigning a different owner to an existing test affects only new run instances of this test, not former runs.
For details on working with application modules, see Quality.
Assign users to investigate failures
Users can assign themselves or others to investigate a build or test run failure. ALM Octane provides information about each failure to help the investigation.
If you assign someone to a pipeline step's latest build or a test's latest run, the assignment remains on subsequent failures. The assignment is cleared automatically when the build or test run passes successfully.
To assign a user to investigate a failure
In the Builds or Test Runs tab, select one or more failed builds or failed test runs and do one of the following:
| What can I do? | How? |
|---|---|
| Assign yourself to the selected builds or test runs. | Click I'm on it. |
| Assign someone else to the selected builds or test runs. | Click Assign to and select a user. At the top of the list, ALM Octane suggests recommended users based on their recent commits, areas of expertise, or previous activity. Select whether to send an email to that user. If you add a comment (test run only), it's included in the email. |
| Clear a user from assignment. | Click Assigned to and then click Clear Assignment. |
| Indicate that you completed your investigation and are not related to any of the failures in the pipeline run. | Click Not me in the upper right corner, near the Run number. This clears any Assigned to assignment you had on any failed builds or test runs in this pipeline run. Note that Not me is available in the Builds and Tests tabs on the main Pipeline landing page, but not in the Pipeline run form. |
Tip: When assigning someone to investigate a test run failure, ALM Octane lets you choose to assign the same person to all of the test runs that failed for a similar reason.
When a pipeline fails, you want someone to investigate as soon as possible, and get the pipeline back on track.
If your pipeline is configured to send pipeline failure notifications to all committers, you can require all committers to respond.
Each committer must open the Tests page, and use the information available there to determine whether their change caused the failure.
The CI owner should check as well, in case the failures are caused by an environment issue that they are responsible for.
A user can respond with I'm on it for each relevant build or test run failure, or with Not me for the entire pipeline run.
Tip: If, as a user, you think you might be responsible for a failure or able to fix it, click I'm on it. If, after further investigation, you decide it's not your issue to fix after all, you can clear the Assigned to assignment or set Not me for the pipeline run.
Select a test run and use the data in the right panel to investigate the failure.
Check the build context of the test run in the Preview tab
For example, you can drill down to the following details:
When investigating a failure of a test run that is not from the latest pipeline run, a quick glance shows the test run status from other pipeline runs. Hover over a bar to show information about the pipeline, such as the run number, duration, and date.
If a subsequent run passed successfully, you probably do not need to invest more in this investigation.
Note: This is different from the widget on the test run itself, which displays the status of past runs.
| Field | Description |
|---|---|
| Build | Displays the overall status of tests in this build. This provides a build context for you while you are looking at a specific test run. For example, if most tests in the build are failing, the issue may be at the build level, and there is no need to analyze this specific failure. |
| Cluster | The number of additional tests that failed for the same reason. Click to view all similar failed runs together and handle them in bulk. |
View error data from the failure, such as error message and stack trace, in the Report tab.
In the stack trace, you may see highlighted file names. These files may be related to the failure, as they contain changes committed in this build.
If available, view more detailed reports:
Click Testing tool report to open an external report with more details about the failure, from UFT One or LoadRunner Cloud, for example.
Click Open build report or Open test run report to open a report stored on your CI server or elsewhere. This link is provided if a template was configured by an admin for such links on the pipeline step. For details, see Label pipeline steps by job type.
View users who are assigned, recommended, or related to a pipeline run and test failures
The Related Users tab in the right panel helps you decide which users are most relevant to handle issues in a selected pipeline run or test failure, depending on the context.
If you have not selected a test, you see all the commits which are related to the pipeline run. When you select a test, you see two areas:
The upper pane provides details of users who are currently assigned to handle the test failure, or who are recommended by ALM Octane for investigating the issue. For example, a user may have clicked I'm on it on a failed test, handled a similar failure in a previous run, or they may have changed a file that was found in the run’s stack trace.
You can clear assignments or assign recommended users by clicking the corresponding buttons. ALM Octane will continually learn how to improve its recommendations, based on your feedback.
The lower pane provides details of other related users and their relevant commits, grouped by user. In each instance, ALM Octane displays the reason why a user or commit is related to the current pipeline run or test. The commits are listed in order of their relevance to the selected test.
In the lower pane you can filter information based on file name, commit message, or user name to help you locate relevant information easily.
If a test has been failing consecutively, related commits include commits from this pipeline run and from the pipeline run in which the test started failing. For commits from a previous build, the build number is displayed.
A risk icon indicates that a commit includes changes to hotspot files. For details, see Identify hotspots in your code.
Tip: You can configure ALM Octane to automatically assign recommended users after a certain number of consecutive failures. This can be done per pipeline step, and requires related permissions.
- In the Pipelines tab, click a pipeline. Click the horizontal ellipsis and select View from the dropdown. The pipeline page opens.
- Select the Topology tab.
- Click the Configuration button on a pipeline step, and open the Auto-assignment tab.
- Select the checkbox to enable auto-assignment, and specify the number of failures required before a user is automatically assigned.
When a user is automatically assigned to a failed run, they receive an email with the relevant details.
For details, see Explore a pipeline's graphical representation.
Track test behavior across all pipelines
When you only look at one specific execution context, it can be difficult to analyze failures. Is a test unstable? Is it a regression in this release only? Is it failing in a particular pipeline configuration?
Test failure analysis is typically done in the context of a specific pipeline. However, you may be running similar tests in multiple parallel pipelines. For example, you might have one pipeline for each environment configuration, or a pipeline for each release, or a quick pipeline for sanity testing and longer pipelines with additional tests.
To see the latest runs of a particular test across all pipelines, select the test and click Show latest runs on all pipelines in the Preview pane. This will give you a more comprehensive view of where the test is failing, and where you need to focus attention when analyzing test failures.
Find tests that failed continuously in all pipeline runs
Suppose you committed a change yesterday and the pipeline run status was unstable, and you want to see if you are responsible for the failure.
Within the Tests tab, add the column Failing in all future runs, which shows whether a test has failed continuously in all pipeline runs after your push. If the test succeeded even once since the pipeline run where you committed the change, you're probably not responsible for this failure. If it has failed continuously, you know you need to check your commit for problems.
The value of the Failing in all future runs field is calculated relative to the pipeline run that you are currently looking at. You can filter test runs by this field, to reduce the number of relevant runs that should be investigated. We recommend using this together with the New failures filter, to greatly reduce the number of tests you need to investigate.
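The logic described above can be sketched as a small script. This is an illustrative sketch only, not ALM Octane code: relative to the run you are viewing, the flag holds only if the test never passed in any later pipeline run. The statuses are made-up sample data.

```shell
#!/bin/sh
# Illustrative sketch of the "Failing in all future runs" flag.
failing_in_all_future_runs() {
  # $@ = statuses of the test in the runs after the one being viewed
  for status in "$@"; do
    # A single pass in a later run clears the flag.
    [ "$status" = "Passed" ] && { echo false; return; }
  done
  echo true
}

failing_in_all_future_runs Failed Failed Failed   # -> true: check your commit
failing_in_all_future_runs Failed Passed Failed   # -> false: likely not your change
```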
Note: This field requires that the default Elasticsearch scripting settings be enabled.
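For reference, the scripting settings in question live in `elasticsearch.yml`. The fragment below is a sketch that mirrors the Elasticsearch defaults (inline scripts allowed); if an admin has tightened these settings, the field stops being calculated. Consult the documentation for your Elasticsearch version for the exact setting names it supports.

```yaml
# elasticsearch.yml -- sketch of the default-equivalent script settings
# that the "Failing in all future runs" field relies on.
script.allowed_types: inline
```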
Create a defect
If you are looking at a pipeline's latest run, you can create a defect related to one or more of the failed test runs:
Select the relevant rows in the grid and click Report Defect.
The new defect is linked to the runs you selected and contains a link to this pipeline run's Tests tab.
Add additional columns to the grid of failed test runs
| Add this column | to... |
|---|---|
| Pipeline run | navigate to the pipeline and to a particular run of the pipeline. |
| Application modules | see which application modules are currently associated with the tests that failed. |
| Problem | see whether the test has a history of unsuccessful runs. For details, see Identify problematic tests. |
| Assigned to | see who's investigating this failed run. |
| Defects | see defects that are linked to the failed run. |