Analyze failures in a pipeline run
If a pipeline run includes failed builds or failed automated test runs, open the pipeline run's Failure Analysis tab.
Here you can learn more about the failures, what may have caused them, and what application modules they affect.
Using the Failure Analysis tab, you can do the following and more:
- Find SCM commits related to the failing test runs.
- Make sure all failures are being investigated by specific users (Who's on it? field).
- Hold committers accountable for the effects of their changes (Related users widget).
- Identify problematic tests that have not been consistently successful over time.
- Filter the display to include only the test runs that interest you.
To open the Failure Analysis tab:
- Open the Pipelines module and select the Pipelines tab.
- Select a pipeline from the list on the left.
- Do one of the following:
  - To open the Failure Analysis tab of the latest pipeline run, click Failure Analysis.
  - To open a previous run, click the pipeline's ID to open the pipeline. In the Runs tab, click the ID of a specific run. Select the Failure Analysis tab.
To benefit from all of the analysis abilities ALM Octane offers, you must:
- Assign items to application modules
- Create rules to assign automated tests
- Use commit messages to associate SCM changes with ALM Octane stories. For details, see Track changes committed to your Source Control Management system.
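For example, a commit message that references a backlog item might look like the following. This is a sketch only: the story ID 12345 is hypothetical, and the exact patterns ALM Octane recognizes (such as "user story #<ID>" or "defect #<ID>") depend on your configuration.

```shell
# Create a throwaway repo and commit with a message that references an
# ALM Octane user story. "user story #12345" is a hypothetical example;
# check your ALM Octane settings for the patterns it actually recognizes.
git init --quiet commit-demo && cd commit-demo
git -c user.name=dev -c user.email=dev@example.com \
    commit --allow-empty -m "user story #12345: add retry to the login flow"
git log -1 --pretty=%s   # prints the commit subject line
```

When ALM Octane pulls commits from your SCM, it parses the message and links the commit to the matching story, which is what powers the related-commits analysis described below.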
The top of the Failure Analysis tab displays several widgets that offer insight into your build and product quality. For example:
Failed Test Runs.
Displays the total number of tests that failed in this pipeline run, broken down into newly failed runs and runs that were already failing in previous runs.
Click on the number of New or Still failing test runs to filter the Failed Test Runs tab accordingly.
Shows a breakdown of automated tests that have not been consistently successful, according to the type of problem. For more detail, see Identify problematic tests.
Click on the number of tests in a specific category to show only those tests in the Failed Test Runs tab.
Related users. Shows the people who need to be aware of the failures in this pipeline run:
People whose commits were included in this pipeline run. An additional indication is provided if a commit seems to be specifically related to the failure.
People assigned to investigate build or test run failures in this pipeline run. These are the users listed in the Who's on it? fields of the failures in this pipeline run. For more details, see Assign someone to investigate failures.
Click someone in this widget to filter the list of failures. The list displays only the failures where this person is in the Who's on it? field, or test run failures related to this person's commits.
You can learn more about each committer from their badge:
- This user is assigned in the Who's on it? field of a build or test run failure in this pipeline run.
- This user's commits seem to be related to a build or test run failure. For more details, see View commits related to your pipeline run and to test failures.
- This user responded Not me for this pipeline run. For more details, see Assign someone to investigate failures.
- No badge: this user has not responded yet.
The Failed Builds tab lists all of the builds that failed as part of this pipeline run. Each failed build is the result of a specific pipeline step.
Tip: Some builds might not be included.
DevOps admins can configure hiding build failures resulting from specific pipeline steps. For details, see Configure steps: Ignore or hide results of specific steps.
To check whether a step's builds are hidden, look at the step in the pipeline's Topology tab.
For each failed build, you can see the build number, duration, build run history, the percentage of failed tests in this build that are being handled, and so on.
Each failed build includes a link to the build log (also known as the console log) on the CI server. If you are working with a Jenkins server, expand the failed build's row to see the build log link. This opens a local copy of the build log, in which you can add rules that map specific log messages to build failure classifications. For details, see Classify build failures.
If configured, a custom report link to a build report stored externally is also available. For details on configuring custom build report links, see Label and configure pipeline steps.
Set the Who's on it? field to assign someone to investigate this build failure. This is useful in case the build failed for reasons that are not related to a specific test. For example, the environment may need attention, or the build may have failed during compilation. For details, see Assign someone to investigate failures.
A build failure classification label on a failed build indicates the type of issue that caused the failure. For example, Test code issue, Code issue, and Environment issue.
Use the build failure classification to determine who needs to handle the pipeline run failures and what issues need to be addressed.
You can classify builds manually to document the result of an investigation. For Jenkins builds, you can create rules that enable ALM Octane to automatically classify your builds based on the build logs. For details, see Classify build failures.
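As an illustration of the idea only (ALM Octane's actual rules are configured in the UI, and their syntax is not shown here), mapping log messages to classifications amounts to pattern matching over the build log. The function name, patterns, and classification labels below are hypothetical:

```shell
# Illustrative sketch, not ALM Octane's rule syntax: match build-log
# messages against patterns and emit a failure classification.
classify_log() {
  if grep -qiE 'OutOfMemoryError|No space left on device' "$1"; then
    echo "Environment issue"
  elif grep -qiE 'Compilation failure|cannot find symbol' "$1"; then
    echo "Code issue"
  else
    echo "Unclassified"
  fi
}

printf 'ERROR: java.lang.OutOfMemoryError: Java heap space\n' > build.log
classify_log build.log   # prints: Environment issue
```

The value of such rules is that recurring infrastructure noise gets labeled automatically, so investigators can focus on genuinely new code issues.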
The build failure classification is also presented at the pipeline run level. You can see the number of failed builds in this pipeline run, broken down by type of failure.
To view only failed builds with a specific build failure classification, do one of the following:
- In the build issues widget at the top of the Failure Analysis tab, click the number for a specific classification.
- Add a cross-filter using the CI build > Build fail classification field.
To understand the most common reasons for your build failures over time:
Add the Pipeline build failure classification cumulative widget to your pipeline's dashboard. Hover over a specific run to see its classification details. This widget shows you whether your network or database is causing frequent problems, tests are changing too often, code errors are being introduced, and so on.
- By default, this widget is set to show builds from the past week and to hide Unclassified builds.
- The widget first opens in the Stacked view, showing the number of builds with each classification. In the widget, click Expanded to switch views and see the percentage of builds with each classification instead of the number.
The Failed Test Runs tab lists all of the test runs that failed as part of this pipeline run.
Group test runs by cluster to group together test runs that failed for a similar reason. ALM Octane clusters failed test runs as similar failures based on test package hierarchy, stack trace, exception, the job that ran the test, and the run in which the test started failing. (technical preview)
The number of clusters gives an idea of the number of problems that need fixing. This is a better indication than the number of failed test runs.
When analyzing failing tests in your pipeline run, start with the large clusters. Fixing one failed run in this cluster is likely to fix all failures in the cluster and significantly reduce the number of failures in the next run.
- If a pipeline run includes more than 100 failed test runs, the runs are not clustered.
- In each pipeline run, clusters are named C1, C2, and so on, according to cluster size (cluster C1 has the largest number of failed test runs). Test runs from different pipeline runs with the same cluster name are not related to each other.
When assigning a failed run to someone for handling, you can assign all of the failures in the same cluster to the same person.
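The value of counting clusters rather than failed runs can be shown with a toy example. Assuming a hypothetical export of failed runs as tab-separated "test name, exception" pairs (ALM Octane's real clustering also weighs package hierarchy, stack trace, job, and the run where the failure started), counting distinct failure signatures approximates the number of underlying problems:

```shell
# Toy illustration, not ALM Octane's actual clustering algorithm:
# group a hypothetical list of failed runs by exception type and
# count each group, largest first.
printf 'LoginTest\tNullPointerException\nCartTest\tNullPointerException\nCheckoutTest\tTimeoutException\n' \
  | cut -f2 | sort | uniq -c | sort -rn
# Three failed runs, but only two distinct signatures: likely two problems.
```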
For each failed run you can find information such as:
- Test name, class, and package.
- The build number in which the test run failed.
- The ID and name of the build job that ran the test.
- The number of builds for which the test run has been failing consecutively.
  Tip: If a test has been failing consecutively, it is probably better to analyze its first failure. To navigate to that failure, right-click the test and select Go to pipeline run where the test first failed.
- The test run history, in the Past status column. The tallest column represents the run you are currently looking at.
- Any tags you added to the test or test run in ALM Octane.
Add columns to the grid to see information that is not displayed by default.
Filter the display to include only the test runs that interest you:
| What can I do? | How? |
| --- | --- |
| View only runs of tests that you own. | Click My tests. |
| View test runs whose failures are related to changes committed by selected users. | Filter the test runs by Related committers. |
| Filter failures to view tests that are continuously failing, tests that failed for the first time in this pipeline run, or failures related to specific users. | Click different parts of the analysis widgets at the top of the page. For details, see Failure analysis widgets. |
Tip: Click Show filter pane to see which properties are included in the filter. This helps you understand which failures are being displayed and which are hidden.
For more details on analyzing failures in the Failed Test Runs tab, see Expand your test failure analysis.
Users can assign themselves or others to investigate a build or test run failure. ALM Octane provides information about each failure to help the investigation.
If you assign someone to a pipeline step's latest build or a test's latest run, the assignment remains on subsequent failures. The assignment is cleared automatically when the build or test run passes successfully.
To assign a user to investigate a failure:
In the Failed Builds or Failed Test Runs tab, select one or more failures and do one of the following:
| What can I do? | How? |
| --- | --- |
| Assign yourself to the selected builds or test runs. | Click I'm on it. |
| Assign someone else to the selected builds or test runs. | Click Who's on it? and select a user. Select whether to send an email to that user. If you add a comment (test run only), it is included in the email. |
| Clear the user from Who's on it? | Click Who's on it? and then click Clear Assignment. |
| Indicate that you completed your investigation and are not related to any of the failures in the pipeline run. | Click Not me in the pipeline run's Failure Analysis tab. This clears any Who's on it? assignment you had on any failed builds or test runs in this pipeline run. |
Tip: When assigning someone to investigate a test run failure, assign the same person to all of the test runs that failed for a similar reason. Right-click a failed test run and select Show test run's cluster (similar failures) to view all similar failed runs and assign them in bulk.
(Test clustering is provided as a technical preview.)
When a pipeline fails, you want someone to investigate as soon as possible, and get the pipeline back on track.
If your pipeline is configured to send pipeline failure notifications to all committers, you can require all committers to respond.
Each committer must open the failure analysis page, and use the information available there to determine whether their change caused the failure.
The CI owner should check as well, in case the failures are caused by an environment issue they are responsible for.
A user can respond with I'm on it for each relevant build or test run failure, or with Not me for the entire pipeline run.
Tip: If, as a user, you think you might be responsible for a failure or able to fix it, click I'm on it. If after further investigation you decide it's not your issue to fix after all, you can clear the Who's on it? assignment or set Not me for the pipeline run.
As the pipeline owner, you can track responses in the Related Users widget in the pipeline run's Failure Analysis tab. For details, see Related Users.
Watch a video: A Day in the Life of a DevOps Developer.
Select a test run failure and use the data in the right panel to investigate the failure.
Check the build context of the test run in the Preview tab
Next Runs. When investigating a failure of a test run that is not from the latest pipeline run, a quick glance shows the test run status from the following pipeline runs. Tooltips on each run show the pipeline run number and date.
If a subsequent run passed successfully, you probably do not need to invest more in this investigation.
Note: This is different from the widget on the test run itself, which displays the status of past runs .
Build. Displays the overall status of tests in this build. This provides a build context for you while you are looking at a specific test run. For example, if most tests in the build are failing, the issue may be at the build level, and there is no need to analyze this specific failure.
Failure. The build failure classification. If the build failed for environment reasons, there is probably no need to analyze specific test failures.
Click the status icon to view more details:
Cluster: The number of additional tests that failed for the same reason. Click to view all similar failed runs together and handle them in bulk.
View error data from the failure, such as error message and stack trace, in the Report tab.
In the stack trace, you may see highlighted file names. These files may be related to the failure, as they contain changes committed in this build.
If available, view more detailed reports:
Click Testing tool report to open an external report with more details about the failure, from UFT or StormRunner Load, for example.
Click Custom build report to open a build report stored on your CI server or elsewhere. This link is provided if a DevOps admin configured a template for such links on the pipeline step. For details, see Label and configure pipeline steps.
The Commits tab in the right panel displays details about committed changes related to this failure. Commits most closely related to the failure are listed first.
Each commit includes a reason that explains why ALM Octane considers it related to the failure (for example, "error in stacktrace"). A tooltip on the reason indicates the degree of confidence in this association: the more confident ALM Octane is that the commit is related to the failure, the higher the percentage displayed in the tooltip. Commits with a higher degree of confidence are displayed higher on the list than those with lower confidence.
Tip: A dotted line around a committer indicates that the related commit is from a previous build.
A commit is considered to be related to a test run failure if one of the following conditions is met:
The commit includes changes to files listed in the test failure's stack trace.
The commit is related to a feature or application module that is assigned to the test.
Commits are related to features via the stories mentioned in the commit message.
Commits are related to application modules via the stories mentioned in the commit message, and the features that those stories belong to.
If this test has been failing consecutively, related commits include commits from this pipeline run and from the pipeline run in which the test started failing. For commits from a previous build, the build number is displayed.
You can filter the commits in this view:
- All commits (this pipeline run). All of the commits, not only the ones ALM Octane related to the selected test run failures.
- My commits (this pipeline run). All of your commits, not only the ones related to the selected test failures.
- Related commits (previous runs too). Only the commits related to the selected test run failures.
A risk icon indicates that a commit includes changes to hotspot files. For details, see Identify risky commits and features at risk.
Create a defect
If you are looking at a pipeline's latest run, you can create a defect related to one or more of the failed test runs:
Select the relevant rows in the grid and click Report Defect.
The new defect is linked to the runs you selected and contains a link to this pipeline run's Failure Analysis tab.
Add additional columns to the grid of failed test runs:

| Add this column | to... |
| --- | --- |
| | navigate to the pipeline and to a particular run of the pipeline. |
| Application modules | see which application modules are currently associated with the tests that failed. |
| Problem | see whether the test has a history of unsuccessful runs. For details, see Identify problematic tests. |
| Who's on it? | see who's investigating this failed run. |
| | see defects that are linked to the failed run. |