Analyze tests

In a specific pipeline run, use the Tests tab to learn more about the automated tests that were part of the run. The following sections describe how to use the test analysis tools to pinpoint the source of test failures.

Overview

In the Pipelines > Tests tab, you can see how many of your tests ran successfully, how many were skipped or unstable, and how many failed.

You can learn more about the failures, what may have caused them, and what application modules they affect.

After analyzing failures, you might decide to edit an automated test, roll back a committed change, create a defect for an area in your application, or even extend your project's timeline or reduce the planned content.

In the Tests tab, you can do the following and more:

  • Filter the display to include only the test runs that interest you.

  • Find SCM commits related to failing test runs.

  • Make sure all failures are being investigated by specific users (Who's on it? field).

  • Hold committers accountable for the effects of their changes (Related users widget).

  • Identify problematic tests, whose runs have not been consistently successful over time.


Prerequisites

To benefit from all of the analysis abilities ALM Octane offers, you must:


Test run details

The Tests tab lists all of the test runs that ran as part of this pipeline run.

For each run you can find information such as:

  • Test name, class, and package.

  • The build number in which the test ran.

  • The ID and name of the build job that ran the test.

  • The number of builds for which the test run has been failing consecutively.

    Tip: If a test has been failing consecutively, it is usually best to analyze its first failure. To navigate to that failure, right-click the test and select Go to pipeline run where the test first failed. (A sketch of how such a failure streak can be derived from run history follows this list.)

  • The test run history, in the Past status column. The tallest bar represents the run you are currently viewing.

  • Any tags you added to the test or test run in ALM Octane.
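The consecutive-failure count and the navigation to the first failure are both derived from the test's run history. As a rough illustration, here is a minimal Python sketch of how such a streak can be computed, assuming each historical run is a simple (pipeline_run_id, status) pair; the record format and status values are assumptions, not ALM Octane's data model:

    # Illustrative only: record format and status values are assumptions.
    def failure_streak(history):
        """Given runs ordered oldest to newest, return the length of the
        current consecutive-failure streak and the pipeline run in which
        that streak started."""
        streak, first_failed = 0, None
        for run_id, status in history:
            if status == "failed":
                streak += 1
                if streak == 1:
                    first_failed = run_id
            else:
                streak, first_failed = 0, None
        return streak, first_failed

    history = [(101, "passed"), (102, "failed"), (103, "failed"), (104, "failed")]
    print(failure_streak(history))  # (3, 102): analyze pipeline run 102 first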

Add columns to the grid to see information that is not displayed by default.

You can navigate between runs to help troubleshoot problems, using the Run selector in the upper right corner. Note that the Overview tab will always show the latest run.

Filter the display to include only the test runs that interest you

Use the Filter panel on the right to quickly focus on specific issues that are of interest to you, such as failed test runs or regression problems.

You can also use the toolbar filter options as follows:

  • Separately address unstable, skipped, or failed test runs: Filter by Status.

  • View only runs of tests that you own: Click My tests.

  • View test runs whose failures are related to changes committed by selected users: Filter the test runs by Related committers.

  • Separate newly failing tests from tests that also failed in previous pipeline runs: Filter by New Failure. (A sketch of this distinction follows the tip below.)
Tip: Click Show filter pane to see which properties are included in the filter. This helps you understand which failures are being displayed and which are hidden.
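The New Failure filter separates runs that just started failing from runs that were already failing. If you export run data for your own reporting, the same distinction is straightforward to compute. A minimal sketch, assuming simple dictionary records whose field names are illustrative rather than ALM Octane's schema:

    # Illustrative records; field names are assumptions, not ALM Octane's schema.
    runs = [
        {"test": "LoginTest", "status": "failed", "previous_status": "passed"},
        {"test": "SearchTest", "status": "failed", "previous_status": "failed"},
        {"test": "CheckoutTest", "status": "passed", "previous_status": "passed"},
    ]

    failed = [r for r in runs if r["status"] == "failed"]
    # A "new failure" failed now but passed in the previous pipeline run.
    new_failures = [r for r in failed if r["previous_status"] == "passed"]

    print([r["test"] for r in new_failures])  # ['LoginTest']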

Use clustering to analyze test run failures

Group test runs by cluster to bring together failed test runs that failed for a similar reason. ALM Octane clusters failed test runs as similar failures based on test package hierarchy, stack trace, exception, the job that ran the test, and the run in which the test started failing. (A schematic sketch of this kind of grouping follows the list below.)

  • The number of clusters gives an idea of how many distinct problems need fixing. This is a better indication than the raw number of failed test runs.

  • When analyzing failing tests in your pipeline run, start with the largest clusters. Fixing one failed run in a cluster is likely to fix all failures in that cluster and significantly reduce the number of failures in the next run.

    Note:  

    • Admins can define the number of tests to be used as a threshold for test clustering analysis using the CLUSTERING_MAX_TESTS_THRESHOLD site configuration parameter. If the number of test failures specified in this parameter is exceeded during a pipeline run, ALM Octane does not analyze clustering. For details, see CLUSTERING_MAX_TESTS_THRESHOLD.

    • In each pipeline run, clusters are named C1, C2, and so on, according to cluster size (cluster C1 has the largest number of failed test runs). Test runs from different pipeline runs with the same cluster name are not related to each other.

  • When assigning a failed run to someone for handling, you can assign all of the failures in the same cluster to the same person.
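ALM Octane's clustering algorithm itself is internal, but the grouping idea can be sketched schematically. The following Python sketch groups failures by a similarity signature built from the criteria listed above; the record fields and the threshold value are assumptions for illustration:

    from collections import defaultdict

    MAX_TESTS = 1000  # illustrative stand-in for CLUSTERING_MAX_TESTS_THRESHOLD

    # Illustrative only: not ALM Octane's actual algorithm or data model.
    def cluster_failures(failed_runs):
        if len(failed_runs) > MAX_TESTS:
            return None  # too many failures: clustering is not analyzed
        clusters = defaultdict(list)
        for run in failed_runs:
            signature = (
                run["package"],           # test package hierarchy
                run["exception"],         # exception type
                run["top_frame"],         # rough proxy for the stack trace
                run["job"],               # job that ran the test
                run["first_failed_run"],  # run in which the test started failing
            )
            clusters[signature].append(run)
        # Name clusters C1, C2, ... by descending size, as in the UI.
        ordered = sorted(clusters.values(), key=len, reverse=True)
        return {f"C{i + 1}": runs for i, runs in enumerate(ordered)}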

For more details on analyzing failures in the Test Runs tab, see Expand your test failure analysis.


Assign someone to investigate failures

Users can assign themselves or others to investigate a build or test run failure. ALM Octane provides information about each failure to help the investigation.

If you assign someone to a pipeline step's latest build or a test's latest run, the assignment remains on subsequent failures. The assignment is cleared automatically when the build or test run passes successfully.

To assign a user to investigate a failure

In the Builds or Test Runs tab, select one or more failed builds or failed test runs and do one of the following: 

  • Assign yourself to the selected builds or test runs: Click I'm on it.

  • Assign someone else to the selected builds or test runs: Click Who's on it? and select a user. At the top of the list, ALM Octane suggests recommended users based on their recent commits, areas of expertise, or previous activity.

    Select whether to send an email to that user. If you add a comment (test runs only), it is included in the email.

  • Clear the user from Who's on it?: Click Who's on it? and then click Clear Assignment.

  • Indicate that you completed your investigation and are not related to any of the failures in the pipeline run: Click Not me in the upper right corner, near the Run number.

    This clears any Who's on it? assignment you had on any failed builds or test runs in this pipeline run.

    Note that Not me is available in the Builds and Tests tabs on the main Pipeline landing page, but not in the Pipeline run form.

Tip: When assigning someone to investigate a test run failure, ALM Octane lets you choose to assign the same person to all of the test runs that failed for a similar reason.


Best practice: Involve committers in pipeline failure analysis

When a pipeline fails, you want someone to investigate as soon as possible, and get the pipeline back on track.

If your pipeline is configured to send pipeline failure notifications to all committers, you can require all committers to respond.

  • Each committer must open the Tests page, and use the information available there to determine whether their change caused the failure.

  • The CI owner should check as well, in case the failures are caused by an environment issue under their responsibility.

A user can respond with I'm on it for each relevant build or test run failure, or with Not me for the entire pipeline run.

Tip: If, as a user, you think you might be responsible for a failure or able to fix it, click I'm on it. If after further investigation you decide it's not your issue to fix after all, you can clear the Who's on it? assignment or set Not me for the pipeline run.

Watch a video

A Day in the Life of a DevOps Developer


Expand your test failure analysis

Select a test run failure and use the data in the right panel to investigate the failure.

Check the build context of the test run in the Preview tab

For example: 

  • Next Runs. When investigating a failure of a test run that is not from the latest pipeline run, a quick glance shows the test run's status in the following pipeline runs. Tooltips on each run show the pipeline run number and date.

    If a subsequent run passed successfully, you probably do not need to invest more time in this investigation.

    Note: This is different from the widget on the test run itself, which displays the status of past runs.

  • Build. Displays the overall status of tests in this build. This provides a build context while you are looking at a specific test run. For example, if most tests in the build are failing, the issue may be at the build level, and there is no need to analyze this specific failure.

  • Cluster. The number of additional tests that failed for the same reason. Click to view all similar failed runs together and handle them in bulk.

View error data from the failure, such as error message and stack trace, in the Report tab.

In the stack trace, you may see highlighted file names. These files may be related to the failure, as they contain changes committed in this build.

If available, view more detailed reports:

  • Click Testing tool report to open an external report with more details about the failure, from UFT or StormRunner Load, for example.

  • Click Custom build report to open a build report stored on your CI server or elsewhere. This link is provided if a DevOps admin configured a template for such links on the pipeline step. For details, see Label and configure pipeline steps. (A schematic example of such a template follows.)
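Such a link is typically generated by substituting run-specific values into the admin-defined template. A schematic Python example with hypothetical placeholder names (the real placeholder syntax is described in Label and configure pipeline steps):

    from string import Template

    # Hypothetical template and placeholder names, for illustration only.
    template = Template("https://ci.example.com/job/${job}/${build}/report")
    link = template.substitute(job="nightly-master", build="512")
    print(link)  # https://ci.example.com/job/nightly-master/512/report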

View commits related to your pipeline run and to test failures

The Commits tab in the right panel displays details about committed changes related to this failure. Commits most closely related to the failure are listed first.

Each commit includes a reason that explains why ALM Octane considers it related to the failure (for example, error in stacktrace). A tooltip on the reason indicates the degree of confidence in this association: the more confident ALM Octane is that the commit is related to the failure, the higher the percentage displayed in the tooltip. Commits with a higher degree of confidence are displayed higher in the list than those with lower confidence.

Tip: A dotted line around a committer indicates that the related commit is from a previous build.

The following list explains why ALM Octane would highlight a specific commit; a schematic sketch of this kind of matching follows below.

  • Change in test file before test became unstable: When a test is unstable, ALM Octane shows the last commit that changed the test file before it became unstable. This commit is likely to be related to the initial failure that caused the current instability.

  • Error in stacktrace: The commit includes changes to files listed in the current failure's stack trace.

  • Historical association: The commit includes changes to files listed in the stack trace of a previous failed run.

  • Related files: The commit includes changes to a file that has previously been linked to this test, based on analyzing committers and Who's on it? history.

    Whenever a committer is manually assigned as on it for a test failure, a relation is built between their committed files and the test. This relation is then used to identify related commits for new failures.

  • Same application module: The commit is related to an application module that is assigned to the test.

    Commits are related to application modules via the stories mentioned in the commit message, and the features that those stories belong to.

  • Same feature: The commit is related to a feature that is assigned to the test.

    Commits are related to features via the stories mentioned in the commit message.

If this test has been failing consecutively, related commits include commits from this pipeline run and from the pipeline run in which the test started failing. For commits from a previous build, the build number is displayed.
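Several of these reasons come down to intersecting a commit's changed files with files implicated by the failure. A minimal Python sketch of the error in stacktrace case, assuming simple commit records and a raw stack trace string (not ALM Octane's actual matching logic):

    import re

    # Illustrative only: not ALM Octane's actual matching logic.
    def files_in_stack_trace(stack_trace):
        """Pull source file names such as 'LoginService.java' out of a trace."""
        return set(re.findall(r"\w+\.(?:java|kt|py|cs)", stack_trace))

    def related_commits(commits, stack_trace):
        implicated = files_in_stack_trace(stack_trace)
        return [
            (commit["id"], sorted(implicated & set(commit["changed_files"])))
            for commit in commits
            if implicated & set(commit["changed_files"])
        ]

    trace = "at com.acme.LoginService.authenticate(LoginService.java:42)"
    commits = [{"id": "a1b2c3", "changed_files": ["LoginService.java", "README.md"]}]
    print(related_commits(commits, trace))  # [('a1b2c3', ['LoginService.java'])]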

You can filter the commits in this view: 

  • All commits (this pipeline run): All of the commits, not only the ones ALM Octane related to the selected test run failures.
  • My commits (this pipeline run): All of your commits, not only the ones related to the selected test run failures.
  • Related commits (previous runs too): Only commits related to the selected test run failures.

A risk icon indicates that a commit includes changes to hotspot files. For details, see Identify risky commits and features at risk.

Create a defect

If you are looking at a pipeline's latest run, you can create a defect related to one or more of the failed test runs:

Select the relevant rows in the grid and click Report Defect.

The new defect is linked to the runs you selected and contains a link to this pipeline run's Tests tab.

Add additional columns to the grid of failed test runs

For example:

  • Pipeline links: Navigate to the pipeline and to a particular run of the pipeline.

  • Application modules: See which application modules are currently associated with the tests that failed.

  • Problem: See whether the test has a history of unsuccessful runs. For details, see Identify problematic tests.

  • Who's on it?: See who's investigating this failed run.

  • Linked defects: See defects that are linked to the failed run.


See also: