Analyze test failures

In a specific pipeline run, use the Test Runs tab to learn more about the automated tests that were part of the run. The following section describes how to use the test analysis tools to pinpoint the source of test failures.

Overview

You can see how many of your tests ran successfully, how many were skipped or unstable, and how many failed in the Pipelines > Analysis > Test Runs tab.

You can learn more about the failures, what may have caused them, and what application modules they affect.

After analyzing failures, you might decide to edit an automated test, roll back a committed change, create a defect for an area in your application, or even extend your project's timeline or reduce the planned content.

In the Test Runs tab, you can do the following and more:

  • Filter the display to include only the test runs that interest you.

  • Find SCM commits related to failing test runs.

  • Make sure all failures are being investigated by specific users (Assigned to field).

  • Hold committers accountable for the effects of their changes (Related users widget).

  • Identify problematic test runs that have not been consistently successful over time.


Prerequisites

To benefit from all of the available analysis capabilities, do the following:


Test run details

The Test Runs tab lists all of the test runs that ran as part of this pipeline run.

To view a list of the tests in the Test Runs tab:

  1. Open the Pipelines module and select the Analysis tab.

  2. Select a pipeline in the summary bar. Use the directional arrows to scroll through the tabs.

  3. Hover over the Run box to select a run from the dropdown, or use the directional arrows to move between the runs.

  4. Click the Test Runs tab to see the tests in the selected pipeline run.

Analyze the test run details

For each run you can find information such as:

  • Test name, class, and package.

  • The build number in which the test ran.

  • The ID and name of the build job that ran the test.

  • Failure age: The number of builds for which the test run has been failing consecutively. (A simple sketch of this calculation appears at the end of this section.)

    Tip: If a test has been failing consecutively, it is probably better to analyze its first failure. To go to that failure, right-click the test and select Go to pipeline run where the test first failed.

  • The test run history, in the Past status column

    The dot beneath the run indicates the run being viewed.

    Hover over the test run history in the Past status column to see the 5 latest runs of this test, across all pipelines.

    To view all the latest runs of a test across pipelines:

    1. Select a test, and click the Show preview button.

    2. In the Preview sidebar, click Show latest runs on all pipelines.
  • Any tags you added to the test or test run.

Add columns to the grid to see information that is not displayed by default.

Use the Run selector to move between runs as you troubleshoot problems. Note that the Overview tab always displays the latest run.
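As a mental model, failure age is simply a count of consecutive failed statuses starting from the most recent run. The following Python sketch is illustrative only; the status values and data shapes are assumptions, not the product's actual data model.

```python
# Illustrative only: deriving "failure age" from a test's run history.
# Status values and ordering are assumptions, not the product's data model.

def failure_age(run_statuses):
    """Count consecutive failures, starting from the most recent run.

    run_statuses is ordered newest-first, e.g. ["failed", "failed", "passed"].
    """
    age = 0
    for status in run_statuses:
        if status != "failed":
            break
        age += 1
    return age

print(failure_age(["failed", "failed", "passed", "failed"]))  # prints 2
```

This also explains the tip above: when the age is greater than 1, the most useful context is in the run where the counting started, which is why the product offers a jump to the run where the test first failed.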

Apply a filter

Filter the display to include only the test runs that interest you.

  • Click the Assigned to me button to filter tests already assigned to you.

  • Click the Quick filter button to focus on specific issues that are of interest to you, such as failed test runs or regression problems.

  • Click the Advanced filter button in the toolbar to perform any of the following actions:

    • To separately address unstable, skipped, or failed test runs, filter by Status.

    • To view only runs of tests that you own, click My tests.

    • To view test runs whose failures are related to changes committed by selected users, filter the test runs by Related committers.

    • To separate newly failing tests from ones that failed in previous pipeline runs, filter by New Failure.

Tip: Click the Advanced filter button to see which properties are included in the filter. This helps you understand which failures are being displayed and which are hidden.

Use clustering to analyze test run failures

Group test runs by cluster to bring together failed test runs that failed for similar reasons. Failed test runs are clustered as similar failures based on the test package hierarchy, stack trace, exception, the job that ran the test, and the run in which the test started failing.

To see your tests grouped into clusters, click Show clusters in the Test Runs tab.

  • The number of clusters gives an idea of the number of distinct problems that need fixing. This is a better indication than the raw number of failed test runs.

  • When analyzing failing tests in your pipeline run, start with the largest clusters. Fixing one failed run in a large cluster is likely to fix all failures in that cluster and significantly reduce the number of failures in the next run.

    Note:  

    • Admins can define the number of tests to be used as a threshold for test clustering analysis using the CLUSTERING_MAX_TESTS_THRESHOLD site configuration parameter. If the number of test failures specified in this parameter is exceeded during a pipeline run, clustering is not analyzed. For details, see Configuration parameters.

    • In each pipeline run, clusters are named C1, C2, ... Cn, starting with the cluster with the highest number of failed test runs. Test runs from different pipeline runs with the same cluster name are not related to each other.

  • When assigning a failed run to someone for handling, you can assign all of the failures in the same cluster to the same person.

To clear this grouping, click the Group By button and clear all groups.

For more details on analyzing failures in the Test Runs tab, see Expand your test failure analysis.
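To make the grouping concrete, the following Python sketch shows one way failures could be bucketed by a shared signature. It is an illustration only, assuming hypothetical field names and data shapes; the product's actual clustering logic is more elaborate.

```python
# Minimal sketch of failure clustering, assuming each failed run is a dict
# with hypothetical fields. Not the product's actual implementation.
from collections import defaultdict

def cluster_failures(failed_runs, max_tests_threshold=1000):
    # Assumption: mirrors the CLUSTERING_MAX_TESTS_THRESHOLD behavior; if the
    # number of failures exceeds the threshold, clustering is not analyzed.
    if len(failed_runs) > max_tests_threshold:
        return []
    clusters = defaultdict(list)
    for run in failed_runs:
        stack = run.get("stack_trace", "")
        signature = (
            run.get("package"),                      # test package hierarchy
            run.get("exception_type"),               # exception
            stack.splitlines()[0] if stack else "",  # top stack frame
            run.get("job_id"),                       # job that ran the test
        )
        clusters[signature].append(run)
    # Clusters are named C1, C2, ... Cn, starting with the largest, as in the product.
    return sorted(clusters.values(), key=len, reverse=True)
```

Viewed this way, it is clear why fixing one failure in a cluster tends to clear the rest: every run in the cluster shares the same signature.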

Assign tests to application modules

You can assign automated tests to application modules to view the test results in context. You can then use these results to analyze the progress and quality of your release and product.

Select one or more tests in the Test Runs tab, and click Assign to Application Module. This assigns the tests themselves, not the individual test runs.

Historic automated runs store the owner and application module information from the time of the initial injection. Assigning a different owner to an existing test affects only new run instances of this test, not former runs.

For details on working with application modules, see Quality.


Assign someone to investigate failures

Users can assign themselves or others to investigate a build or test run failure. Each failure is provided with information to help the investigation.

If you assign someone to a pipeline step's latest build or a test's latest run, the assignment remains on subsequent failures. The assignment is cleared automatically when the build or test run passes successfully.

To assign a user to investigate a failure:

In the Builds or Test Runs tab, select one or more failed builds or failed test runs and do one of the following:

  • Assign yourself to the selected builds or test runs: Click the I'm on it button.

  • Assign someone else to the selected builds or test runs: Click the Assign to button and select a user. At the top of the list, recommended users are suggested based on their recent commits, areas of expertise, or previous activity.

    Select whether to send an email to that user. If you add a comment (test runs only), it is included in the email.

  • Clear a user from assignment: Click Assigned to and then click Clear Assignment.

  • Indicate that you completed your investigation and are not related to any of the failures in the pipeline run: Click Not me.

    This clears any Assigned to assignment you had on any failed builds or test runs in this pipeline run.

    Note: Not me is available in the Builds and Test Runs tabs on the main Pipeline landing page, but not in the Pipeline run form.

Tip: When assigning someone to investigate a test run failure, you can choose to assign the same person to all of the test runs that failed for a similar reason.


Best practice: Involve committers in pipeline failure analysis

When a pipeline fails, you want someone to investigate as soon as possible, and get the pipeline back on track.

If your pipeline is configured to send pipeline failure notifications to all committers, you can require all committers to respond.

  • Each committer must open the Tests page, and use the information available there to determine whether their change caused the failure.

  • The CI owner should check as well, in case the failures are caused by an environment issue they are responsible for.

A user can respond with I'm on it for each relevant build or test run failure, or with Not me for the entire pipeline run.

Tip: If, as a user, you think you might be responsible for a failure or able to fix it, click the I'm on it button. If, after further investigation, you decide it's not your issue to fix, you can clear the Assigned to assignment or set Not me for the pipeline run.


Expand your test failure analysis

Select a test run and use the data in the sidebar to investigate the failure.

Check the build context of the run

  1. Open the Pipelines > Analysis > Test Runs tab, and select a test run.

  2. Click the Show preview button.

  3. In the Preview sidebar, check the build context of the test run. For example, you can view the following details.

    • Run History: When investigating a failure of a test run that is not from the latest pipeline run, a quick glance shows the test run's status in other pipeline runs. Hover over a bar to show information about the pipeline run, such as the run number, duration, and date.

      If a subsequent run passed successfully, you probably do not need to invest more time in this investigation.

      Note: This is different from the widget on the test run itself, which displays the status of past runs.

    • Build: Displays the overall status of tests in this build. This gives you build context while you are looking at a specific test run. For example, if most tests in the build are failing, the issue may be at the build level, and there is no need to analyze this specific failure.

    • Cluster: The number of additional tests that failed for the same reason. Click to view all similar failed runs together and handle them in bulk.

View error data

  1. Open the Pipelines > Analysis > Test Runs tab, and select a test run.

  2. Click the Show report button to view error data from the failure, such as error message and stack trace.

    In the stack trace, you may see highlighted file names. These files may be related to the failure, as they contain changes committed in this build.

  3. View more detailed reports (if available).

    • Click Testing tool report to open an external report with more details about the failure, from OpenText Functional Testing or OpenText Core Performance Engineering, for example.

    • Click Open build report or Open test run report to open a report stored on your CI server or elsewhere. This link is provided if a template was configured by an admin for such links on the pipeline step. For details, see Label pipeline steps by job type.

View related users

  1. Open the Pipelines > Analysis > Test Runs tab, and select a test run.

  2. Click the Related Users button. This helps you decide which users are most relevant to handle issues in the selected pipeline run or test failure, depending on the context.

    If you have not selected a test, you see all the commits that are related to the pipeline run. When you select a test, you see two areas:

    • The upper pane provides details of users who are currently assigned to handle the test failure, or who are recommended to investigate the issue. For example, a user may have clicked I'm on it on a failed test, handled a similar failure in a previous run, or changed a file that was found in the run's stack trace.

      You can clear assignments or assign recommended users by clicking the corresponding buttons. Recommendations are continually improved based on your feedback.

    • The lower pane provides details of other related users and their relevant commits, grouped by user. For each entry, the reason why the user or commit is related to the current pipeline run or test is displayed. The commits are listed in order of their relevance to the selected test.

      You can filter this information based on file name, commit message, or user name to help you locate relevant information easily.

      If a test has been failing consecutively, related commits include commits from this pipeline run and from the pipeline run in which the test started failing. For commits from a previous build, the build number is displayed.

    A risk icon indicates that a commit includes changes to hotspot files. For details, see Identify hotspots in your code.

    Tip: You can automatically assign recommended users after a certain number of consecutive failures. This can be done per pipeline step, and requires related permissions.

    1. In the Pipelines > Analysis tab, click a pipeline. Click the More button and select View.

    2. Select the pipeline's Topology tab.

    3. Click the Configuration button on a pipeline step, and open the Auto-assignment tab.

    4. Select the checkbox to enable auto-assignment, and specify the number of failures required before a user is automatically assigned.

      When a user is automatically assigned to a failed run, they receive an email with the relevant details.

      For details, see Explore a pipeline's graphical representation.
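Conceptually, the auto-assignment rule is a threshold check on consecutive failures. The following sketch assumes hypothetical helper functions and field names; it is not the product's implementation.

```python
# Hypothetical sketch of the auto-assignment rule: after N consecutive
# failures, assign the top recommended user. All names are stand-ins.

AUTO_ASSIGN_AFTER = 3  # the "number of failures" configured on the pipeline step

def maybe_auto_assign(test_run, recommended_users, assign, notify_by_email):
    already_assigned = test_run.get("assignee") is not None
    if test_run["failure_age"] >= AUTO_ASSIGN_AFTER and not already_assigned:
        if recommended_users:
            user = recommended_users[0]      # best recommendation first
            assign(test_run, user)
            notify_by_email(user, test_run)  # assigned users receive an email
```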

Track test behavior across all pipelines

When you only look at one specific execution context, it can be difficult to analyze failures. Is a test unstable? Is it a regression in this release only? Is it failing in a particular pipeline configuration?

Test failure analysis is typically done in the context of a specific pipeline. However, you may be running similar tests in multiple parallel pipelines. For example, you might have one pipeline for each environment configuration, or a pipeline for each release, or a quick pipeline for sanity testing and longer pipelines with additional tests.

To see the latest runs of a particular test across all pipelines, select the test and click Show latest runs on all pipelines in the Preview sidebar. This gives you a more comprehensive view of where the test is failing, and where you need to focus attention when analyzing test failures.

Find tests that failed continuously in all pipeline runs

Suppose you committed a change yesterday and the pipeline run status was unstable, and you want to see if you are responsible for the failure.

In the Test Runs tab, add the column Failing in all future runs, which shows whether a test has failed continuously in all pipeline runs after your push. If a test has succeeded at least once since the pipeline run in which you committed the change, you're probably not responsible for this failure. If it has continuously failed, you know you need to check your commit for problems.

The value of the Failing in all future runs field is calculated relative to the pipeline run that you are currently looking at. You can filter test runs by this field, to reduce the number of relevant runs that should be investigated. We recommend using this together with the New failures filter, to greatly reduce the number of tests you need to investigate.

Note: This field requires that the default Elasticsearch scripting settings be enabled.
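As a mental model, the field can be thought of as a check over the runs that followed the one you are viewing. The Python sketch below is illustrative only; the statuses and ordering are assumptions.

```python
# Illustrative: "Failing in all future runs" relative to the run being viewed.
# statuses is ordered oldest-first; current is the index of the viewed run.

def failing_in_all_future_runs(statuses, current):
    future = statuses[current + 1:]
    return bool(future) and all(s == "failed" for s in future)

# One pass after your commit suggests the failure is probably not yours:
print(failing_in_all_future_runs(["failed", "failed", "passed"], 0))  # False
print(failing_in_all_future_runs(["failed", "failed", "failed"], 0))  # True
```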

Create a defect

If you are looking at a pipeline's latest run, you can create a defect related to one or more of the failed test runs.

Select the relevant rows in the grid and click the Report Defect button. The new defect is linked to the runs you selected and contains a link to this pipeline run's Test Runs tab.
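If you automate defect creation outside the UI, the flow might resemble the following sketch. The endpoint, payload fields, and authentication here are placeholders, not a documented API; consult your platform's REST API reference for the real contract.

```python
# Hypothetical sketch only: creating a defect linked to failed test runs via
# a REST API. The URL, payload fields, and auth are placeholders.
import requests

def report_defect(base_url, session_cookie, failed_run_ids, pipeline_run_url):
    payload = {
        "name": "Investigate failed test runs",
        "description": f"See the pipeline run's Test Runs tab: {pipeline_run_url}",
        "linked_test_runs": failed_run_ids,  # placeholder relation field
    }
    response = requests.post(
        f"{base_url}/defects",               # placeholder endpoint
        json=payload,
        cookies={"session": session_cookie},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()
```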

Investigate failed tests

Add columns to the grid to see more information about failed test runs.

Column                Use
Pipeline links        Go to the pipeline and to a particular run of the pipeline.
Application modules   See which application modules are currently associated with the failed tests.
Problem               See whether the test has a history of unsuccessful runs. For details, see Identify problematic tests.
Assigned to           See who is investigating the failed run.
Linked defects        See defects that are linked to the failed run.


See also: