Analyze automated test run results

After automated tests run, analyze the results at the pipeline, test, or test run level.

Examine failures on specific test runs

After you run a test, you can view the results to see whether it succeeded or failed, and why. The primary areas to examine are:

  • The root cause of the failure.

  • The pipeline run or build in which a specific run failed. For a pipeline-level analysis, use a pipeline run's Test Runs tab. For details, see Test run details.

  • Previous run history. This includes seeing when, how frequently, and why the test failed. It also helps you determine whether the failures were the result of one problem or of different problems.

  • Error messages and types.

  • External tool reports. Examine the external tool report to learn about performance and failures. For example, for LoadRunner Cloud tests, in the Tests tab, click the Testing tool report button to open the detailed report in LoadRunner Cloud.
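
If you prefer to review the same information programmatically, run history and error details are also reachable through a REST API. The following is a minimal Python sketch, assuming an ALM Octane-style REST API; the server URL, credentials, IDs, and the entity, query, and field names are placeholder assumptions to verify against your site's API documentation.

```python
# Minimal sketch, assuming an ALM Octane-style REST API.
# All URLs, IDs, credentials, and field names below are placeholders.
import requests

BASE = "https://myserver.example.com"      # hypothetical server URL
SPACE_ID, WORKSPACE_ID = 1001, 1002        # hypothetical shared space / workspace IDs
TEST_ID = 2001                             # hypothetical automated test ID

session = requests.Session()

# Sign in with an API access key; the session keeps the returned authentication cookies.
session.post(
    f"{BASE}/authentication/sign_in",
    json={"client_id": "my_client_id", "client_secret": "my_client_secret"},
    timeout=30,
).raise_for_status()

api = f"{BASE}/api/shared_spaces/{SPACE_ID}/workspaces/{WORKSPACE_ID}"

# Fetch the most recent runs of one test, newest first, to review when,
# how frequently, and why it failed.
resp = session.get(
    f"{api}/runs",
    params={
        "query": f'"test={{id EQ {TEST_ID}}}"',   # cross-filter on the related test
        "fields": "status,error_message",         # assumed field names
        "order_by": "-id",                        # newest runs first
        "limit": 20,
    },
    timeout=30,
)
resp.raise_for_status()

for run in resp.json().get("data", []):
    status = (run.get("status") or {}).get("name")
    print(run.get("id"), status, run.get("error_message"))
```

Try this against a test workspace first to confirm which fields your site's run entity actually exposes.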

To view details for a specific failed test:

  1. In the Backlog, Team Backlog, or Quality module, select the Tests tab.

    Tip: Select the Latest pipeline run tag to exclude automated tests that were skipped or removed from the pipeline's latest run. The display includes manual tests, executable automated tests, and automated tests that ran as part of a pipeline's latest run.

    If a new pipeline run is in progress, some tests that ran on the previous pipeline run are still included.

  2. Click the ID of a specific automated test.

  3. Select the Runs tab. This tab includes one test run entity for each set of environment settings on which the test ran. Each entity contains detailed information about the last time the test ran with these environment settings and basic information about the previous runs.

    Tip: In the test run grid, add the Problem column to see whether the test has a history of unsuccessful runs. For details, see Identify problematic tests.

  4. Click the ID of a specific failed test run.

  5. In the Details tab:

    1. Hover over Build status to view build failures and details about the build in which this specific run failed, including when the build failure occurred, and the number of tests that passed, failed, or need attention. If at least one test fails, the build is marked as Failed.

    2. Check Assigned to to find out who is investigating this failed test.

      If you assign someone to investigate the test, this assignment remains on subsequent failed runs of the test. The assignment is cleared when the test passes.

      Tip: To have an email notification sent to the person you assign, set Assign to in the Tests tab of a pipeline run. For details, see Expand your test failure analysis.

    3. Click the Testing tool report button to go to the report created by the testing tool that ran the test, when applicable.

    4. Click the Open Build Report button or Open Test Run Report button to open a report stored on your CI server or elsewhere, when applicable.

      This link is provided if a template was configured for such links on the pipeline step. For details, see Label pipeline steps by job type.

    5. Click the Pipeline links link to go to the pipeline and to a particular run of the pipeline (marked by #).

      Tip: To add the Pipeline links column to the grid, use the Choose columns button.

  6. In the Error Info section, view the Error message, Error type, and Error details. The Error Info fields are populated only for failed test runs. If the error details are truncated in the fields, click Show all details to display the full error details in a new tab.

    When a test fails, the failure cause (stack trace) is sent to the server, so you can see the reason for the failure immediately. For an example of where these values typically come from, see the sketch after this procedure.

  7. In the Details tab toolbar, click Show in <source> to view the test results in the CI server.

  8. In the Previous Runs tab:

    1. View a list of previous test runs and related build failure and error information, including message, exception, and stack trace.

    2. Filter the run history by failure or error criteria. For example, you can filter by run status to view only failed runs so you can immediately identify and resolve them. You can also filter by time duration, build status, and substrings in error messages, exceptions, and stack traces.

    3. To view details of a previous test run that failed, select a run with error status, and click Open External Report, Open in origin, or Open in new tab to go to the report in the system that produced this run, when applicable.

    4. Click the link in the Pipeline links column to go to the pipeline run and pipeline.

  9. To create a defect for a failed run, click the Report defect button. A REST-based alternative is sketched after this procedure.

    Several fields in the new defect form are automatically populated with values from the run and test, such as Application module, Environment tags, Milestone, and Sprint. For details, see Auto-populated fields.

    To add the test run's stack trace and failure details to the defect's Description field, in the Report Defect dialog box, click the Copy failure details to description button.
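
The Error Info values described in step 6 typically originate in the result file produced by the testing framework. The sketch below assumes a JUnit-style XML report (the file path is a placeholder) and shows how a report's message, type, and stack trace map onto Error message, Error type, and Error details; these are also the details that step 9 lets you copy into a defect's description.

```python
# Minimal sketch: read a JUnit-style XML report and print the values that map onto
# the Error Info fields. The report path is a placeholder; point it at the result
# file your CI job actually produces.
import xml.etree.ElementTree as ET

root = ET.parse("reports/TEST-com.example.LoginTest.xml").getroot()

for case in root.iter("testcase"):
    failure = case.find("failure")
    if failure is None:
        failure = case.find("error")        # some frameworks report errors separately
    if failure is None:
        continue                            # the test case passed or was skipped

    details = (failure.text or "").strip()  # the stack trace body
    print("Test:         ", f'{case.get("classname")}.{case.get("name")}')
    print("Error message:", failure.get("message"))
    print("Error type:   ", failure.get("type"))
    print("Error details:", details)
```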
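
Step 9 reports a defect through the UI; a defect can also be created programmatically. The sketch below again assumes an ALM Octane-style REST API; the endpoint, payload fields, IDs, and credentials are placeholder assumptions, and your workspace may enforce additional required fields (for example, severity or phase) on new defects.

```python
# Minimal sketch: report a defect for a failed run via an ALM Octane-style REST API.
# URLs, IDs, credentials, and payload fields are placeholders; your workspace may
# require additional fields (for example, severity or phase) when creating defects.
import requests

BASE = "https://myserver.example.com"
SPACE_ID, WORKSPACE_ID = 1001, 1002

session = requests.Session()
session.post(
    f"{BASE}/authentication/sign_in",
    json={"client_id": "my_client_id", "client_secret": "my_client_secret"},
    timeout=30,
).raise_for_status()

api = f"{BASE}/api/shared_spaces/{SPACE_ID}/workspaces/{WORKSPACE_ID}"

# Failure details copied from the failed run (placeholder text).
defect_payload = {
    "data": [
        {
            "name": "LoginTest failed in nightly pipeline run",
            "description": "AssertionError: expected status 200 but was 500\n\n<stack trace>",
        }
    ]
}

resp = session.post(f"{api}/defects", json=defect_payload, timeout=30)
resp.raise_for_status()
print("Created defect:", resp.json())
```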


Examine overall automation test results

In addition to examining a single test's run results, you can view the overall status of all the automation test results.

To analyze test results for multiple automation test runs:

  1. Make sure that the automation tests and run results have been discovered and created. For details, see Add automated tests.

  2. Open the run results in one of the following areas:

    Dashboard or Overview tabs of the Backlog and Quality modules

    Use any Dashboard widget designed to assess quality through test run results.

    • On the Scope tab of these graphs, make sure you add a filter using the Subtype attribute with a value of Automated Run. An equivalent REST API query is sketched after this list.

    • Add filters to view automated tests from a specific Pipeline, or a specific Package, Component, or Class.

    Pipelines module

    The main page of the Pipelines module displays per-pipeline information about all the automated tests included in that pipeline. This includes information about the application modules with failed tests or problematic tests in recent runs.

    Use this information to isolate automated test runs per pipeline, and to view these runs as part of the product build.
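
As an alternative to a Dashboard widget, you can compute the same overall totals through a REST API query that mirrors the Subtype filter above. The sketch below again assumes an ALM Octane-style REST API; the subtype value run_automated, the status field, and all URLs, IDs, and credentials are placeholder assumptions to verify against your site's API documentation.

```python
# Minimal sketch: tally overall automated run results via an ALM Octane-style REST API.
# The URLs, IDs, credentials, subtype value, and field names are placeholders.
from collections import Counter

import requests

BASE = "https://myserver.example.com"
SPACE_ID, WORKSPACE_ID = 1001, 1002

session = requests.Session()
session.post(
    f"{BASE}/authentication/sign_in",
    json={"client_id": "my_client_id", "client_secret": "my_client_secret"},
    timeout=30,
).raise_for_status()

resp = session.get(
    f"{BASE}/api/shared_spaces/{SPACE_ID}/workspaces/{WORKSPACE_ID}/runs",
    params={
        "query": '"subtype EQ ^run_automated^"',   # assumed API value for "Automated Run"
        "fields": "status",                        # assumed field name
        "limit": 1000,
    },
    timeout=30,
)
resp.raise_for_status()

totals = Counter(
    (run.get("status") or {}).get("name", "Unknown")
    for run in resp.json().get("data", [])
)
print(dict(totals))   # e.g. {'Passed': ..., 'Failed': ...}
```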

