Problematic tests are tests that do not run successfully on a consistent basis: they may fail repeatedly or randomly, be skipped continuously, or suddenly start failing after having been successful. ALM Octane highlights problematic tests because such behavior indicates a situation you might want to investigate:
An automated test run's Problem field indicates the type of problem the test is having, for example Continuously failing, Oscillating, or Continuously skipped.
The Problematic tests widget shows a breakdown of test runs that have not been consistently successful, according to the type of problem. Click on the column of a specific problem type to view the relevant test runs.
The widget is available in the dashboard and in a pipeline's overview.
The following table shows the test run result patterns that ALM Octane labels as problematic:
| Problem | Test run result pattern |
|---------|-------------------------|
| Continuously failing | The last 8 runs of the test failed. |
| Oscillating | In the last 8 runs of the test, its Pass/Fail status changed 4 times or more. In other words, there were at least 4 times in which a failed run was followed by a successful run, or vice versa. |
| Failing after having been successful | A test that previously passed at least twice is now failing: looking at the last 4 or more runs, the series ends with at least 2 passed runs followed by one failed run. |
| Continuously skipped | The test was skipped in the last 8 runs of the pipeline. |
| Failing randomly | In the last 50 runs of the test, there were at least 5 times where the test passed, failed once, and then passed again. |
If the test run results match more than one problematic pattern, the Problem field contains multiple problem types.
In all of these patterns, the test may have been skipped in some of the pipeline runs, but not in the most recent pipeline run.
Examples (P = Passed, F = Failed, S = Skipped):
FSFFFSFFFF: Continuously failing
FFFFFFFFFFS: Not problematic (ends with skipped)
PFPPSPFPP: Oscillating (4 changes)
PFPPFPPSS: Not problematic (ends with skipped)
SSSSSSSS: Continuously skipped
SSFSSSSS: Not problematic (not all 8 skipped)
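The rules above can be sketched as a small classifier. This is an illustrative reconstruction of the documented patterns, not ALM Octane's actual implementation; in particular, the assumption that skipped runs other than the most recent one are simply ignored when evaluating a pattern is inferred from the examples above.

```python
def problems(history: str) -> list[str]:
    """Classify a run history string, oldest to newest, where
    P = Passed, F = Failed, S = Skipped.
    Returns all matching problem types (possibly more than one)."""
    labels = []

    # Continuously skipped: the last 8 pipeline runs were all skipped.
    if len(history) >= 8 and set(history[-8:]) == {"S"}:
        labels.append("Continuously skipped")

    # For the remaining patterns, the most recent run must not be
    # skipped; earlier skipped runs are assumed to be ignored.
    if history.endswith("S"):
        return labels
    seq = history.replace("S", "")

    # Continuously failing: the last 8 (non-skipped) runs failed.
    if len(seq) >= 8 and set(seq[-8:]) == {"F"}:
        labels.append("Continuously failing")

    # Oscillating: Pass/Fail status changed 4 times or more
    # in the last 8 runs.
    window = seq[-8:]
    changes = sum(1 for a, b in zip(window, window[1:]) if a != b)
    if changes >= 4:
        labels.append("Oscillating")

    # Failing after having been successful: at least 4 runs, ending
    # with at least 2 passed runs followed by one failed run.
    if len(seq) >= 4 and seq.endswith("PPF"):
        labels.append("Failing after having been successful")

    # Failing randomly: at least 5 isolated single failures
    # (pass, fail, pass) within the last 50 runs.
    recent = seq[-50:]
    isolated = sum(1 for i in range(len(recent) - 2)
                   if recent[i:i + 3] == "PFP")
    if isolated >= 5:
        labels.append("Failing randomly")

    return labels


# The documented examples classify as expected:
print(problems("FSFFFSFFFF"))   # ['Continuously failing']
print(problems("FFFFFFFFFFS"))  # [] (ends with skipped)
print(problems("PFPPSPFPP"))    # ['Oscillating']
print(problems("PFPPFPPSS"))    # [] (ends with skipped)
print(problems("SSSSSSSS"))     # ['Continuously skipped']
print(problems("SSFSSSSS"))     # [] (not all 8 skipped)
```

Because each pattern is checked independently, a history can collect several labels, matching the note that the Problem field may contain multiple problem types.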