Insight cards

Some widgets provide insight cards that help you analyze trends, identify bottlenecks, find correlations, detect anomalies, and better understand what you need to improve in your development cycle.

Overview

Insight cards are pop-up windows associated with dashboard widgets that are marked with the Insight icon.

Insight cards provide a summary view in the form of a graph, with a linkable list of items whose values were included in the widget.

Using the information from insight cards, you can:

  • Track the features with the highest cycle times to identify bottlenecks and wasted time in the development process. This helps you understand how to improve your cycle and plan future releases.
  • Compare the development cycle time between releases and milestones to understand whether you are improving, and if not, then why.
  • Determine an item's time to market to assess the effectiveness of your development process.
  • Analyze the trend of item cycle time and resolution speed vs. quality to understand how they correlate with one another.
  • Identify areas in your product with high quality risk to understand how to increase the efficiency of your testing strategy and mitigate the risk.
  • Determine whether investment in automation improves key indicators, such as speed, cost, and efficiency.

You can create multiple widgets to check cycle time and trends for different releases or different item scopes.
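
Cycle time here is, broadly, the time an item spends from the start of work until it is done. The following sketch is an illustration only, using hypothetical in-memory data rather than product data: it computes each feature's cycle time in days and lists the features with the highest cycle times per release, which is the kind of summary the cycle time insight cards provide.

    from datetime import date
    from collections import defaultdict

    # Hypothetical feature records; the real values come from your widget's data.
    features = [
        {"name": "Search revamp", "release": "R1", "started": date(2023, 1, 10), "done": date(2023, 3, 2)},
        {"name": "SSO login", "release": "R1", "started": date(2023, 1, 20), "done": date(2023, 2, 5)},
        {"name": "Dark mode", "release": "R2", "started": date(2023, 4, 1), "done": date(2023, 6, 15)},
    ]

    # Cycle time is taken here as the number of days from start to done.
    by_release = defaultdict(list)
    for f in features:
        cycle_time = (f["done"] - f["started"]).days
        by_release[f["release"]].append((cycle_time, f["name"]))

    # The features with the highest cycle time in each release (top 100 in the insight card).
    for release, items in by_release.items():
        for cycle_time, name in sorted(items, reverse=True)[:100]:
            print(f"{release}: {name} - {cycle_time} days")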

To open an insight card:

  1. Add a widget to your dashboard. Widgets with insights contain the Insight icon in their description.

    For details on adding widgets to the dashboard, see Set up a dashboard.

  2. Configure the widget settings.

    For details, see Configure widget settings.

  3. Select the widget. The insight pane opens.

    If you close the insight pane, you can reopen it by selecting the widget again.

  4. Drill down by:

    • Selecting a filter, for example, a release or phase.
    • Clicking one of the linked items in the list.
    • Hovering over the desired part of the graph to view additional information.

To export your top 100 insights:

  1. From the insight pane, click Show more...
  2. In the toolbar, click the Export to Excel button.

Highest cycle times per release insights

This insight card opens from the Feature cycle time across releases widget. It displays the top 100 features with the highest cycle time, per release.

To drill down to a specific release, select the necessary release from the drop-down menu.

To drill down to feature details, click the feature link. This can help you learn why the feature cycle time was excessive.

Highest cycle times per release by phase insights

This insight card opens from the Feature cycle time by phase across releases widget. It displays a list of features with the highest cycle time, per release and phase.

To drill down and check the cycle time for a specific phase in a release, select the release and the phase from the drop-down menu.

To drill down to feature details, click the feature link. This can help you learn why the feature cycle time was excessive.

Highest cycle times insights

This insight card opens from the Feature cycle time by phase widget. It displays a list of features with the highest cycle time.

To drill down and check the cycle time for a specific phase, select the phase from the drop-down menu.

To drill down to feature details, click the feature link. This can help you learn why the feature cycle time was excessive.

Defects per feature vs. cycle time insights

This insight card opens from the Feature cycle time across releases widget. It displays a trend of cycle time in comparison with a trend of the average number of defects per feature.

Using this comparison, you can determine the quality of features, as well as the correlation between speed and quality. For example, if feature cycle time improved but quality decreased, check your processes further to make sure the drop in quality is not a result of the faster cycle time.
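
As an illustration of this kind of speed-versus-quality check (hypothetical per-release averages, not product data), the following sketch computes the Pearson correlation between the average feature cycle time and the average number of defects per feature:

    from statistics import correlation  # Python 3.10+

    # Hypothetical per-release averages, in delivery order.
    avg_cycle_time_days = [42, 35, 30, 26]          # trending down (faster delivery)
    avg_defects_per_feature = [3.1, 3.4, 4.0, 4.6]  # trending up (lower quality)

    # A strongly negative value suggests that as features were delivered faster,
    # they accumulated more defects - a signal to re-check the process.
    print(correlation(avg_cycle_time_days, avg_defects_per_feature))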

To make your insights more relevant, you can create multiple widgets of the same type, filtering each one differently. For example, features with many story points or user stories naturally have longer cycle times. In this case, you can create one widget that shows only features with 50 or fewer story points, and another that shows features with 51 or more story points. To do so, open the widget's Data page and define an additional filter for story points. For details, see Configure widget settings.

Highest resolution times insights

This insight card opens from the Defect resolution time widgets. It displays a list of defects with the highest resolution times.

To drill down and check the resolution time for a specific phase, select the phase from the drop-down menu.

To drill down to defect details, click a defect link. This can help you learn why the resolution time was excessive.

Resolution time vs. reopen rate insights

This insight card opens from the Defect resolution time across releases widget. It displays the ratio of average defect resolution time across releases vs. the trend of defects reopening. The reopen rate refers to defects that were moved from the Fixed to the Opened metaphase or from the Closed to the Opened metaphase.

Tip: You can see the number of reopens in the Reopens field. If it is not displayed, use the Customize fields button to add it.

The insight trend enables you to check whether the reopen rate correlates with the defect resolution time. For example:

  • A reduction in defect resolution time may not always be beneficial. For example, if the reopen rate is high, the quality of the fixes may have been compromised.
  • An increase in defect resolution time may not always indicate a problem. For example, if the reopen rate has a decreasing trend, this can indicate that more time is being spent on providing higher quality fixes.

Hover over a point in the graph to see its value.
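
As an illustration only (hypothetical data, not the product's calculation), the following sketch derives a reopen rate from defect metaphase transitions by counting defects that moved from Fixed or Closed back to Opened:

    # Each defect is represented by its ordered list of metaphase transitions.
    defects = {
        "D-101": ["Opened", "Fixed", "Closed"],
        "D-102": ["Opened", "Fixed", "Opened", "Fixed", "Closed"],  # reopened once
        "D-103": ["Opened", "Fixed", "Closed", "Opened", "Fixed"],  # reopened once
    }

    def was_reopened(metaphases):
        # A reopen is any move from Fixed or Closed back to Opened.
        return any(prev in ("Fixed", "Closed") and curr == "Opened"
                   for prev, curr in zip(metaphases, metaphases[1:]))

    reopened = sum(was_reopened(m) for m in defects.values())
    print(f"Reopen rate: {reopened / len(defects):.0%}")  # 67% for this sample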

Automation ROI insights

To see insights about Automation ROI (Return on Investment), select the Automation ROI widget in the dashboard.

The Automation instability insight provides an indication of whether your automation is stable, and where it can be modified to improve the ROI. If the automation is unstable, it can dramatically lower the automation ROI.

Note: The insight shows the trends of oscillating and unstable tests in the selected pipelines and releases. It is not, however, affected by the test automation or application module filters.

This insight shows a trend of the failures identified as ‘unstable’, as a percentage of all runs in the selected area. These failures are caused by unstable tests. Hover over the graph to see the data for a specific point in time.
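
As an illustration of the percentage being plotted (hypothetical run records, not the product's calculation), the sketch below computes unstable failures as a share of all runs per week:

    from collections import defaultdict

    # Hypothetical pipeline runs: (week, status, failure classified as unstable?)
    runs = [
        ("2023-W10", "failed", True),
        ("2023-W10", "passed", False),
        ("2023-W10", "failed", False),  # a "real" failure, e.g. a code issue
        ("2023-W11", "failed", True),
        ("2023-W11", "passed", False),
    ]

    totals, unstable = defaultdict(int), defaultdict(int)
    for week, status, is_unstable in runs:
        totals[week] += 1
        if status == "failed" and is_unstable:
            unstable[week] += 1

    # Unstable failures as a percentage of all runs, per week - the trend the card plots.
    for week in sorted(totals):
        print(week, f"{unstable[week] / totals[week]:.0%}")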

For more information about ROI tools, see Automation ROI.

ROI metrics cards

Beneath the Automation instability graph, the insight pane displays additional informational cards that let you drill down further into the information displayed in the graph.

  • Defect cost. The cost of defects, determined by comparing defects detected early by automation vs. defects submitted after manual runs (see the sketch after this list). Defects detected by automation refer to defects reported from automated runs that ran in the selected time frame. Choose a release from the dropdown to show the information relevant to you.

    This section shows a summary of:

    • Defects detected by automation
    • Test failures due to code issues. This value reflects code issues for which defects were not opened. Broken code issues for which defects are not submitted may significantly affect your cycle time. These issues are automatically detected and added to the calculation of the defect cost. This field requires ALM Octane version 15.1.20 or higher.
    • Defects detected by manual runs
    • Defects with high detection time. (percentage)
  • Development cycle time—feature CT. The speed of development as determined by feature cycle time. Select a release or sprint to show the information that interests you.
  • Escaped defects. Escaped defects are defects that were not detected by your automation suite or by your manual testing activities. By investing more in automation, you aim to decrease defect leaks, improve quality, and lower the number of escaped defects over time. This field is only visible when comparing multiple releases, not sprints.
  • Regression cost—Manual test executions. The number of manual test executions for the selected application module, release, or sprint. Choose a release or sprint from the dropdown to show the information that interests you.
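
To make the Defect cost comparison concrete, the following sketch uses hypothetical data (it is not the product's calculation) to group defects by how they were detected and print the kind of summary counts the card displays, including a hypothetical 14-day threshold for a high detection time:

    from collections import Counter

    # Hypothetical defects, with the source that detected each one and its detection time.
    defects = [
        {"id": "D-1", "detected_by": "automation", "detection_days": 1},
        {"id": "D-2", "detected_by": "automation", "detection_days": 2},
        {"id": "D-3", "detected_by": "manual", "detection_days": 20},
        {"id": "D-4", "detected_by": "manual", "detection_days": 5},
        {"id": "D-5", "detected_by": "manual", "detection_days": 30},
    ]
    # Failed automated runs caused by code issues for which no defect was submitted.
    code_issue_failures_without_defects = 3

    counts = Counter(d["detected_by"] for d in defects)
    high_detection = sum(d["detection_days"] > 14 for d in defects) / len(defects)

    print("Defects detected by automation:", counts["automation"])
    print("Test failures due to code issues (no defect opened):", code_issue_failures_without_defects)
    print("Defects detected by manual runs:", counts["manual"])
    print(f"Defects with high detection time: {high_detection:.0%}")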

Quality risk insights

Within the Quality module, use the Risk page to locate areas where functionality is not sufficiently tested.

Quality risk insights

The quality risk insights present scores that help you identify problematic areas. The risk scores are based on the timeline of the selected release.

The insights pane lists the child application modules of the selected application module with the top ten highest risk scores.

A risk baseline is calculated across all of your application module nodes. The problematic nodes are then compared with the baseline, and a risk index is calculated.

Tip: To locate the actual node in the Application Modules tree, copy its title from the Top 10 list, and search for it in the left pane using the Search button.

The quality risk score is calculated automatically every few hours, based on the data shown in the middle pane, such as your application modules' code changes, risky commits, and test coverage. The higher the score, the greater the risk.
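
The exact scoring is internal to the product, but as a rough illustration with hypothetical numbers, a baseline-and-index calculation of this kind might look as follows:

    from statistics import mean

    # Hypothetical raw risk scores per application module node, already combining
    # signals such as code changes, risky commits, and test coverage.
    raw_scores = {
        "Payments": 82,
        "Login": 34,
        "Reporting": 55,
        "Search": 12,
        "Billing": 67,
    }

    # Baseline across all nodes; each node's index is its score relative to it.
    baseline = mean(raw_scores.values())
    risk_index = {node: score / baseline for node, score in raw_scores.items()}

    # Top 10 riskiest nodes, as listed in the insights pane.
    for node, index in sorted(risk_index.items(), key=lambda kv: kv[1], reverse=True)[:10]:
        print(f"{node}: {index:.2f}")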

The pane only displays critical (red), high (orange), and medium (yellow) risk items. If only low (green) risk areas are found, they are not displayed.

The Reduce Risk button helps reduce the quality risk of the selected module by recommending a set of tests to run.

For details, see Quality risk analytics.

Risk-based testing insights

This insight card is available from the recommended tests grid. The insights provide the main indicators that help you understand why the tests are recommended. For example, the analytics recommends and prioritizes tests that should provide good coverage of a selected area and reduce its quality risk.

In the grid, select a test to view the following data in the insights pane:

  • The risk score for a selected application module that the test covers

    If the test covers more than one application module at risk, this is reflected in an additional insight.

  • The test's failure ratio

Analyze the above KPIs to decide whether to run the recommended test.
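
The recommendation logic itself is the product's own; as a purely hypothetical illustration, the sketch below ranks candidate tests by the indicators mentioned here and in the note that follows: the covered module's risk score, the test's failure ratio, and how long ago the test last ran.

    from datetime import date

    today = date(2023, 9, 1)

    # Hypothetical candidate tests with the indicators used for prioritization.
    tests = [
        {"name": "checkout_e2e", "module_risk": 82, "failure_ratio": 0.30, "last_run": date(2023, 6, 1)},
        {"name": "login_smoke", "module_risk": 34, "failure_ratio": 0.05, "last_run": date(2023, 8, 25)},
        {"name": "report_export", "module_risk": 55, "failure_ratio": 0.20, "last_run": date(2023, 7, 15)},
    ]

    def priority(test):
        days_since_run = (today - test["last_run"]).days
        # Arbitrary illustrative weights: riskier modules, more failure-prone tests,
        # and tests that have not run for a long time rank higher.
        return test["module_risk"] + test["failure_ratio"] * 100 + days_since_run * 0.5

    for test in sorted(tests, key=priority, reverse=True):
        print(test["name"], round(priority(test), 1))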

Note:  

  • Currently, test recommendation is based on the risk score of an application module that a test covers, the test's failure ratio, and its last execution time. Additional KPIs will be provided in the insights in future releases.
  • When exporting the recommended tests to Excel, the test insights are provided in a separate Insights column.

For details, see Quality risk analytics.

See also: