Flakiness widget in custom dashboards
Monitor flakiness in your test suites.
This widget helps you understand the trend of flaky tests in your test suites over time, based on the conditions set in the Flaky Smart Tag. A higher percentage of flaky tests, or an upward trend, indicates quality issues in your automation tests.
The flakiness widget is a collection of one or more line charts in which the X-axis represents time and the Y-axis represents the number of flaky tests. Each line is a different segment that you can configure to compare different projects, builds, users, etc.
In the sample above, there are two segments: Flakiness A (blue line) and Flakiness B (yellow line). The value of Flakiness A increases from 4.62% on 19th November to 5.36% on 20th November. Similarly, the value of Flakiness B increases from 0% to 3.23%. This spike in flakiness on 20th November warrants a deeper audit. You can also see that both lines drop after 20th November to lower values. The insights from such an analysis could help reduce the number of flaky tests in your test suites in the future.
Drill down for more information
Test Observability enables you to investigate more contextual information on all dashboard widgets using the drill-down feature.
You can use the drill-down feature in the Flakiness widget to analyze the reasons for flakiness in more detail. For example, if you see a spike in flakiness at any point, you can investigate why the spike occurred.
Follow these steps to use the drill-down feature:
- Hover on any point in the Flakiness widget and click View breakdown. A project-wise breakdown of the flakiness metrics for the selected date range opens up in a side pane.
- Click View tests to get to the tests that contribute to the flakiness count.
This opens Tests Health in a new tab with the applicable filters. On Tests Health, you can view the individual flaky tests so that you can further investigate what caused the flakiness.
Widget configuration - Flakiness
You can configure the following options in the Flakiness widget:
- Widget name: A name that makes it easy to identify the purpose of the widget.
- Description: An optional description that explains the widget's purpose in detail. Users can view this description, and gain context about the widget, by hovering over the info icon on the widget.
- Chart Summary: A toggle to show or hide the chart summary, a concise banner that displays summarized information on your Flakiness widget. You can choose either Unique Tests Impacted or Average Flakiness as the chart summary; by default, the widget displays Unique Tests Impacted. The chart summary is available only on widgets with a single segment.
- Number/Percentage: The unit of measurement on the Y-axis: either the total number of flaky test runs or the percentage of flaky test runs against total test runs. By default, the widget shows the absolute number of flaky test runs. However, if your daily number of test runs varies, switch to Percentage for a more accurate comparison across days.
- Segments: Add up to five segments in the Flakiness widget using the Add segment option. Each segment appears as a separate line chart in the widget. Use segments along with filters: by applying different filters to each segment, you can compare different projects, builds, users, and so on.
- Filter: Add a filter to include only the data you want in a particular segment. You can filter data by Projects, Unique Build Names, Users, Build Tags, Test Tags, Hooks Visibility, Host Names, Folder Names, Device, OS, and Browser.
You can also import filters from other widgets to avoid duplicate efforts.
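The Number/Percentage trade-off can be seen with a small calculation. The sketch below uses hypothetical daily totals (chosen to mirror the percentages in the sample chart above); it is an illustration of the arithmetic, not Test Observability's actual implementation:

```python
# Illustrative only: why "Percentage" is the safer Y-axis unit when
# daily test-run volume varies. The per-day totals below are hypothetical.
daily_runs = {
    "19 Nov": {"total_runs": 650, "flaky_runs": 30},
    "20 Nov": {"total_runs": 560, "flaky_runs": 30},
}

for day, counts in daily_runs.items():
    pct = 100 * counts["flaky_runs"] / counts["total_runs"]
    print(f'{day}: {counts["flaky_runs"]} flaky runs ({pct:.2f}%)')
```

Here the absolute count is flat (30 flaky runs on both days), but because fewer tests ran on 20 Nov, the flakiness rate actually rose from 4.62% to 5.36%. The Number view would hide that regression; the Percentage view surfaces it.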
Sample use cases
You can use the flakiness widget to track and compare the flakiness of several aspects of your testing organization. Here are a few sample use cases to get you started:
Analyze module-wise and team-wise flakiness
You can configure separate segments for different modules or teams in your test suites. You can use segments in combination with the following filters to identify modules and teams:
- Unique build names filter to identify build names that belong to a particular module or team.
- Users filter to differentiate between team members who triggered the build.
- Folder names filter to identify modules based on folders in your test repository.
- Build tags and Test Tags that represent team or module information.
Consider the following example in which the flakiness of tests in three modules is compared.
Here, the three line charts represent Module A (purple line), Module B (blue line), and Module C (yellow line) in a test suite. Such a graph can quickly tell you that Module A has relatively low flakiness over time, Module B tends to display the most flakiness, and Module C displays a moderate frequency of flakiness. Using this insight, you can focus on Module B and find out the reasons for the persistently high number of flaky tests using the drill-down feature. In many cases, you will be able to apply best practices followed by top-performing teams to reduce flakiness in the tests handled by other teams.
To create the above widget, configure a different Folder Names filter on each of the three segments that define Module A, Module B, and Module C, as shown in the following sample configuration.
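Conceptually, the three-segment setup reduces to the structure below. This is a hypothetical representation for clarity only: the actual configuration is done in the dashboard UI, and the field names and folder paths here are illustrative, not a Test Observability API.

```python
# Hypothetical sketch of a three-segment Flakiness widget configuration.
# Field names and folder paths are illustrative; the real setup is done in the UI.
widget_config = {
    "name": "Module-wise flakiness",
    "y_axis_unit": "percentage",  # Number/Percentage setting
    "segments": [
        {"label": "Module A", "filters": {"folder_names": ["tests/module-a/"]}},
        {"label": "Module B", "filters": {"folder_names": ["tests/module-b/"]}},
        {"label": "Module C", "filters": {"folder_names": ["tests/module-c/"]}},
    ],
}

# Each segment becomes one line in the chart; its filter scopes which
# test runs contribute to that line's flakiness values.
for segment in widget_config["segments"]:
    print(segment["label"], "->", segment["filters"]["folder_names"])
```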
Analyze flakiness on different platforms
You can measure flakiness across multiple device, OS, and browser combinations using the Flakiness widget. To do so, configure a separate segment for each OS-device-browser combination that you want to track.
In the following example, flakiness in tests run on three different browsers is compared.
Here, the three line charts represent the flakiness of tests run on Browser A (purple line), Browser B (yellow line), and Browser C (blue line). This graph shows that the flakiness of tests run on Browser A varies more than that of Browser B or C, and that tests run on Browser C display less flakiness than those on the other browsers. You can analyze further using the drill-down feature. With these insights, you can concentrate on reducing flakiness in tests run on Browsers A and B.