Test Executions widget in custom dashboards
Monitor the scale of test automation activity in your organization.
This widget tracks the number of test executions according to the configuration you select, giving you a view of the scale of test automation activity in your organization. You can use the Test Executions widget to forecast the future scale of testing and to measure progress toward your goals. An upward trend indicates that the scale of test automation is increasing.
The Test Executions widget is a collection of one or more line charts in which the X-axis represents time and the Y-axis represents the number of test executions. Each line is a different segment that you can configure to compare different projects, builds, users, etc.
In the sample above, there are two segments: Test Executions A (yellow line) and Test Executions B (blue line). Test Executions A records more test executions than Test Executions B at every point in time, and the variance in the number of executions is also higher for Test Executions A.
Drill down for more information
Test Observability lets you investigate additional contextual information on all dashboard widgets using the drill-down feature.
You can use the drill-down feature in the Test Executions widget to gather more insights. For example, if you see a drop in the number of test executions at any point, you can investigate the reasons for this drop.
Follow these steps to use the drill-down feature:
- Hover over any point in the Test Executions widget and click View breakdown. A project-wise breakdown of the test execution metrics for the selected date range opens in a side pane.
- Click View tests. Tests Health opens with the applicable filters applied so that you can further investigate fluctuations in the number of test executions.
Widget configuration - Test Executions
You can configure the following options in the Test Executions widget:
- Widget name: A suitable name to easily identify the purpose of the widget.
- Description: An optional description that explains the widget's purpose in detail. Users can view this description, and gain context about the widget, by hovering over the info icon on the widget.
- Chart Summary: A toggle to show or hide the chart summary, a concise banner that displays summarized information on your Test Executions widget. By default, the widget shows the total test execution count as the chart summary. The chart summary is available only on widgets with a single segment.
- Segments: Add up to five segments in the Test Executions widget using the Add segment option. Each segment appears as a separate line chart in the widget. Use segments along with filters to compare different projects, builds, users, and so on.
- Filter: Add a filter to include only the data you want in a particular segment. You can filter data by Projects, Unique Build Names, Users, Build Tags, Test Tags, Hooks Visibility, Host Names, Folder names, Device, OS, and Browser. You can also import filters from other widgets to avoid duplicating effort.
Sample use cases
You can use the Test Executions widget to track and compare the number of test executions in different sections of your testing organization. Here are a few sample use cases to get you started:
Analyze test executions in different modules or teams
You can configure separate segments in the Test Executions widget for different modules or teams in your test suites. To do this, you need to use segments in combination with the following filters to identify modules and teams:
- Unique build names filter to identify build names that belong to a particular module or team.
- Users filter to differentiate between team members who triggered the build.
- Folder names filter to identify modules based on folders in your test repository.
- Build Tags and Test Tags filters to select tags that represent team or module information, as in the tagging sketch after this list.
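For the Test Tags filter to have data to match, your tests need to carry team or module identifiers. The snippet below is a minimal JUnit 5 sketch; the tag names are hypothetical, and it assumes your framework's native tags surface as Test Tags in Test Observability.

```java
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

// Hypothetical tag names, used for illustration only. The assumption is
// that framework-native tags appear as Test Tags in Test Observability.
@Tag("team-payments")
class CheckoutTests {

    @Test
    @Tag("module-checkout")
    void orderCompletesWithValidCard() {
        // ... test body
    }
}
```

With tests tagged this way, a segment filtered on Test Tags = team-payments would chart only that team's executions.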
Consider the following example in which the number of test executions in three modules is tracked.
Here, the three line charts represent Module A (purple line), Module B (blue line), and Module C (yellow line) in a test suite. Such a graph quickly tells you that Module A has a relatively high number of test executions over time, Module B tends to have the fewest, and Module C sits in between. If the modules are relatively similar in size and complexity, you can use these insights to focus on Module B and find the reasons for its low number of test executions using the drill-down feature. In many cases, you can then apply the best practices of top-performing teams to help other teams increase the scale of their test automation activity.
To create the above widget, configure a different Folder names filter on each of the three segments that define Module A, Module B, and Module C, as shown in the following sample configuration.
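The Folder names filter matches the directory structure of your test repository. As a purely illustrative sketch (the folder names below are hypothetical), a layout like this would let each segment filter on one module's folder:

```
tests/
├── module-a/   # segment 1: Folder names = module-a
├── module-b/   # segment 2: Folder names = module-b
└── module-c/   # segment 3: Folder names = module-c
```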
Analyze test executions on different platforms
You can measure the number of test executions across multiple device, OS, and browser combinations using the Test Executions widget. To do this, configure a separate segment for each OS-device-browser combination that you want to track.
In the following example, test executions run on three different browsers are compared.
Here, the three line charts represent the number of test executions run on Browser A (purple line), Browser B (yellow line), and Browser C (blue line). This graph shows that many more test executions run on Browser C than on Browsers A or B, and that far fewer tests run on Browser B than on the other browsers. You can dig deeper using the drill-down feature and, with these insights, focus on increasing the number of tests executed on Browser B.
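The Device, OS, and Browser values that these segment filters match come from the sessions your tests run on. As a minimal sketch, assuming a standard Selenium setup on BrowserStack (the credentials, browser, OS, and build name below are placeholders), each segment could correspond to a different browser passed in the capabilities:

```java
import org.openqa.selenium.MutableCapabilities;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.RemoteWebDriver;

import java.net.URL;
import java.util.HashMap;
import java.util.Map;

public class BrowserSegmentExample {
    public static void main(String[] args) throws Exception {
        // Placeholder browser/OS values. Run once per combination you want
        // to compare; each combination becomes a segment's filter target.
        MutableCapabilities caps = new MutableCapabilities();
        caps.setCapability("browserName", "Chrome");

        Map<String, Object> bstackOptions = new HashMap<>();
        bstackOptions.put("os", "Windows");
        bstackOptions.put("osVersion", "11");
        bstackOptions.put("buildName", "cross-browser-regression"); // hypothetical name
        caps.setCapability("bstack:options", bstackOptions);

        WebDriver driver = new RemoteWebDriver(
                new URL("https://YOUR_USERNAME:YOUR_ACCESS_KEY@hub.browserstack.com/wd/hub"),
                caps);
        // ... run the test, then:
        driver.quit();
    }
}
```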