Building a test strategy is tricky. There are many moving parts, and every organization is different. Where do you start? What do you prioritize? In this webinar, Benjamin Bischoff, Test Automation Engineer at Trivago, shares how they created and implemented an end-to-end test strategy for their main web application, and how it has evolved ever since.
He walks through the different kinds of end-to-end tests they run at various stages of their CI pipeline and the benefits of each. Benjamin also shares implementation challenges and the many factors considered while shaping the strategy, such as technology, test environments, processes, and development principles.
Along the way, Benjamin was asked questions about the many whys and wherefores involved in the process. Here's a roundup of his answers:
If test automation is a product, who is the product owner, and who are the clients?
Test automation should be treated as a product, with the same care as the application under test. Like application code, test automation needs to be maintained, refactored, tested, and documented thoroughly. In our setup, the main client as well as the product owner is the QA team, as they know exactly which features they need to do their job more efficiently.
What are your criteria for defining a test as flaky, and what kind of data is shown in the flakiness detection jobs?
Our tests are deterministic, so they need to show the same result every time they run. If they don't, and the inconsistency is caused by factors other than changes in the environment or external dependencies, we consider them flaky.
In our flakiness detection jobs, which run overnight, we collect data and visual cues from the successes and failures of each test scenario (run time, failure category, stack trace, performed and skipped steps, screenshots, video recordings, etc.) so we can build a comprehensible dashboard from them.
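As an illustration of the idea, a nightly job could aggregate these per-scenario results into flakiness candidates along the following lines. This is a minimal sketch in Java; all type and field names here are hypothetical, not Trivago's actual implementation.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class FlakinessReport {

    // One result record per scenario execution in a nightly job (hypothetical shape).
    record ScenarioRun(String scenarioName, boolean passed,
                       long runTimeMillis, String failureCategory) {}

    // Aggregated view that could feed a dashboard.
    record ScenarioStats(String scenarioName, long runs, long failures,
                         double failureRate, double avgRunTimeMillis) {}

    public static List<ScenarioStats> aggregate(List<ScenarioRun> runs) {
        Map<String, List<ScenarioRun>> byScenario = runs.stream()
                .collect(Collectors.groupingBy(ScenarioRun::scenarioName));

        return byScenario.entrySet().stream()
                .map(e -> {
                    List<ScenarioRun> r = e.getValue();
                    long failures = r.stream().filter(run -> !run.passed()).count();
                    double avgTime = r.stream()
                            .mapToLong(ScenarioRun::runTimeMillis)
                            .average().orElse(0);
                    return new ScenarioStats(e.getKey(), r.size(), failures,
                            (double) failures / r.size(), avgTime);
                })
                // A scenario that neither always passes nor always fails across
                // identical nightly runs is a flakiness candidate.
                .filter(s -> s.failureRate() > 0 && s.failureRate() < 1)
                .toList();
    }
}
```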
Have you considered using a quarantine approach for dealing with flaky or failing tests?
We are, in fact, following such a process. If we encounter flaky or consistently failing tests that cannot be fixed right away, we skip them. These skipped tests are then run and analyzed individually so we can determine the exact cause of failure and fix or delete them as soon as possible.
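A common way to implement such a quarantine, sketched here with JUnit 5 tags (the talk does not name the exact mechanism), is to exclude a `quarantine` tag from the main CI run and execute only that tag in a dedicated job:

```java
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

// Illustrative sketch: the main CI suite excludes the "quarantine" tag,
// while a separate job runs only quarantined tests for individual analysis.
class SearchJourneyTest {

    @Test
    void stableScenario() {
        // runs in the main CI pipeline
    }

    @Tag("quarantine") // skipped in the main run until the root cause is fixed
    @Test
    void knownFlakyScenario() {
        // executed and analyzed in a dedicated quarantine job
    }
}
```

With Maven Surefire and JUnit 5, for instance, the main run could pass `-DexcludedGroups=quarantine` while the quarantine job passes `-Dgroups=quarantine`.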
How did you centralize exploratory and automated testing within the pipelines? Does QA have their own pipeline, separate from continuous integration?
Automation is a tool that enables better exploration, so we don't strictly separate the two. Along with the automated tests running in our common CI pipelines, QA has a separate pipeline that can be used to trigger and analyze tests on different browsers and devices. This can be used for test qualification, flakiness investigation, or testing feature branches that are currently in development.
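For illustration, such a pipeline typically passes the target browser as a parameter to the test run. A minimal Selenium sketch (names are placeholders, not Trivago's code) might look like this:

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

// A pipeline parameter such as -Dbrowser=firefox selects the target browser,
// so the same suite can be triggered against different browsers per job.
public class DriverFactory {

    public static WebDriver createDriver() {
        String browser = System.getProperty("browser", "chrome");
        return switch (browser.toLowerCase()) {
            case "firefox" -> new FirefoxDriver();
            case "chrome"  -> new ChromeDriver();
            default -> throw new IllegalArgumentException(
                    "Unsupported browser: " + browser);
        };
    }
}
```

A QA pipeline can then trigger the same suite once per matrix entry, e.g. with `-Dbrowser=firefox` on one run and `-Dbrowser=chrome` on another.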
How do you do the test qualifications?
Whenever a new automated test scenario is created, we run it 100 times in a row via our QA pipeline, usually against our staging environment, which is very close to production. This way, we also ensure that it will run properly within our main CI pipeline later.
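One way to express this 100-run qualification, assuming a JUnit 5-based suite (the webinar does not specify the mechanism; a CI loop over the scenario would work just as well), is JUnit's built-in repetition support:

```java
import org.junit.jupiter.api.RepeatedTest;

// Minimal sketch of the qualification idea: the new scenario must pass
// all 100 consecutive runs before it is admitted to the main CI pipeline.
class NewScenarioQualificationTest {

    @RepeatedTest(100)
    void newScenarioIsStable() {
        // execute the new end-to-end scenario against staging here
    }
}
```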
How do you identify which end-to-end paths to automate?
This is done mainly by the QA team, as they know which features are used the most by our customers and are business-critical. Another criterion for choosing a path is how likely it is to be affected by other features.
How do you avoid duplicated end-to-end tests from other test layers (unit, integration, etc.)?
Communication! Since the different kinds of tests are fairly separate from each other, the best way to know what is being tested in other layers is to talk to the teams that maintain them. They can either tell you exactly what is covered or at least point you to the respective code repositories or documentation so you can check for yourself.
You mentioned synthetic monitoring is faster and more accurate than other monitoring. What are the reasons for this?
The main reason synthetic monitoring is more accurate than monitoring focused on the availability of services is that synthetic monitoring checks that the application under test is actually usable. It does this by running a complete user journey against live instances instead of just pinging different parts of the infrastructure. When I say synthetic monitoring is faster, I don't necessarily mean the speed of the test runs but the speed of the feedback.
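To make the idea concrete, here is a sketch of a synthetic monitor: a scheduled job that drives a complete user journey against a live instance and alerts when the journey fails. This is an assumption-laden illustration, not Trivago's implementation; the URL, selectors, and alerting hook are placeholders.

```java
import java.time.Duration;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class SyntheticSearchMonitor {

    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        driver.manage().timeouts().implicitlyWait(Duration.ofSeconds(10));
        try {
            // Run the full user journey against a live instance
            // instead of just pinging infrastructure endpoints.
            driver.get("https://www.example.com"); // placeholder URL
            driver.findElement(By.name("query")).sendKeys("Berlin");
            driver.findElement(By.cssSelector("button[type=submit]")).click();

            // The journey only counts as "usable" if real results render.
            if (driver.findElements(By.cssSelector(".result-item")).isEmpty()) {
                alert("Search journey produced no results");
            }
        } catch (Exception e) {
            alert("Search journey failed: " + e.getMessage());
        } finally {
            driver.quit();
        }
    }

    private static void alert(String message) {
        // Placeholder: wire this to your actual alerting/on-call system.
        System.err.println("ALERT: " + message);
    }
}
```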
What would be your suggestion to start with the automation for a project/system that already exists but has no coverage at all?
The ideal way is to start by talking to the involved parties, such as the project owners, developers, and QA, to find out what the first step should be. As far as possible, make sure that everyone is on the same page before you do anything. Starting a new automation project can be overwhelming, so focusing on one easier-to-reach goal first helps you get started.
What was the difference between the QA team and the test automation team prior to merging them?
The QA team's main tasks included exploratory testing and application releases. Later, they were also responsible for defining and writing test cases. The test automation team was mainly responsible for providing the frameworks, pipelines and infrastructure for running those tests.
What changed in your day-to-day tasks when the automation team was integrated into the QA team?
My day-to-day tasks are basically the same as before. However, I work much more closely with QA on improving the technologies behind our tests. I have also learned a lot about exploration, so I have started helping with that as well. You could say that our skill sets have been overlapping since the merge.
It would seem there is a line beyond which the effort required for testing exceeds the actual benefits. Where do you think that line is?
That is true of anything that creates more problems than it solves. For me, testing is an absolutely essential part of software development: it ensures that a product is stable, which makes it easier to maintain, understand, and refactor. A test automation project typically requires a lot of patience before everything falls into place, but if treated carefully, it is worth it in the end.
How do you empower responsibility for testing quality across your organization? How much is it top-down, how much bottom-up?
The answer here, again, is communication. Quality cannot be enforced from top to bottom. Rather, it is something that everyone involved in the creation of a product must internalize and understand the benefits of.