A good automated test suite gives actionable feedback, helps fix bugs faster, and enables rapid delivery of software. A good automated test suite on the cloud does it more cost-efficiently.
But more often than not, functional testing on the cloud ends up delaying feedback, increasing flakiness, and providing zero visibility into failures—results completely at odds with initial expectations. It ends up hampering the pace of your delivery lifecycle.
To truly make the most of functional testing on the cloud, you have to reconsider the testing practices you follow.
At BrowserStack, we help thousands of customers run functional test suites on the cloud every month. Our QA and Support teams identified certain practices that, when applied, can help teams of all sizes get better builds in less time.
Here they are, compiled into a 3-part, 12-step guide for better functional testing on the cloud. Most of it comes down to feasible changes you can make to your setup, your test suites, and how you work with your solution provider.
Part 1: The Setup
Architectures and system setups change depending on what you're optimizing for. More often than not, you'll be optimizing for some mix of speed, stability, and cost, and these goals are not mutually exclusive.
A stable, cost-efficient test setup will let you:
- run tests faster,
- re-run tests faster---whether it's a single test (for identifying flaky scripts) or entire suites after a bug fix,
- reduce flakiness,
- easily identify buggy commits, and
- scale with your tests, projects, and teams.
Here's how you can optimize your setup so it meets those requirements.
1.1 Target an acceptable test execution time
The idea is to get quality builds faster, but it's a balancing act: you have to run the optimal number of tests in a reasonably short window. 10 minutes is the general norm for BrowserStack users, but it can vary depending on your testing strategy.
Aim to finish all automated tests within the targeted test execution time. This helps you plan better and identify unexpected failures (or flakiness) in the test setup.
1.2 Run your tests on every pull request (if not after every commit)
If your tests don't take forever to run, you are free to run them more often.
Cloud testing platforms charge you on the basis of your parallelization requirements (i.e., the ability to run a number of tests at the same time), and not the frequency of their usage.
To isolate bugs early in the lifecycle, run tests after every commit. This can get overwhelming, so an alternative is to run tests on every pull request.
The aim is to spread out your testing throughout the release cycle and avoid a bottleneck towards the end.
1.3 Configure your CI for efficiency
You will want to reconsider your CI setup depending on your team's requirements, the frequency of testing, the projects you test in parallel, and the number of tests you intend to run. Piling every job onto a single CI machine makes it busy and unresponsive, which leads to slow and flaky tests. That's why most teams use a distributed master-slave configuration in CI, where the master machine only dispatches jobs, monitors them, and reports the results, while the other machines do the actual work.
Lastly, ensure your machine can handle the number of requests that your tests make when they're run in parallel.
NOTE
Network latency between your CI machine and your cloud testing platform can be minimized if your cloud vendor's data centers are close to you, geographically speaking.
Part 2: The Test Suites
Tests that run blazing fast on your workstation/device often take far longer to run on the cloud. This is expected, due to factors like network latency between your machines and the testing platform, browser setup (or device startup) time, your webpage's loading latencies, etc. Despite those factors, there are a few things you can do to speed up your tests considerably.
2.1 Keep the tests short
Keeping the tests short not only leads to fast execution times but also makes them less flaky (and easier to iterate or fix later).
Here's what we suggest:
- Skip testing elements that you don't need to test.
- Use Selenium commands sparingly. Each command is a round trip that adds to your test execution time. For instance, instead of an if-else loop that re-requests an element until its ID is found, try an explicit wait (see the sketch after this list).
- If the test case can be split into a bunch of smaller ones, split it.
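To illustrate the second point, here's a minimal sketch, assuming Python with the Selenium bindings (the URL and element ID are placeholders):

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://example.com")  # placeholder URL

# Anti-pattern: a hand-rolled retry loop issues a fresh find_element
# command on every iteration, and the fixed sleep wastes time even
# when the element appears immediately.
#
#   while True:
#       try:
#           button = driver.find_element(By.ID, "submit-button")
#           break
#       except NoSuchElementException:
#           time.sleep(1)

# Better: one explicit wait. WebDriverWait polls the condition
# (every 0.5 s by default) and returns the element as soon as it
# becomes clickable, or fails cleanly after 10 seconds.
button = WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.ID, "submit-button"))
)
button.click()
driver.quit()
```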
2.2 Embrace parallelization
Instead of lining your tests up to run one after the other (sequentially), run them concurrently with parallelization.
Most modern testing frameworks and cloud solutions support this out-of-the-box. For instance, if you have a test script that tests 20 invalid inputs, split it into 20 smaller scripts and run them all at once.
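As a sketch of what that split can look like, here's a parametrized test, assuming Python with pytest; the validator and inputs are hypothetical stand-ins for the 20 invalid inputs in the example:

```python
import re
import pytest

# Placeholder validator standing in for the code (or Selenium flow)
# under test.
def validate_email(email: str) -> bool:
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email) is not None

# Each invalid input becomes its own independent test case.
INVALID_EMAILS = ["", "no-at-sign", "two@@signs.com", "spaces in@mail.com"]

@pytest.mark.parametrize("email", INVALID_EMAILS)
def test_rejects_invalid_email(email):
    assert validate_email(email) is False
```

With the pytest-xdist plugin installed, `pytest -n 4` then runs four of these cases at a time; on a cloud grid, each case can likewise occupy its own parallel session instead of queuing behind the others.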
2.3 Mix-and-match browsers to prevent queuing
Most cloud testing platforms set soft limits on their browsers and devices to meet the demand at peak times, prevent abuse, and avoid denial of service attacks. Once you hit these limits, your tests will be queued to run after the current tests are done.
To avoid hitting those soft limits, start "blending".
Blending means distributing your tests across browsers, so more tests run in parallel without hitting the soft limit on any one browser.
For example, say you have 100 tests to run on each of Chrome, Firefox, and IE, with 100 parallel slots in your plan and a soft limit of 80 concurrent sessions per browser type. Fire off all 100 Chrome tests at once, and only 80 will run; the remaining 20 will be queued.
To avoid the queue, run the first 50 Chrome tests and the first 50 Firefox tests at once: all 100 slots are busy, but no browser exceeds its soft limit. Once they're done, run the remaining 50 of each (and schedule the IE tests the same way).
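One way to generalize that schedule is to slice each browser's test list into per-batch chunks sized so that no browser exceeds its soft limit and each batch fits within your total slots. A minimal sketch in plain Python, using the hypothetical numbers from the example above:

```python
from itertools import islice

PER_BROWSER_SOFT_LIMIT = 80   # concurrent sessions allowed per browser type
TOTAL_PARALLEL_SLOTS = 100    # concurrent sessions in your plan

# 100 hypothetical test names per browser, matching the example above.
tests = {
    browser: [f"{browser}-test-{i}" for i in range(100)]
    for browser in ("chrome", "firefox", "ie")
}

def blended_batches(tests, total_slots, per_browser_limit):
    """Yield batches that fill the total parallel slots without
    exceeding the soft limit on any single browser."""
    queues = {browser: iter(t) for browser, t in tests.items()}
    per_browser = min(per_browser_limit, total_slots // len(queues))
    while True:
        batch = []
        for queue in queues.values():
            batch.extend(islice(queue, per_browser))
        if not batch:
            return
        yield batch

for n, batch in enumerate(
    blended_batches(tests, TOTAL_PARALLEL_SLOTS, PER_BROWSER_SOFT_LIMIT), 1
):
    print(f"batch {n}: {len(batch)} tests in parallel")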
2.4 Use built-in debugging options
Cloud vendors offer built-in debugging assistance, such as screenshots at every request (or every failure), video recordings of the test session, Selenium logs, network logs, etc. Instead of writing your own code and logic for taking screenshots and saving them locally, use those features.
Tip: Where possible, organize your tests into builds and projects, and label them appropriately. Should a test suite fail, it would be easier to isolate the bug to a build and debug faster.
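As an illustration of both points, here's a sketch in Python with Selenium 4. The project, build, and session names are hypothetical, and the `bstack:options` fields follow BrowserStack's capability format; check your own vendor's equivalents:

```python
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.set_capability("bstack:options", {
    "projectName": "checkout-service",          # hypothetical project label
    "buildName": "checkout-service-PR-142",     # hypothetical build label
    "sessionName": "guest-checkout-happy-path", # hypothetical test label
    "debug": True,        # vendor captures screenshots at each step
    "networkLogs": True,  # vendor captures network logs
})

# Credentials go in the hub URL; replace the placeholders with your own.
driver = webdriver.Remote(
    command_executor="https://USERNAME:ACCESS_KEY@hub.browserstack.com/wd/hub",
    options=options,
)
```

With labels like these on every session, a failed run shows up under its project and build in the dashboard, alongside the screenshots and logs you'd otherwise have captured yourself.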
Part 3: The (Cloud) Solution Provider
The cloud gives you access to more browsers (types and versions) than your local workstation could ever host. It's instantly scalable and has no maintenance overhead. The trade-off for cost-efficient, on-demand scalability, however, is network latency.
There are ways to test smartly on the cloud and stay confident in your code's quality and functionality. Here are some suggestions.
3.1 Test on the right browsers and versions
Developers tend to write code for, and test on, the latest version of Chrome. This would be enough if all your end users were also on the latest Chrome. But they aren't, so you'll have to do cross-browser testing across browsers and versions:
- that are favored by your end users,
- that have the highest usage trends in your target markets / countries at a given time,
- that correlate with large drops in your conversion funnel (payments, signups, etc.).
Prioritize your testing on these browsers, instead of every browser available on the cloud platform.
If you have enough parallel threads / concurrent slots with your cloud vendor, you can look at testing on additional browsers and versions without overshooting your targeted test completion time.
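One way to turn analytics data into a browser list is to rank browser/version pairs by traffic share and keep the smallest set that covers most of your users. A minimal sketch in plain Python; the traffic numbers are made up:

```python
# Hypothetical traffic shares from your analytics export.
traffic = {
    ("Chrome", "latest"): 0.46,
    ("Safari", "latest"): 0.21,
    ("Chrome", "latest-1"): 0.11,
    ("Firefox", "latest"): 0.08,
    ("Edge", "latest"): 0.06,
    ("Safari", "latest-1"): 0.04,
    ("IE", "11"): 0.02,
}

def pick_browsers(traffic, coverage_target=0.95):
    """Smallest set of browser/version pairs covering the target share."""
    picked, covered = [], 0.0
    for browser, share in sorted(traffic.items(), key=lambda kv: -kv[1]):
        picked.append(browser)
        covered += share
        if covered >= coverage_target:
            break
    return picked, covered

browsers, covered = pick_browsers(traffic)
print(f"{len(browsers)} configurations cover {covered:.0%} of traffic")
```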
3.2 Test on different screen sizes
A product built for developers' use should not break on those huge monitors developers are fond of.
Your front-end developers spent a considerable amount of time adapting your website to different resolutions; it'd be a waste NOT to test it on different screen sizes. If you expect your users to view your application at different screen sizes and resolutions, be sure to test your responsive design.
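Here's a minimal sketch, in Python with Selenium, that replays the same check across several window sizes; the resolutions, URL, and assertion are placeholders:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Common desktop resolutions; swap in whatever your analytics show.
RESOLUTIONS = [(1366, 768), (1536, 864), (1920, 1080), (2560, 1440)]

driver = webdriver.Chrome()
try:
    for width, height in RESOLUTIONS:
        driver.set_window_size(width, height)
        driver.get("https://example.com")  # placeholder URL
        # Placeholder assertion: the nav menu should stay visible
        # instead of collapsing or overflowing at this size.
        nav = driver.find_element(By.TAG_NAME, "nav")
        assert nav.is_displayed(), f"nav hidden at {width}x{height}"
finally:
    driver.quit()
```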
3.3 Test on the right mobile devices
Cloud platforms will have plenty of real mobile devices you can test on. At BrowserStack alone, we have over 2000 iOS and Android devices. That doesn't mean you have to test on every last one of them.
Just like with browsers and versions, find the mobile devices popular in your target market at a given time. Here's a quick blog post we published a while ago, on how to find the best mobile devices to test on.
3.4 Run tests in real user conditions
A lot of top-of-the-funnel conversions (like signups) take place on mobile. For eCommerce, over 40% of purchases are made through mobile devices. A buggy mobile app can—and will—directly affect your revenue.
Test a mobile app in real-world conditions. Configure timezones, IP addresses and locations, network speeds, etc. to mimic the environment your target audience is in. To find bugs that would affect real user experience, test on real devices instead of emulators or simulators, right before release.
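As an illustration of what that configuration can look like, here's a sketch in Python. The capability names below are modeled on BrowserStack's `bstack:options` fields and vary by vendor, so treat them as placeholders and check your provider's docs:

```python
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.set_capability("bstack:options", {
    # Mimic a user in India on a slow mobile connection.
    # All three field names are vendor-specific; verify against
    # your provider's documentation before use.
    "timezone": "Kolkata",             # device/browser timezone
    "geoLocation": "IN",               # ISO country code for IP geolocation
    "networkProfile": "3g-gprs-good",  # simulated network speed profile
})

driver = webdriver.Remote(
    command_executor="https://USERNAME:ACCESS_KEY@hub.browserstack.com/wd/hub",
    options=options,
)
```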
3.5 Set up alerting, monitoring and bug tracking
Instead of waiting around till the tests are done running, you can set up real-time alerts to get notified exactly when and where your test or build failed.
You can do this with a webhook from your CI machine or your cloud testing platform (or their REST APIs) to get the status of your test runs and set up notifications for yourself and your team. (We have a Slack integration for that, and more are in the works).
You can streamline your workflows further by integrating your cloud platform with your bug tracker. The integration can automatically create tasks in your bug tracking tool and populate them with all the data developers need to reproduce and fix the bug. And if your setup traces bugs back to specific commits, you'll know who to assign them to.
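As a sketch of that wiring, here's a short Python script using requests. The endpoint and response shape follow BrowserStack's Automate REST API, and the Slack webhook URL is a placeholder you'd generate in Slack; adjust both for your own vendor and workspace:

```python
import requests

# Placeholder credentials and webhook URL.
BS_USER, BS_KEY = "USERNAME", "ACCESS_KEY"
SLACK_WEBHOOK = "https://hooks.slack.com/services/T000/B000/XXXX"

# Fetch recent builds from the cloud platform's REST API.
builds = requests.get(
    "https://api.browserstack.com/automate/builds.json",
    auth=(BS_USER, BS_KEY),
    timeout=10,
).json()

# Post a message to the team channel for any failed build.
for entry in builds:
    build = entry["automation_build"]
    if build["status"] == "failed":
        requests.post(
            SLACK_WEBHOOK,
            json={"text": f"Build '{build['name']}' failed on BrowserStack"},
            timeout=10,
        )
```

Run on a schedule (or triggered by your CI), a script like this tells the team about a failed build minutes after it happens, instead of whenever someone next checks the dashboard.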
Endnote
Functional testing can be a pain. Functional testing on the cloud can be a pain too, with the added sting of network latency. This blog post lists some easy optimizations that'll help you not just test better, but also make the most of your cloud testing platform.
If you still haven't made up your mind on a cloud testing platform, you should give us a try.
Register for BrowserStack Summer of Learning to get more insights on running faster automated functional tests!