Introduction
Sainsbury’s is one of the oldest and most popular supermarket chains in the UK. From groceries and clothing to homewares, electricals, and more, Sainsbury’s – and its portfolio brands – cater to millions of customers who access its web and mobile apps from a diverse range of browsers and devices.
The leadership wanted the Groceries Online team to increase release frequency from once a month to once every two weeks to retain a competitive advantage. To achieve this, the team moved from manual testing to test automation on BrowserStack to speed up testing. The result? The team now releases every two weeks, as directed, and has significantly improved coverage, quality, and team productivity.

Scale testing to meet release goals
For Sainsbury’s Groceries Online, a flawless front-end is non-negotiable. Unnoticed UI bugs can disrupt the buyer journey, hurting user experience, revenue, and brand reputation in the long run. Accordingly, the team tests across the range of browsers and devices popular amongst their customers to ensure a consistent front-end everywhere.
They started off by testing manually on BrowserStack, which was sufficient for a once-a-month release cycle. However, scaling manual testing to the new release cadence while maintaining coverage across multiple browser-device-OS combinations proved difficult and extremely time-consuming.
Manual regression testing for every release – every two weeks – took almost five days, even when split among different testers. “It took up too much time. We couldn’t concentrate on other sprint work. It just wasn’t feasible to spend 10 days on regression a month to be able to release twice,” says Saradha Balaji, Lead Test Automation Engineer at Sainsbury’s. Besides speed, the team faced other challenges too.
“We cover end-to-end testing as part of regression, so it can frustrate anyone testing the same scenario across 10-20 browsers. There is a high chance of mistakes and oversight. Testers might simply skip tests. These issues don’t happen in automation,” says Saradha. Accordingly, the team started automating their tests with Cypress. But without parallelization, test runs still executed sequentially, so there was no significant saving in testing time.
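To illustrate the parallelization the team was missing, here is a minimal sketch of a `browserstack.json` for BrowserStack’s Cypress CLI, where a `parallels` setting fans the suite out across multiple concurrent sessions instead of running browsers one after another. The browser list, project names, and parallel count below are illustrative placeholders, not Sainsbury’s actual configuration:

```json
{
  "auth": {
    "username": "YOUR_BROWSERSTACK_USERNAME",
    "access_key": "YOUR_BROWSERSTACK_ACCESS_KEY"
  },
  "browsers": [
    { "browser": "chrome", "os": "Windows 10", "versions": ["latest"] },
    { "browser": "firefox", "os": "Windows 10", "versions": ["latest"] },
    { "browser": "edge", "os": "Windows 10", "versions": ["latest"] }
  ],
  "run_settings": {
    "cypress_config_file": "./cypress.json",
    "project_name": "groceries-online",
    "build_name": "regression-sketch",
    "parallels": 5
  }
}
```

With a configuration along these lines, `browserstack-cypress run` distributes specs across the requested parallel sessions, so total run time shrinks roughly in proportion to the parallel count rather than growing with every extra browser in the matrix.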
Moreover, testing was assigned to QA teams on a rotational basis. If the tester who knew the regression suite was on leave, the team had to find a replacement and spend additional time on knowledge transfer. Some scenarios required time-consuming data setup too. Soon, resource dependency and knowledge transfer also became blockers, negating the benefits of automation and slowing down releases.