“Testing for Fragmentation” is a blog series. It looks at market data on the devices, platforms, and browsers in use today, how this diversity comes into play during software development and testing, and what the 2 million+ developers on BrowserStack do to account for it.
In this post, Hylke, from Wehkamp, suggests a risk-based testing strategy to decide which devices to test on.
Whenever the subject of fragmentation comes up in the context of mobile app testing, the focus is mainly on knowing what to test, and where. While the former pivots on knowing your app, its functionality, and expected behavior, the latter revolves around market research and insights into the devices your customers commonly use. But exhaustively testing every part of your app on every possible device is not feasible; it would require tremendous effort and time. There is a workaround for this: risk-based testing.
Consider the right parameters
Risk-based testing is a method in which you focus on, and give priority to, the parts of your app and/or the devices that carry the highest risk. Generally, people tend to think this means the most used parts of your app and the most used devices. While that is not entirely wrong, there are more nuances to consider. For starters, how do we quantify risk? The generic formula to calculate risk is:
Risk = Probability * Impact
The formula is quite straightforward. The amount of risk is based on two variables. Probability is how likely it is that something will break. The second variable is impact: how bad it is when something breaks. Let’s look at an example to see how this works out in practice.
Screen resolution is a good metric to differentiate mobile devices by. Most iPhones are rather similar in size and resolution, but the iPhone SE stands out with its modest 640 × 1136 pixel resolution. It’s an older phone, so general usage numbers are usually low. Going by usage alone, the iPhone SE isn’t a highly relevant target device for our testing.
However, it is important to factor in the phone’s low screen resolution, especially the narrow 640-pixel width. This is one of the main reasons why UI elements break regularly on this device: there simply isn’t enough room to display everything, causing buttons or elements to disappear or overlap, thus limiting user interaction with those key elements.
So the probability of things breaking on the iPhone SE is quite high. The impact, of course, depends on which functionality is rendered unusable. If two buttons overlap and neither can be tapped, that is a serious issue. The iPhone SE, in this case, carries a high risk of potential bugs, which makes it an interesting, and important, device to run tests on.
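To make this concrete, here is a minimal sketch of how the formula could rank devices. The probability and impact scores (on a 1–5 scale) and the device list are purely illustrative, not real usage data:

```python
# Hypothetical risk scoring for a handful of devices.
# Probability: how likely UI breakage is on this device (1-5).
# Impact: how bad a breakage would be for users (1-5).
# All numbers below are made up for illustration.
devices = {
    "iPhone 13": {"probability": 1, "impact": 4},
    "iPhone SE": {"probability": 4, "impact": 4},  # narrow 640px viewport
    "iPhone 11": {"probability": 2, "impact": 3},
}

# Risk = Probability * Impact
ranked = sorted(
    devices.items(),
    key=lambda item: item[1]["probability"] * item[1]["impact"],
    reverse=True,
)

for name, scores in ranked:
    print(f"{name}: risk = {scores['probability'] * scores['impact']}")
```

Even with these toy numbers, the iPhone SE comes out on top: its high probability of breakage outweighs its low usage share.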
Think beyond devices
When only looking at devices, we tend to make generalized assumptions: that all functionality is equally important, and that every test should be run on every device. But assumptions are detrimental, especially when the goal is containing risk. To figure out which functionality within your app should receive more attention, the same risk-based testing approach outlined above can be used.
For instance, we run a webshop, and one of the most important parts of our customer journey is the checkout. The reason is simple: if customers can’t pay, we won’t make any money; there’s your high impact. Probability is an interesting part of the equation in this case; ideally, the probability of our checkout page glitching is low given its importance and the focus it gets. However, we built a hybrid app, meaning roughly 90% is native iOS, while the rest uses our mobile website in a WebView. The checkout is part of this 10%. This means that we suddenly have three more variables to take into account—native iOS, WebView, and the mobile website—all with their own set of quirks and focus areas. So the probability of checkout-related issues just went up, making the checkout a high-risk area of our app, and as such, a very interesting target for testing.
This works both ways. Parts with lower risk can either be skipped or put in a different, lower-priority test suite.
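As a sketch of how those tiers could look in practice, assuming a pytest-based suite (the marker names and tests below are hypothetical):

```python
# Risk tiers expressed as pytest markers. Register them in pytest.ini
# to avoid warnings:
#   [pytest]
#   markers =
#       high_risk: run on every device, every build
#       low_risk: run nightly on a reduced device set
import pytest

@pytest.mark.high_risk
def test_checkout_payment_button_is_tappable():
    # High risk: checkout runs in the WebView and blocks revenue if broken.
    ...

@pytest.mark.low_risk
def test_about_page_renders():
    # Low risk: static content that rarely changes.
    ...
```

Running `pytest -m high_risk` then executes only the high-risk tier, which you could point at your full device matrix, while the low-risk tier runs less often on fewer devices.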
This approach also gives you insight into why you test what you test. People outside of quality assurance are quick to point towards areas they think are important and should be tested. They mean well, but many times this is based on gut feeling. With the risk formula, you can logically explain your approach. More importantly, you can recognize when additional testing adds negligible value.
Fragmentation in release versions
In general, people think that fragmentation exists only at the device/browser/OS level, but it can also happen between different builds of the same app. Not all users update automatically. You could force them to update, but that’s a rather heavy measure to take. Customers generally don’t like being forced to do something, and a forced update will immediately show up in your App Store/Play Store reviews, usually not in a good way.
This implies that, at any given point in time, you might have multiple versions of your app running in production, each with its own set of features, functionality, and bugs (and bug fixes). This adds an extra layer of complexity to the already complex puzzle that is fragmentation testing.
This is also where monitoring comes in. Sure, you can run your tests in production, but then you’re still constrained by the devices you choose to run your tests on. Monitoring effectively turns your entire user base into a test pool, covering devices and OS versions you could never maintain yourself. I can’t stress enough the importance of having a robust monitoring and alerting process in place to detect major issues in time. Setting it up is a comprehensive task, but once done correctly, it gives you access to a very interesting set of data.
For instance, we can see which build has a high crash rate. More importantly, monitoring also gives detailed information about the device, OS, app version, and which class caused the crash. That last part is especially crucial, as it brings immediate focus to the potential source of a bug. You can also extrapolate trends from this information: is a particular device over-represented in the crash data, or does one build stand out with more crashes than the others? This is very valuable information for adapting your test strategy and approach.
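As an illustration, here is a small sketch of how such crash data could be sliced. It assumes crash reports exported to a CSV with hypothetical columns device, os_version, app_version, and crashed_class; your crash-reporting tool’s export format will differ:

```python
# Sketch: finding over-represented devices and builds in crash data.
# Column names and the CSV file are assumptions for illustration.
import pandas as pd

crashes = pd.read_csv("crashes.csv")

# Which devices crash disproportionately often?
print(crashes["device"].value_counts(normalize=True).head(10))

# Does one build stand out with more crashes than the others?
print(crashes.groupby("app_version").size().sort_values(ascending=False))

# Which classes cause the most crashes in the worst build?
worst_build = crashes.groupby("app_version").size().idxmax()
top_classes = (
    crashes.loc[crashes["app_version"] == worst_build, "crashed_class"]
    .value_counts()
    .head(5)
)
print(f"Most-crashing classes in build {worst_build}:")
print(top_classes)
```

A device or build that dominates these counts is a strong candidate for promotion into your high-risk test tier.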
Use crash data as valuable test input
This idea of testing based on risk is a good way of making informed decisions about which devices and which functionality to test. If you supplement this selection with the crash data of your app, what you get is a remarkably targeted, efficient test suite.
It’s important to understand that this is an extension of your test strategy. Of course, you start with testing the most used devices and the most used or important functionality. But once you’ve covered that, it’s a really nice addition that can give you a lot of pointers to high-risk parts of your app.
And this also works the other way around. If there are parts of the app, or devices, that never show failed tests or crashes, remove them from the test suite. A lot of testers freak out at the thought, but it’s the most sensible thing to do. A test that keeps running green over and over again looks nice in the dashboards, but it doesn’t add much value. And since we’re almost always short on time, this is one shortcut that is definitely worth taking.
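As a sketch of what that pruning could look like, assuming you keep per-test pass/fail history (the data below is made up):

```python
# Sketch: flag tests that have never failed in their last N runs as
# candidates for removal or demotion to a lower-priority suite.
N = 200  # how many recent runs to look back over

# Hypothetical test history: test name -> chronological results.
history = {
    "test_checkout_payment": ["pass"] * 150 + ["fail"] + ["pass"] * 49,
    "test_about_page": ["pass"] * 200,
}

always_green = [
    name
    for name, results in history.items()
    if len(results) >= N and "fail" not in results[-N:]
]
print("Candidates to prune or demote:", always_green)
```

Demoted tests don’t have to disappear entirely; they can still run in a nightly or weekly suite, so you keep a safety net without paying for them on every build.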