Cross-browser testing for compatibility
Testing for Fragmentation: Why do you need cross-browser testing for compatibility?

“Testing for Fragmentation” is a blog series. It takes a look at the market data on devices, platforms, browsers, etc. in use today, how this diversity comes into play during software development and testing—and what 2 million+ developers on BrowserStack do to account for it.

In this post, we look at browser internals, the differences among browsers, and the need for cross-browser testing.


“Has anyone spent several hours to get the UI of their application render correctly in all browsers? How do you tackle the frustration when simple styles tend to work in one browser but not others… and by the end of the day you invent a hack to handle it (in some cases that too does not happen), only after wasting your time?”

That was posted on Stack Overflow 11 years ago. And it hasn't become less tedious with time. Cross-browser testing is still the bane of every front-end developer and QA engineer on the planet.


The reason is browser fragmentation, and the only way to cope is by testing for cross-browser compatibility.

Why is browser compatibility an issue in 2020...

...despite W3C’s homogenizing open standards? Or despite the fact that most browsers have adopted Chromium/Blink as their go-to browser engine?

The answer is in the question itself: those standards are open to interpretation.

Browsers with the largest market shares are developed by different vendors. Each of these vendors has different design and development philosophies, which influence how they go about adding features, implementing support for new web technologies, and, ultimately, the end-user experience of their browsers.

Consider this: Despite the same underlying (and open-source) browser engine, the end-user experience turns out to be very different on Chrome and UC Browser. This is because Chrome focuses on delivering performance and integrating with Google’s ecosystem. On the other hand, UC Browser focuses on efficiency through page compression and lower data consumption.

Even the open-source browser projects are not afraid to “break things to make things.”

“Chromium’s mission is to advance the web, and in some cases, we find it beneficial to make a breaking change in order to do that.”

—Blink principles of web compatibility

For the average web developer, that one ‘breaking change’ could mean hours of debugging. You may decide not to use a new, radical browser tech in your website UI, but you will have to account for those changes while developing anyway.

For there are layers to a browser, and each of them affects the Quality of Experience (QoE) you deliver to the end-user. Let’s take a look:

  1. Browser engine: Not to be confused with the JavaScript engine (another core component), the browser engine parses HTML and CSS and renders the result on screen. Blink, Gecko, and WebKit are examples of browser engines.

    While open-source, these engines are different in their operation—even the operating systems they support. iOS, for instance, will only run WebKit browsers. So Chrome on Android and Chrome on iOS are going to be different at the very core.

  2. Features: Vendors often add unique features to their browsers.

    Consider Apple’s Safari, which has Intelligent Tracking Prevention (ITP). It’s a machine-learning algorithm that understands the difference between first- and third-party cookies. Based on a cookie’s purpose (for instance, saving session details vs. cross-site tracking for ad targeting), ITP deletes cookies from users’ devices at various intervals.

    Even today, ITP continues to trouble online publishers who monetise their site content through behaviourally-targeted ads.
  3. Support for popular web technology: Which web technology to support + when and how best to implement it—these decisions are made by popular opinion within the browser’s contributor community.
  4. Update frequency: The speed with which contributing engineers release new features or security patches is different for each browser.

    Chrome gets updated every 6-8 weeks; Safari, once or twice a year. So if two or more browsers were to pick up support for a new feature (say, the CSS `resolution` media feature), it could take a while for that feature to become universally available.
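Those staggered release cadences are why teams often track, per feature, the first browser version that shipped it, and compare that against the versions their audience still uses. Here’s a minimal sketch in JavaScript; the support versions roughly approximate CSS Grid’s rollout, and the audience versions are made up for illustration:

```javascript
// Sketch: given the first version of each browser that shipped a feature,
// and the oldest versions your audience still uses, list the browsers
// that need a fallback. All numbers are illustrative, not authoritative.
const featureSupport = { chrome: 57, firefox: 52, safari: 10.1, edge: 16 }; // ~CSS Grid
const audienceMinVersions = { chrome: 60, firefox: 52, safari: 9, edge: 15 };

function browsersNeedingFallback(support, audience) {
  return Object.keys(audience).filter(
    (browser) => !(browser in support) || audience[browser] < support[browser]
  );
}

console.log(browsersNeedingFallback(featureSupport, audienceMinVersions));
// → [ 'safari', 'edge' ]
```

In practice, services like caniuse expose this kind of support data; the point is that "does my audience have this feature?" is answerable mechanically once you know your audience's browser versions.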


Okay. Browsers are different. Why should I care?

Now, individually, these differences are minuscule and unlikely to affect a large percentage of your site’s traffic. But your site experience depends on how well you have accounted for all the differences across browser layers, and the probability of an unexpected UI bug is high.

The more you scale or expand your reach into new markets, the larger the impact of unexpected UI breaks in production becomes. And this affects not just the bottom line, but audience sentiment as well.

A website/web app has to work flawlessly for its audience regardless of the browser they are on. Lack of browser compatibility can have serious consequences, as these examples show:

  1. Safari ITP impacted the retention of the Adobe Experience Cloud’s opt-out cookie. Apple automatically deleted the cookie that was set to prevent reporting on data, effectively opting the user back into tracking.

  2. Bootstrap, a ground-breaking front-end development framework that saves countless hours of developers’ time, had to maintain a whole Wall of outstanding browser bugs, in the hope that browser vendors would pick up and resolve these issues in the near future.

  3. A simple November 2019 Chrome update, meant to make the browser more resource-efficient, ended up wreaking havoc, especially for those running Chrome on a Windows Server (in an enterprise network setup).

“Employees are being left gawping at blank screens as Chrome suspends their browser throughout the day. According to system administrators speaking to ZDNet, hundreds and thousands of employees couldn’t use Chrome to access the internet following the update.”

Outside of your dev environment, not everyone has caught on to the latest versions of Chrome and macOS. Plenty of users are on browsers other than Chrome—or versions of popular browsers older than the latest releases. You can chalk it up to familiarity and/or resistance to change (along with other reasons).

There’s only one way to minimize browser errors: by testing continuously for cross-browser compatibility.

Continuous testing for cross-browser compatibility

It’s definitely not enough to test on whichever browser you’re developing on and release it into the wild. To deliver quality at speed, you have to test smarter. And to do this, you’ll need:

  1. A list of relevant browsers that will give you over 80% coverage in the market(s) of your choice, updated at regular intervals.

  2. Thorough risk analysis to figure out the features most likely to break, and on which browsers and devices. This post by Hylke de Jong, an Automation Testing Engineer at Wehkamp.nl, details why you should be testing on devices with high risk—even if they aren’t used by a significant percentage of your traffic.

  3. A solid delivery pipeline that lets your engineers continuously test and debug every unit, build, and end-to-end user-flow across each relevant browser/device.
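The first item, a coverage list, can be derived mechanically from market-share data. The sketch below greedily picks the largest browsers until a coverage target is met; the share numbers here are made up for illustration, and real figures would come from your own analytics or a market-share service:

```javascript
// Sketch: pick the smallest set of browsers (by market share) whose
// combined share crosses a coverage target. Shares are illustrative.
const marketShare = [
  { browser: 'Chrome', share: 64 },
  { browser: 'Safari', share: 18 },
  { browser: 'Firefox', share: 4 },
  { browser: 'Edge', share: 4 },
  { browser: 'Samsung Internet', share: 3 },
];

function coverageTargets(shares, targetPercent) {
  const sorted = [...shares].sort((a, b) => b.share - a.share);
  const picked = [];
  let total = 0;
  for (const entry of sorted) {
    if (total >= targetPercent) break;
    picked.push(entry.browser);
    total += entry.share;
  }
  return picked;
}

console.log(coverageTargets(marketShare, 80)); // → [ 'Chrome', 'Safari' ]
```

This only answers "which browsers"; the risk analysis in step 2 then decides which versions and devices of each browser deserve the deepest test coverage.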

Remember this: Cross-browser compatibility isn’t about how many people can enjoy the entire site/app experience. It’s about minimizing the number of people who can’t. For those who have a reputation to maintain, the stakes are even higher. For them, cross-browser compatibility becomes a matter of delivering quality, consistently.

One of them is Nicolás Bevacqua, a JS consultant and Blink contributor. He states that he cares about cross-browser compatibility because:

“…the fact that people were seldom using anything other than Chrome (to access my blog) gave me a good cover story for not pouring any effort into resolving long-standing (albeit minor) CSS rendering issues in Firefox and Safari. However, 20% of visitors were getting a less than ideal experience, which ultimately wasn’t something I could scoff at.

It didn’t speak well of me to have these blatant rendering bugs lying around in my blog when visited through popular non-Chrome browsers. It wasn’t tidy.”


Recommended reading:

Mobile testing strategy