Cross Browser Testing
Here’s what you need to know about cross browser testing: what it is, how to do it right, and why it matters for developers and teams trying to build more browser-agnostic websites.
What is Cross Browser Testing?
Cross browser testing is a type of non-functional testing that lets you check whether your website works as intended when accessed through:
- Different browser-OS combinations, i.e., popular browsers like Firefox, Chrome, Edge, and Safari running on popular operating systems like Windows, macOS, iOS, and Android.
- Different devices, i.e., the smartphones, tablets, desktops, and laptops on which users view and interact with your website.
- Assistive tools, i.e., assistive technologies like screen readers, so that the website remains accessible to differently-abled users.
It’s about shipping releases that are as browser-agnostic as possible, which is key to delivering a uniform user experience on a diverse, ever-growing range of browsers/devices.
Why is Cross Browser Testing Important?
Imagine that you’re trying to access a site that archives every bongo cat meme in existence. Let’s say you’re doing it for the first time from your first ever MacBook Air.
You open Safari, type the URL, press Enter, and wait for the page to load. When it does, none of the GIFs load, and buttons and text are scattered all over the page. You check your connectivity and reload, only to see the same screen.
In the end, you’ll likely do one of two things: assume the issue is temporary and leave, intending to return later, or assume the site is broken and leave to find an alternative.
Browser vendors follow Open Web Standards, but each has its own interpretation of them. Since they each render HTML, CSS, and JavaScript in unique ways, thoroughly debugging your website’s source code is not enough to ensure that your website will look and behave as intended on different browsers (or different versions of a single browser).
So it falls to web developers to abstract away browser differences. Cross browser testing helps with that by pinpointing browser-specific compatibility errors so you can debug them quickly. It helps ensure that you’re not alienating a significant part of your target audience simply because your website does not work on their browser-OS combination.
What Features are Analyzed in a Browser Test?
In principle, compatibility testing could cover everything, but you rarely have the time for that.
To do it right, product teams constrain their testing with a test specification document (test specs), which outlines the broad essentials: a list of features to test, the browsers/versions/platforms to test on in order to meet the compatibility benchmark, test scenarios, timelines, and budget.
You can categorize the features that will undergo testing like this:
- Base Functionality: To ensure that basic functionality works on most browser-OS combinations. For example, you could be testing to verify that (see the sketch after this list):
- All dialog boxes and menus work as intended
- All form fields validate inputs correctly before accepting them
- The website handles first-party cookies (and features that depend on them, like personalization) correctly
- Touch input works seamlessly on smartphones and tablets
- Design: This ensures that the website’s appearance—fonts, images, and layout—matches the specifications shared by the Design team.
- Accessibility: Accounts for compliance with Web Content Accessibility Guidelines (WCAG) to enable differently-abled users to access the website.
- Responsiveness: Verifies that design is fluid and fits different screen sizes/orientations.
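To make the base-functionality category concrete, here is a minimal sketch of one such check, written with Selenium WebDriver in Python. The URL, field names, and expected validation message are hypothetical placeholders; substitute your own page and selectors.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()  # swap in Chrome(), Edge(), or Safari() as needed
try:
    driver.get("https://example.com/signup")  # hypothetical signup page

    # Submit an invalid email and confirm the form rejects it
    driver.find_element(By.NAME, "email").send_keys("not-an-email")
    driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()

    # The error-message selector and wording are assumptions about the page
    error = driver.find_element(By.CSS_SELECTOR, ".field-error")
    assert "valid email" in error.text.lower(), "validation message missing"
finally:
    driver.quit()
```

Running the same script with a different driver class is the simplest way to see whether a browser deviates from the behavior you expect.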
How Do I Select Browsers for Testing?
The sheer number of browsers, devices, and operating systems out there makes it impossible to build for and test on every browser-OS combination that may exist. A more realistic goal is to focus your testing efforts on maximizing your website’s reach within your target market. To do this, you’ll need to lock down the most critical browsers and versions:
- Based on popularity: Select the 10-20 most commonly used browsers, and pick the top two platforms, like Android and iOS, to maximize your reach in any target market. This is typically what B2C (consumer-facing) websites optimize for.
- Based on analysis: Look at your website’s traffic stats as captured by analytics tools (like Google Analytics or Kissmetrics) and break them down by device/browser. The aim is to find out:
- Which browser-OS combinations are most commonly used by your target audience
- What devices your website is generally viewed on
- On the basis of these findings, pick the browser-OS combinations that are most popular with your end-users. A simple rule of thumb is to prioritize testing on any browser-OS combination that gets over 5% share of traffic (see the sketch below).
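As a quick illustration of that rule of thumb, here is a small Python sketch that filters browser-OS traffic shares against a 5% threshold. The numbers are made-up placeholders standing in for an export from your analytics tool.

```python
# Hypothetical traffic shares by browser-OS combination, as a fraction of 1.0
traffic_share = {
    ("Chrome", "Windows"): 0.41,
    ("Safari", "iOS"): 0.22,
    ("Chrome", "Android"): 0.18,
    ("Edge", "Windows"): 0.07,
    ("Firefox", "Windows"): 0.06,
    ("Safari", "macOS"): 0.04,
    ("Samsung Internet", "Android"): 0.02,
}

THRESHOLD = 0.05  # prioritize anything above 5% of traffic

must_test = [combo for combo, share in traffic_share.items() if share >= THRESHOLD]
for browser, os_name in sorted(must_test):
    print(f"Test on {browser} / {os_name}")
```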
To make an informed decision specific to your target audience, refer to your traffic stats and combine those insights with our Test on The Right Devices report, which compiles browser-OS and device usage data across different markets.
The decision of which browsers and platforms to test on usually rests with Business and Marketing teams (or the client). Goals defined by these teams help focus the product/testing team’s efforts on areas that will be most rewarding for the least effort.
How is Cross Browser Testing Done?
Now that you’ve got the essentials covered, you can get around to running a test. Here’s a quick walkthrough of the steps involved:
- Establish a baseline: Before you begin cross browser testing, run all the design and functionality tests on your primary browser, usually Chrome. This will give you an idea of how the website was originally intended to look and behave.
- Create a testing plan and pick the browsers to test on: Use the test specification document to outline exactly what you’ll test. Then, as outlined in the segment above, pick browser-OS combinations to test on based on popularity and site traffic analysis.
- Execution—Automated vs Manual: Manual testing requires human testers to act out test scenarios sequentially; it leaves room for (human) error and, depending on the website and the scenarios to be tested, can take anywhere from a few hours to several weeks. Automated testing ‘automates’ those human interactions via code: a single test script, written by professional QAs using automation tools like Selenium, can execute a test scenario on multiple browsers, as many times as needed, and its precise error reporting makes bugs easier to find and debug. Modern product teams therefore allocate manual testers to exploratory testing, i.e., discovering UX pain points a user might hit while engaging with a touchpoint (for instance, a correctly coded checkout form that doesn’t save input on reload), and automate the rest of the tests for quick, repeatable execution and near-instant feedback (see the sketch after this list).
- Infrastructure: To account for website behavior when browsers run on different operating systems, you’ll need access to different devices. There are several ways to set up your testing infrastructure:
- Use emulators/simulators/virtual machines (VMs) and install browsers on them for testing. This approach is inexpensive, but note that a) it’s not easily scalable, and b) test results are unreliable on virtual mobile platforms (Android and iOS).
- If you have the resources to procure real devices and maintain their integrity over time, set up a device lab of your own.
- Use a cloud-based testing infrastructure (like BrowserStack Live) to run your tests on a remote lab of secure devices and browsers, at a fraction of the cost of setting up your own device lab.
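As promised above, here is a minimal sketch of one automated scenario executed against several browsers through a remote Selenium Grid endpoint, in Python. The grid URL and title check are placeholders; cloud testing services expose their own authenticated, Grid-compatible endpoints plus extra capabilities.

```python
from selenium import webdriver
from selenium.webdriver.chrome.options import Options as ChromeOptions
from selenium.webdriver.edge.options import Options as EdgeOptions
from selenium.webdriver.firefox.options import Options as FirefoxOptions

GRID_URL = "http://localhost:4444/wd/hub"  # placeholder Selenium Grid address

def check_homepage_title(options):
    """Run one scenario on whatever browser the grid matches to `options`."""
    driver = webdriver.Remote(command_executor=GRID_URL, options=options)
    try:
        driver.get("https://example.com")               # placeholder URL
        assert "Example" in driver.title, driver.title  # placeholder check
    finally:
        driver.quit()

# The same scenario, three browser configurations
for opts in (ChromeOptions(), FirefoxOptions(), EdgeOptions()):
    check_homepage_title(opts)
```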
Once the tests are executed, results are shared across teams (using bug filing tools like Jira, Trello, GitHub, etc.). This keeps members of cross-functional teams on the same page and lets them work collaboratively on fixing issues.
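Sharing can itself be automated. The sketch below files a bug through Jira Cloud’s REST API (the create-issue endpoint documented by Atlassian); the site URL, project key, and credentials are placeholders for your own instance.

```python
import requests

JIRA_URL = "https://your-team.atlassian.net/rest/api/2/issue"  # placeholder site
AUTH = ("you@example.com", "your-api-token")                   # placeholder creds

payload = {
    "fields": {
        "project": {"key": "WEB"},  # placeholder project key
        "summary": "Checkout button misaligned on Safari 17 / macOS",
        "description": "Found during a cross-browser run; steps to reproduce attached.",
        "issuetype": {"name": "Bug"},
    }
}

resp = requests.post(JIRA_URL, json=payload, auth=AUTH, timeout=30)
resp.raise_for_status()
print("Filed issue:", resp.json()["key"])  # Jira returns the new issue's key
```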
When is Cross Browser Testing Done?
Depending on your role and workflow, you could be running cross-browser tests:
- During Development: Developers working in Continuous Integration pipelines test new features to make sure they’re cross-browser compatible before pushing the changes to production (see the sketch after this list).
- In Staging/Pre-Release: QA teams do this for every Release Candidate to make sure that no browser compatibility issues crop up in the latest version of the website.
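One common way to wire such checks into a pipeline is a browser matrix in the test suite itself. Here is a hedged sketch using pytest with Selenium; the staging URL is a placeholder, and a real pipeline would run far richer scenarios than a title check.

```python
import pytest
from selenium import webdriver

def make_chrome():
    opts = webdriver.ChromeOptions()
    opts.add_argument("--headless=new")  # CI runners usually have no display
    return webdriver.Chrome(options=opts)

def make_firefox():
    opts = webdriver.FirefoxOptions()
    opts.add_argument("-headless")
    return webdriver.Firefox(options=opts)

BROWSERS = {"chrome": make_chrome, "firefox": make_firefox}

@pytest.fixture(params=sorted(BROWSERS))
def driver(request):
    # One fixture instance per browser in the matrix
    drv = BROWSERS[request.param]()
    yield drv
    drv.quit()

def test_homepage_renders(driver):
    driver.get("https://staging.example.com")  # placeholder staging URL
    assert driver.title, "page rendered no title on this browser"
```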
Who Does Cross Browser Testing?
The short answer: Anyone who designs/develops for the Open Web.
You don’t have to know how to code to make use of interactive cross browser testing tools. BrowserStack Live, for instance, is also used by marketers and web designers to quickly test landing pages and new designs for cross-browser rendering and responsiveness.
Usually, QA teams execute test scenarios on multiple browsers to make sure the build meets browser compatibility benchmarks. UI teams run cross browser tests to find out how the website front-end fares on different devices and orientations.
Summary
Let’s quickly recap the 7 broad steps that are involved in cross browser testing:
- Identify which features you’ll test and write steps to specify the scenarios.
- Identify the browsers and platforms—either by popularity or site traffic analysis—that you’ll test on.
- Pick how you’ll execute the test scenarios—manually or automatically.
- Set up devices/browsers you’ll test on (or connect with a cloud-based provider).
- Execute test scenarios on browsers with the highest share of traffic, then move on to outliers.
- Document and share the test results with teams who can debug/fix issues.
- Continuously run cross browser compatibility tests to ensure that no bugs were missed.
Let’s be honest: if all browser vendors followed open web standards uniformly, cross browser compatibility testing wouldn’t be needed at all. But we live in a world where browsers compete for market share. The onus is on you to deliver an end-user experience that’s consistently delightful across a multitude of browsers and devices.