More often than not, testers have to deal with a large codebase comprising hundreds or even thousands of tests. Most tests validating website or app functionality must be rerun on different platforms (devices, browsers, operating systems).
However, running so many tests across different platforms can be immensely difficult, tedious, and error-prone without the right strategy and tooling. The best and easiest way to test under such conditions is parallel testing.
This article explores how to run parallel tests with CircleCI, a particularly popular and useful CI/CD tool.
What is Parallel Testing?
In parallel testing, you test different modules or applications on multiple browser-device-OS combinations simultaneously rather than one after another. For example, when two versions of software are available and both must be examined to ensure stability and compatibility, running tests on both versions at the same time makes it easier to get results and detect bugs faster.
Parallel testing reduces execution time and effort, resulting in faster time to delivery. It is particularly useful in the case of cross browser testing, compatibility testing, localization, and internationalization testing.
Parallel CI Test Automation Workflow
Implementing a parallel CI test automation workflow is essential to enhance the efficiency of continuous integration pipelines. By dividing test suites into smaller chunks and running them in parallel across multiple environments, teams can achieve faster feedback, reduced bottlenecks, and overall improvement in productivity.
Steps to Implement Parallel CI Test Automation Workflow:
- Analyze Your Test Suite: Break down your test suite to identify slow-running and dependent tests. Categorize them based on execution time and priority.
- Enable Parallel Execution: Configure your CI pipeline to run multiple jobs in parallel. Most CI tools, like CircleCI, Jenkins, or GitHub Actions support this feature with minimal setup.
- Split Tests Dynamically: Use test splitting strategies (e.g., time-based, size-based, or file-based) to evenly distribute tests across available environments. Tools like CircleCI’s test splitting or custom scripts can automate this.
- Optimize Environment Setup: Ensure test environments are consistent and lightweight for faster setup times. Tools like Docker or pre-built environments can help streamline this process.
- Monitor and Fine Tune: Continuously monitor test execution times, identify bottlenecks, and refine your splitting logic or infrastructure to maximize efficiency.
Read More: How to perform Test Automation with CircleCI
Benefits of Parallel CI Test Automation
- Speed: Reduces test cycle time significantly, providing rapid feedback to developers.
- Scalability: Easily handles larger test suites as projects grow.
- Reliability: Isolates flaky tests and minimizes risks of failures affecting the entire suite.
Read More: How to avoid Flaky Tests?
How to run Tests in Parallel with CircleCI
Before exploring parallel testing with CircleCI, it’s worth making a quick note on why it’s important to run parallel tests on real browsers and devices.
While it may seem easier to download and test on emulators and simulators, they are riddled with inadequacies that prevent them from providing a reliable test environment. For example, they cannot mimic low network conditions, low battery, or incoming calls. In general, they cannot replicate all features of real browsers and devices, making the results of tests run on them prone to serious error.
Instead, use a real device cloud like the one offered by BrowserStack. With 3500+ real browsers and devices on the cloud for instant, on-demand access, you can run extensive manual and automated tests on whatever browser, device or OS the project requires.
Simply sign up for free, log in, select the browser-device-OS combinations to be tested on, feed in the relevant test scripts, and let the tests run themselves via automation.
Now, on to CircleCI.
CircleCI provides us with capabilities to run tests in parallel and achieve faster completion.
When you run your tests with CircleCI, they usually run on a single VM. Now, the more tests, the more time it will take to complete on one machine. In order to reduce this time, you can run tests in parallel by distributing them across multiple separate executors.
To do this, you need to specify the parallelism level, which defines how many separate executors should be spun up for your job. After that, to separate the test files, you can use either the CircleCI CLI or environment variables to configure each machine individually.
How to Set up Parallelism in CircleCI
Test suites are usually defined at the job level in your .circleci/config.yml file. So let’s first set up the parallelism level on your config file.
This parallelism key defines how many independent executors CircleCI will create to run your tests. To do this, you just need to add this key before calling the steps to run your tests.
Depending on your CircleCI’s pricing plan, the parallelism value can be increased to match your requirements.
To run a job’s steps in parallel, set the parallelism key to a value greater than 1.
# ~/.circleci/config.yml
version: 2
jobs:
  test:
    docker:
      - image: cimg/<language>:<version TAG>
        auth:
          username: yourdockerhub-user
          password: $DOCKERHUB_PASSWORD
    parallelism: 4
Now, to split the actual tests, the next step is to use the CircleCI CLI (Command Line Interface) to call additional commands.
Example of Parallel Test Execution with CircleCI
Running tests in parallel using CircleCI helps speed up your CI/CD pipeline by splitting your tests across multiple executors (machines).
Below is a step-by-step setup and explanation of how to configure CircleCI to run tests in parallel:
Step 1: Create or Modify .circleci/config.yml
In CircleCI, the configuration file (config.yml) defines how your jobs and workflows run. To set up parallel test execution, you need to modify this file.
Here’s an example config.yml file that demonstrates parallel test execution:
version: 2.1

jobs:
  test:
    docker:
      - image: cimg/python:3.8  # Use a Docker image with your test environment
    parallelism: 4              # Number of parallel executors (jobs) for this job
    steps:
      - checkout
      - run:
          name: Install dependencies
          command: pip install -r requirements.txt
      - run:
          name: Run tests
          command: pytest tests/  # Run tests using pytest

workflows:
  version: 2
  test:
    jobs:
      - test  # Run the 'test' job defined above
Step 2: Understanding the Configuration
1. Version and Jobs:
- The version: 2.1 indicates you’re using CircleCI 2.1 configuration syntax.
- The jobs section defines individual tasks in the pipeline. Here, we have a job called test.
2. Docker Executor:
- This job uses the Docker executor (docker), which specifies a Docker image to run your tests. In this case, the image used is cimg/python:3.8, which provides a Python 3.8 environment.
3. Parallelism:
- The parallelism: 4 line is crucial—it defines how many parallel instances of this job CircleCI should run. Here, we’ve set it to 4, meaning CircleCI will run 4 parallel jobs to execute your tests.
4. Steps:
- checkout: This step checks out your code from the repository.
- run: These steps install dependencies (pip install -r requirements.txt) and run the tests using pytest on the tests/ directory.
5. Workflows:
- The workflows section defines how jobs are organized and executed. In this case, the test workflow runs the test job.
Step 3: Enable Test Splitting (Optional)
If you have a large number of tests, it can be beneficial to split them across the parallel jobs so that each job has an equal load. CircleCI provides a built-in test-splitting feature to do this automatically. You can split your tests alphabetically by name (the default), by file size, or by using historic timing data.
Here’s how to enable test splitting:
jobs:
  test:
    docker:
      - image: cimg/python:3.8
    parallelism: 4
    steps:
      - checkout
      - run:
          name: Install dependencies
          command: pip install -r requirements.txt
      - run:
          name: Split and run tests in parallel
          command: |
            # Glob the test files, split them across containers by timing data,
            # and pass only this container's share to pytest
            TEST_FILES=$(circleci tests glob "tests/**/test_*.py" | circleci tests split --split-by=timings)
            pytest $TEST_FILES
The circleci tests split command splits your test files based on timing data from previous runs (collected via the store_test_results step, covered later), so each parallel job gets a roughly equal share of the total test time.
Step 4: Push to CircleCI
Once you’ve set up the config.yml, push your changes to your Git repository. CircleCI will automatically detect the configuration file and trigger the pipeline.
Step 5: View Parallel Test Results
- After your tests run, you can see the results in the CircleCI dashboard. CircleCI will display the execution times and any test failures or successes across all parallel jobs.
- You can also integrate with BrowserStack Test Management to streamline the planning, execution, and tracking of test cases. Additionally, BrowserStack Test Observability offers a centralized view of your testing efforts by consolidating the results of all parallel jobs to enhance debugging.
How to split test files using CircleCI CLI
Splitting tests is an important practice to improve the efficiency of your CI/CD pipeline. By breaking your test suite into smaller, more manageable parts, you can take advantage of parallel execution to speed up testing and streamline your development process.
CircleCI supports automatic test allocation across your containers. The allocation is based on filename or class name, depending on the requirements of the test-runner used. It requires the CircleCI CLI, which is automatically injected into your build at run-time.
The commands that will be used for splitting the test files are glob and split.
- A glob is a pattern that matches a set of filenames against an expression you specify.
- The split command, by default, will split your tests according to a list of filenames, but you can also use other strategies such as splitting it by timing data or file size.
By using these two commands, you can split your tests across multiple executors that run independently of each other.
Globbing the tests
In order to assist in defining your test suite, the CircleCI CLI supports globbing test files using the following patterns:
- * matches any sequence of characters (excluding path separators)
- ** matches any sequence of characters (including path separators)
- ? matches any single character (excluding path separators)
- [abc] matches any character (excluding path separators) against characters in brackets
- {foo,bar,...} matches a sequence of characters if any of the alternatives in braces matches
So to glob test files, pass one or more patterns to the circleci tests glob command.
Now, let’s look at how to achieve this in our configuration file.
circleci tests glob "tests/unit/*.java" "tests/functional/*.java"
You can use the glob command in your config file together with the echo command, as shown below, to check the results of the pattern match:
# ~/.circleci/config.yml
version: 2
jobs:
  test:
    docker:
      - image: cimg/<language>:<version TAG>
        auth:
          username: mydockerhub-user
          password: $DOCKERHUB_PASSWORD  # context / project UI env-var reference
    parallelism: 4
    steps:
      - run:
          command: |
            echo $(circleci tests glob "foo/**/*" "bar/**/*")
            circleci tests glob "foo/**/*" "bar/**/*" | xargs -n 1 echo
Splitting the tests
As mentioned, the split command splits your tests according to a list of filenames by default, but you can also use other strategies such as splitting by timing data or file size. Let’s discuss each of the splitting techniques.
Split By Name
This is the default splitting technique. If you don’t specify a method using the --split-by flag, CircleCI expects a list of filenames/classnames and splits them alphabetically by test name.
There are a few ways to provide this list:
- Create a text file with test filenames
circleci tests split test_filenames.txt
- Provide a path to the test files
circleci tests split < /path/to/items/to/split
- Pipe a glob of test files
circleci tests glob "test/**/*.java" | circleci tests split
Once the list of names has been provided, the CLI looks up the number of available containers, along with the current container index, and then uses a deterministic splitting algorithm to split the test files across all available containers.
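To get a feel for how deterministic the split is, you can invoke the command with explicit container values. The sketch below assumes the build-agent CLI accepts --total and --index overrides (which normally default to the CIRCLE_NODE_TOTAL and CIRCLE_NODE_INDEX environment variables covered later); the test filenames are purely illustrative.
# List of test files to distribute (illustrative names)
printf "test_login.py\ntest_cart.py\ntest_checkout.py\ntest_search.py\n" > test_filenames.txt

# Simulate a 2-container job: each index receives a disjoint, stable subset
circleci tests split --total=2 --index=0 test_filenames.txt   # files for container 0
circleci tests split --total=2 --index=1 test_filenames.txt   # files for container 1
Running the same command again with the same inputs returns the same subsets, which is what makes the allocation deterministic.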
Split By Time
This is the best way to optimize your test suite across a set of parallel executors. It ensures that the tests are split as evenly as possible, leading to a reduced overall test time.
To split by test timings, use the --split-by flag with the timings split type. The available timing data will then be analyzed and your tests will be split across the parallel-running containers as evenly as possible.
circleci tests glob "**/*.go" | circleci tests split --split-by=timings
On each successful run of a test suite, timing data is saved from the directory specified by the path in the store_test_results step. This timing data records how long each test took to complete, per filename or classname, depending on the language used.
Note that if you do not use store_test_results, there will be no timing data available for splitting your tests.
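As a minimal sketch of how that timing data could be collected, assuming a pytest suite that writes a JUnit-XML report into a test-results directory (the paths and report name below are illustrative, not prescribed by CircleCI):
steps:
  - checkout
  - run:
      name: Run this container's share of the tests
      command: |
        TEST_FILES=$(circleci tests glob "tests/**/test_*.py" | circleci tests split --split-by=timings)
        mkdir -p test-results
        # pytest's built-in --junitxml flag writes a JUnit-XML report for CircleCI to ingest
        pytest --junitxml=test-results/junit.xml $TEST_FILES
  # Upload the report so timing data is available to future --split-by=timings runs
  - store_test_results:
      path: test-results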
The CLI expects both filenames and classnames to be present in the timing data produced by the test suite. Splitting defaults to the filename, but you can specify classnames by using the --timings-type flag.
cat my_java_test_classnames | circleci tests split --split-by=timings --timings-type=classname
If timing data is found for only some of your tests, a small random value is automatically assigned to any test without timing data. You can override this assigned value with a specific value using the --time-default flag.
circleci tests glob "**/*.rb" | circleci tests split --split-by=timings --time-default=10s
Additionally, if you need to manually store and retrieve timing data, use the store_artifacts step.
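A rough sketch of that step, assuming your timing files are written to a test-results directory (the path and destination label are illustrative):
steps:
  # ... run your tests and write timing files into ./test-results ...
  - store_artifacts:
      path: test-results
      destination: timing-data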
Splitting by filesize
You can also split tests by file size. To do this, use the --split-by flag with the filesize split type.
circleci tests glob "**/*.go" | circleci tests split --split-by=filesize
How to split tests using environment variables
CircleCI provides two environment variables that can be used in place of the CLI to configure each container individually.
They are CIRCLE_NODE_TOTAL and CIRCLE_NODE_INDEX.
- CIRCLE_NODE_TOTAL is the total number of parallel containers being used to run your job.
- CIRCLE_NODE_INDEX is the index of the specific container that is currently running. Together, these environment variables give you full control over parallelism.
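As a rough sketch of how these variables can be used without the CLI, the shell snippet below assigns every Nth test file to the current container; the tests/ layout and the pytest runner are assumptions for illustration only.
#!/usr/bin/env bash
set -e

# Assign every Nth test file to this container, where N = CIRCLE_NODE_TOTAL
i=0
FILES=()
for f in $(find tests -name "test_*.py" | sort); do
  if [ $(( i % CIRCLE_NODE_TOTAL )) -eq "$CIRCLE_NODE_INDEX" ]; then
    FILES+=("$f")
  fi
  i=$(( i + 1 ))
done

# Run only this container's share of the tests
pytest "${FILES[@]}"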
How to run split tests
Now that the tests have been globbed and split, it’s time to run them. To combine test grouping with test execution, consider saving the grouped tests to a file and then passing that file to your test runner.
circleci tests glob "test/**/*.rb" | circleci tests split > /tmp/tests-to-run
bundle exec rspec $(cat /tmp/tests-to-run)
The contents of the file /tmp/tests-to-run will be different in each container, based on $CIRCLE_NODE_INDEX and $CIRCLE_NODE_TOTAL.
Applying test parallelism in CircleCI is beneficial regardless of your underlying technical approach: it speeds up your overall execution time and, in some cases, can even cut that time in half.
Utilize the steps in this article to run parallel tests in CircleCI, thus getting faster results and integrating speed into your CI/CD pipeline with ease.
Test Reporting & Management
Test reporting and management are key components of any CI/CD pipeline, ensuring that test results are easily accessible and actionable. When running automated tests, especially in parallel, managing and consolidating the results can be challenging. This is where BrowserStack Test Management comes in, offering a streamlined way to manage and report test results.
BrowserStack Test Management enables the integration of automated test run results through:
- Uploading JUnit-XML or BDD-JSON reports generated by TestNG using simple bash commands.
- BrowserStack Test Observability via the BrowserStack SDK.
This ensures that your test results are automatically pushed to BrowserStack, providing a unified view of all your test executions.
Steps to Generate and Upload JUnit-XML Reports with TestNG:
1. Execute TestNG Test Cases: First, set up your TestNG project and execute the test cases.
Here’s an example of a basic TestNG test case:
package com.browserstack;

import org.testng.Assert;
import org.testng.Reporter;
import org.testng.annotations.Test;

public class BStackDemoTest {

    @Test
    public void testUpdateUserEmail() {
        Reporter.log("[[PROPERTY|id=TC-1]]\n", true);
        // Test logic
    }

    @Test
    public void addToCart() {
        Reporter.log("[[PROPERTY|id=TC-2]]\n", true);
        // Test logic
    }

    @Test
    public void checkout() {
        Reporter.log("[[PROPERTY|id=TC-3]]\n", false);
        // Test logic
    }
}
2. Generate JUnit-XML Reports: TestNG automatically generates JUnit-XML reports in the test-output folder after test execution.
Here’s a sample JUnit-XML report:
<?xml version="1.0" encoding="UTF-8"?>
<testsuite hostname="Sh*****" failures="0" tests="3" name="com.browserstack.BStackDemoTest"
           time="0.007" errors="0" timestamp="2023-04-20T10:46:28 IST" skipped="0">
  <testcase classname="com.browserstack.BStackDemoTest" name="testUpdateUserEmail" time="0.000" />
  <testcase classname="com.browserstack.BStackDemoTest" name="addToCart" time="0.006" />
  <testcase classname="com.browserstack.BStackDemoTest" name="checkout" time="0.001" />
</testsuite>
3. Upload the JUnit-XML Report to BrowserStack: To upload the generated test report to BrowserStack Test Management, you need to set up the necessary environment variables in your terminal:
export TEST_MANAGEMENT_API_TOKEN="*************28a42"
export TEST_MANAGEMENT_PROJECT_NAME="<Project Name>"
export JUNIT_XML_FILE_PATH="<Report Path>"
export TEST_RUN_NAME="<Test Run Name>"
Once the variables are set, you can upload the report using a simple bash command with curl:
curl -k -X POST https://test-management.browserstack.com/api/v1/import/results/xml/junit \
  -u $TEST_MANAGEMENT_API_TOKEN \
  -F project_name="$TEST_MANAGEMENT_PROJECT_NAME" \
  -F "file_path=@$JUNIT_XML_FILE_PATH" \
  -F test_run_name="$TEST_RUN_NAME"
After the upload completes, you will receive a confirmation message along with a URL to access the test run report:
{
  "message": "File uploaded successfully.",
  "url": "https://test-management.browserstack.com/projects/<project id>/test-runs/<test run id>",
  "success": true
}
4. View Test Results in BrowserStack Test Management: Once the test report is uploaded, you can log in to BrowserStack Test Management and view the results.
Navigate to your project, click on “Test Runs” and open the test run generated from your automated test execution.
All the test cases and their respective results will be displayed here, making it easy to analyze, debug, and collaborate.
Read More: Top 15 Debugging Tools
Conclusion
Applying test parallelism with CircleCI is a powerful way to enhance the efficiency of your testing process. By running multiple tests in parallel, you can significantly reduce execution time, sometimes cutting it in half, allowing for faster feedback and quicker releases. Integrating parallel tests into your CI/CD pipeline not only speeds up the development cycle but also improves resource utilization and scalability.
By following the steps in this article, you can easily set up parallel testing in CircleCI and integrate it with BrowserStack to view results in a centralized dashboard. BrowserStack Test Management provides a seamless and intuitive way to track, analyze, and collaborate on test results, and it integrates natively with BrowserStack Test Observability for efficient debugging. This will help optimize testing workflows, leading to faster and more reliable software delivery.
Frequently Asked Questions
1. What is the concurrency limit in CircleCI?
Concurrency in CircleCI is the number of tasks being executed at any point in time. For example, CircleCI’s Free plan provides a concurrency limit of 30, meaning you can run up to 30 tasks at the same time.
2. What is the maximum runtime for CircleCI?
The maximum runtime depends on the plan you’re using. Jobs have a maximum runtime of 1 (Free), 3 (Performance), or 5 (Scale) hours depending on the pricing plan.