When performing automation testing with pytest, testers often need to focus on executing only the relevant test cases within given time constraints.
Skipping tests in pytest is useful when you temporarily disable certain tests due to known issues, missing dependencies, or unsupported environments. There are multiple ways to skip tests in pytest, including markers, conditions, and command-line options.
Overview
Why Skip Tests in pytest?
- Skip tests that only apply to certain operating systems.
- Bypass tests when required versions aren’t available.
- Skip tests if APIs, databases, or services are down.
- Handle cases where environment variables or settings are absent.
- Skip tests during pre-commit runs or if AWS profiles are unset.
Types of Skipping Tests in pytest
- Unconditional Skipping
- Conditional Skipping with skipif
- Skipping with Custom Conditions or Fixtures
- Skipping Based on Missing Packages
- Skipping for Specific Environments or Configurations
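Besides markers and in-code conditions, the command-line options mentioned earlier can also keep tests from running. Strictly speaking, options such as -m, -k, and --deselect deselect tests rather than report them as skipped, but they serve the same goal of excluding tests for a given run. A minimal sketch, assuming tests have been tagged with a hypothetical slow marker and a hypothetical test_slow_feature test exists:
pytest -m "not slow"
pytest --deselect tests/test_example.py::test_slow_feature
(Custom markers such as slow should be registered in pytest.ini or pyproject.toml to avoid warnings.)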
xfail vs skip in pytest:
skip:
- Used to intentionally bypass a test that is not meant to run under certain conditions (for example, unsupported OS or missing configuration).
- The test is not executed at all.
xfail (expected failure):
- Used when a test is expected to fail due to a known bug or unimplemented feature.
- The test runs, but its failure does not count as a failure in the test report.
In these scenarios, pytest's skip functionality comes to the rescue, helping testers focus only on the relevant test cases.
What is the pytest Framework?
pytest is a powerful, feature-rich testing framework for Python that makes it easy to write simple and scalable test cases.
It is widely used for unit testing, functional testing, and even complex test automation in Python projects.
It is known for its simple syntax, rich test discovery capabilities, and powerful features like fixtures, parameterization, and detailed assertion introspection.
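To make those features concrete, here is a minimal illustrative example (the fixture and test names are assumptions, not from any particular project) that combines a fixture, parameterization, and a plain assert:
import pytest

@pytest.fixture
def base_number():
    # Reusable setup value shared by the parametrized cases below
    return 10

@pytest.mark.parametrize("increment, expected", [(1, 11), (5, 15)])
def test_addition(base_number, increment, expected):
    # Plain assert statements get detailed failure messages from pytest's assertion introspection
    assert base_number + increment == expected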
Read More: Pytest vs Unittest: A Comparison
Advantages of using pytest Framework
pytest provides several advantages over Python’s built-in unittest framework:
- Easy to Write and Read: No need for classes, boilerplate code, or explicit test runners.
- Automatic Test Discovery: Finds test files and functions without extra configurations.
- Better Assertions: Uses Python’s assert statement, providing clear failure messages.
- Fixtures for Setup & Teardown: Reusable, modular, and flexible setup/teardown mechanism.
- Extensibility with Plugins: Supports plugins like pytest-cov (coverage reports) and pytest-mock (mocking).
- Parallel Execution: Run tests faster with pytest-xdist (see the example below).
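For instance, coverage reporting and parallel execution are typically enabled from the command line once the plugins are installed; a quick sketch, where myproject is a placeholder package name:
pip install pytest-xdist pytest-cov
pytest -n auto --cov=myproject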
Read More: Understanding Pytest BDD
Why Skip Tests in pytest?
Here are the common reasons to skip tests in pytest:
1. OS-Specific Scenarios (Windows, Mac, Linux)
Some tests may depend on platform-specific features, libraries, or versions of Python.
For example, a test might be compatible with Linux/macOS but not Windows. In this case, it should be skipped on Windows while still being executed on Linux/macOS.
import pytest
import sys

@pytest.mark.skipif(sys.platform == "win32", reason="Not applicable on Windows")
def test_linux_only_feature():
    assert 1 == 1
2. Incompatible Python or Package Versions
Some tests may require specific Python or package versions to run. If the required version is not met, you can skip tests.
For example: Skip a test on an incompatible Python version.
import pytest
import sys

@pytest.mark.skipif(sys.version_info < (3, 8), reason="Requires Python 3.8 or higher")
def test_python_version():
    assert sys.version_info >= (3, 8)
Also, some tests may rely on optional third-party packages that are not always installed.
For example: A test depends on the numpy package being installed.
The test runs only if numpy is installed.
import pytest

try:
    import numpy
except ImportError:
    numpy = None

@pytest.mark.skipif(numpy is None, reason="Requires NumPy library")
def test_numpy_feature():
    assert numpy.array([1, 2, 3]).sum() == 6
3. External Resource Dependencies (APIs, Databases)
Tests should be skipped instead of failing if an API, database, or external service is down.
For example: Skip if an API is unreachable.
import pytest
import requests

API_URL = "https://example.com/api/status"

def api_available():
    # Plain helper (not a fixture) so skipif can evaluate it at collection time
    try:
        response = requests.get(API_URL, timeout=2)
        return response.status_code == 200
    except requests.RequestException:
        return False

@pytest.mark.skipif(not api_available(), reason="API is unavailable")
def test_external_api():
    response = requests.get(API_URL)
    assert response.status_code == 200
This ensures the test is skipped if the API is down, preventing unnecessary failures.
4. Local Environment and Configuration Issues
Some tests depend on specific local environment configurations, such as environment variables, file system paths, or available services.
For example: Skip test if the environment variable is missing.
import pytest
import os

@pytest.mark.skipif(not os.getenv("API_KEY"), reason="API_KEY environment variable is required")
def test_api_key():
    assert os.getenv("API_KEY") is not None
The test will run only if API_KEY is set.
5. Pre-Commit Checks and AWS Profiles
Before pushing code, developers often run pre-commit checks to enforce code quality, formatting, and security policies.
For example: Running pytest as a pre-commit hook.
- Install pre-commit
pip install pre-commit
- Create a .pre-commit-config.yaml file
repos:
  - repo: local
    hooks:
      - id: pytest
        name: Run pytest before commit
        entry: pytest
        language: system
        pass_filenames: false
- Install the pre-commit hook
pre-commit install
Now, pytest will automatically run before every commit, preventing bad code from being pushed.
Some tests may require AWS credentials or a specific AWS profile to be set.
For example: Skip test if AWS profile is not set
import pytest
import os

@pytest.mark.skipif("AWS_PROFILE" not in os.environ, reason="AWS_PROFILE is not set")
def test_aws_profile():
    assert os.environ["AWS_PROFILE"] == "default"
Read More: Understanding Monkeypatch in Pytest
Unconditional Skipping
Sometimes, you may want to skip a test unconditionally, regardless of the environment, dependencies, or conditions. This can be useful when:
- The test is not ready for execution.
- The feature being tested is deprecated or under development.
- The test is temporarily disabled due to ongoing refactoring.
1. Skipping an Individual Test Function
If you want to decide at runtime whether to skip a test from inside the test function (instead of using a decorator), you can call pytest.skip().
Example: Skip based on a custom condition inside the Test.
import pytest

def test_runtime_skip():
    pytest.skip("Skipping this test at runtime")
    assert True  # This will never execute
2. Skipping All Tests in a Suite
If you want to skip all tests in a file, call pytest.skip() at module level with allow_module_level=True.
Example: Skip all tests in a file.
import pytest

pytest.skip("Skipping this test file", allow_module_level=True)

def test_will_not_run():
    assert True  # This is never executed
Read More: How to Generate Pytest Code Coverage Report
3. Markers for Skipping Tests
- Using @pytest.mark.skip for skipping tests
Example: Skip a test that is not ready
import pytest

@pytest.mark.skip(reason="This test is under development")
def test_unfinished_feature():
    assert False  # Placeholder for future logic
The test is skipped every time, and pytest displays the reason.
- Skipping a Test Class
You can skip an entire test class if all its tests should be disabled.
Example: Skip an entire test class
import pytest

@pytest.mark.skip(reason="Skipping all tests in this class")
class TestDeprecatedAPI:
    def test_old_functionality(self):
        assert False

    def test_legacy_feature(self):
        assert False
Conditional Skipping
In pytest, conditional skipping allows you to skip tests dynamically based on factors like:
- Python version
- Operating system
- Missing dependencies
- Environment variables
- Custom application conditions
1. Skip Based on Python Version
Use @pytest.mark.skipif(condition, reason="...") to skip tests based on specific conditions.
Example: Skip if Running on Python Version Below 3.8
import pytest
import sys

@pytest.mark.skipif(sys.version_info < (3, 8), reason="Requires Python 3.8 or higher")
def test_python_version():
    assert sys.version_info >= (3, 8)
2. Skip Tests on Specific Platforms
You can skip tests on specific platforms such as Windows, Linux, or macOS using pytest's skipif marker.
Example: Skip Tests on Windows
import pytest
import sys

@pytest.mark.skipif(sys.platform == "win32", reason="Does not work on Windows")
def test_non_windows_feature():
    assert True
3. Handle Missing Imports and Dependencies
Use @pytest.mark.skipif to check if a package is available and skip the test if it is missing.
Example: Skip Test if numpy is Not Installed
import pytest

try:
    import numpy
    numpy_installed = True
except ImportError:
    numpy_installed = False

@pytest.mark.skipif(not numpy_installed, reason="Requires numpy package")
def test_numpy_function():
    arr = numpy.array([1, 2, 3])
    assert arr.sum() == 6
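The overview also mentions skipping with custom conditions or fixtures. One way to do this is a fixture that calls pytest.skip() when a required resource is unavailable, so every test requesting the fixture is skipped automatically. A minimal sketch, where the db_connection fixture and the localhost:5432 check are illustrative assumptions:
import socket

import pytest

@pytest.fixture
def db_connection():
    # Skip any test that uses this fixture if nothing is listening on the database port
    try:
        socket.create_connection(("localhost", 5432), timeout=1).close()
    except OSError:
        pytest.skip("Database is not reachable on localhost:5432")
    return "connected"  # placeholder for a real connection object

def test_query(db_connection):
    assert db_connection == "connected"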
Read More: How to use pytest_addoption
XFail in Pytest
In pytest, xfail (short for expected failure) is used when a test is known to fail due to a bug, incomplete feature, or external issue. Instead of failing the test suite, xfail marks the test as expected to fail.
- Using @pytest.mark.xfail to mark expected failures
The simplest way to use xfail is to mark a test that is expected to fail.
Example: Test expected to fail
import pytest

@pytest.mark.xfail(reason="Feature not implemented yet")
def test_unfinished_feature():
    assert 1 + 1 == 3  # Incorrect assertion
The test runs but does not count as a failure in the test summary.
- Using @pytest.mark.xfail(condition, reason="...") for Conditional XFail
One can conditionally mark a test as xfail based on system conditions like OS, Python version, or package versions.
Example: Mark a Test as xfail on Python < 3.8
import pytest
import sys

@pytest.mark.xfail(sys.version_info < (3, 8), reason="Fails on Python < 3.8")
def test_requires_new_python():
    assert sys.version_info >= (3, 8)
The test is expected to fail only on Python < 3.8.
- Using xfail(strict=True) to Fail the Test If It Unexpectedly Passes
By default, xfail does not fail the test suite if the test unexpectedly passes.
However, if you want the test to fail if it passes, use strict=True.
Example: Failing if a supposedly broken test passes
import pytest

@pytest.mark.xfail(reason="Known bug in function", strict=True)
def test_broken_logic():
    assert 2 + 2 == 5  # Incorrect assertion
If the test unexpectedly passes, pytest fails the test suite.
- Using pytest.xfail() Inside a Test Function
Instead of marking a test before execution, you can conditionally call pytest.xfail() at runtime.
Example: Mark a test as an expected failure at runtime if a bug exists
import pytest

def test_runtime_xfail():
    if True:  # Replace with your condition
        pytest.xfail("This test is currently broken")
    assert 1 + 1 == 2
The test starts but is immediately marked as an expected failure.
XFail vs Skip in pytest
Here are the differences between xfail and skip in pytest:
| Feature | @pytest.mark.xfail (XFail) | @pytest.mark.skip (Skip) |
| --- | --- | --- |
| Purpose | Marks tests expected to fail due to a bug, incomplete feature, or external issue | Completely skips a test that should not run at all |
| Test Execution | The test runs but is recorded as an expected failure | The test is not executed |
| Failure Impact | If the test fails, it does not count as a failure in the test suite (unless strict=True) | The test is simply ignored |
| Pass Behavior | If the test unexpectedly passes, it is marked as XPASS (unless strict=True, which makes it fail) | The test never runs, so it cannot pass |
| Conditionally Applied? | Yes, using @pytest.mark.xfail(condition, reason="...") | Yes, using @pytest.mark.skipif(condition, reason="...") |
| Use Case Example | Known bugs, incomplete features, failing dependencies | Unsupported platforms, missing environment variables, external API downtime |
| Alternative Function | pytest.xfail("reason") inside the test body | pytest.skip("reason") inside the test body |
Best Practices for Skipping Tests
Skipping tests should be done transparently and with clear reasoning so future developers can understand why a test was skipped. Poorly documented skips can lead to hidden bugs or unnecessary skipped tests.
1. Always Provide a Clear Reason for Skipping a Test
As a best practice, always explain why the test is skipped, ideally linking to a tracking issue.
import pytest

@pytest.mark.skip(reason="Feature not implemented yet, see issue #123")
def test_feature():
    pass  # Implementation pending
2. Use @pytest.mark.skipif for Conditional Skips Instead of @pytest.mark.skip
Use skipif when a test might be valid in certain conditions (for example, OS, dependencies, configurations).
Only skip when necessary instead of skipping unconditionally.
import pytest
import sys

@pytest.mark.skipif(sys.platform == "win32", reason="Feature not supported on Windows")
def test_linux_feature():
    assert True
3. Use pytest.skip() Inside the Test for Runtime Conditions
If a skip depends on a runtime check, use pytest.skip() inside the test instead of @pytest.mark.skip. This is useful when the condition can only be evaluated at runtime.
Example: Skip if an API key is missing
import pytest
import os

def test_requires_api_key():
    if "API_KEY" not in os.environ:
        pytest.skip("Skipping: API_KEY is missing")
    assert True
4. Use pytest.importorskip() for Missing Dependencies
Instead of manually checking for missing dependencies or package imports, let pytest.importorskip() handle them automatically.
import pytest

pandas = pytest.importorskip("pandas", reason="pandas is required for this test")

def test_pandas_function():
    df = pandas.DataFrame({"A": [1, 2, 3]})
    assert df.shape == (3, 1)
5. Make Skipped Tests Visible in the Test Report
By default, the reasons for skipped tests are not shown in the output. Use pytest -rs to show why tests were skipped. Always review skipped tests to ensure they are necessary and not hiding bugs.
Example:
pytest -rs
Sample output:
SKIPPED [1] test_example.py: test_linux_feature – Feature not supported on Windows
SKIPPED [1] test_example.py: test_requires_api_key – Skipping: API_KEY is missing
6. Avoid Overusing Skips, as they can hide problems
Skipping too many tests can lead to missed bugs and undetected regressions.
Before skipping a test, ask:
- Can the issue be fixed instead of skipped?
- Does the test need conditional execution instead?
- Is there an xfail case instead of skipping?
Skip only when absolutely necessary. Use xfail if tracking a known failure is more appropriate.
Conclusion
The pytest framework provides a flexible and powerful way to write, organize, and execute tests efficiently. Skipping tests is an essential feature that helps handle platform dependencies, missing configurations, external resource availability, and known issues without breaking the entire test suite. However, improper use of skips can lead to hidden bugs, overlooked failures, and poor test coverage.
To ensure nothing goes unnoticed, integrate your test suite with BrowserStack Test Observability, a smart, visual platform that helps you track skipped tests, analyze trends, and uncover issues faster with actionable insights.