Integrate your test suite with BrowserStack
BrowserStack’s Pytest SDK supports a plug-and-play integration. Run your entire test suite in parallel with a few steps!
Prerequisites
- An existing automated test suite.
- Pytest v4+, Python 3, and pip3 are installed on your machine.
Integrate Your Test Suite with BrowserStack
Set BrowserStack credentials
Save your BrowserStack credentials as environment variables. It simplifies running your test suite from your local or CI environment.
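For example, on macOS or Linux, add the following lines to your shell profile, replacing the placeholders with the credentials from your BrowserStack account:
export BROWSERSTACK_USERNAME=YOUR_USERNAME
export BROWSERSTACK_ACCESS_KEY=YOUR_ACCESS_KEY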
Install BrowserStack Pytest SDK
Execute the following commands to install BrowserStack Pytest SDK for plug-and-play integration of your test suite with BrowserStack.
python3 -m pip install browserstack-sdk
browserstack-sdk setup --framework "pytest" --username "YOUR_USERNAME" --key "YOUR_ACCESS_KEY"
On Windows (PowerShell), set your credentials as environment variables as follows:
$env:BROWSERSTACK_USERNAME="YOUR_USERNAME"
$env:BROWSERSTACK_ACCESS_KEY="YOUR_ACCESS_KEY"
Update your BrowserStack config file
When you install the SDK, a browserstack.yml config file is created at the root level of your project. This file holds all the required capabilities to run tests on BrowserStack.
Specify platforms to test on
Set the browsers you want to test under the platforms object from the list of supported browsers.
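For example, a platforms entry in browserstack.yml might look like the following sketch (the OS and browser versions shown are illustrative; pick any supported combination):
platforms:
  - os: Windows
    osVersion: 11
    browserName: chrome
    browserVersion: latest
  - os: OS X
    osVersion: Ventura
    browserName: safari
    browserVersion: latest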
BrowserStack Reporting
You can leverage BrowserStack’s extensive reporting features using the following capabilities:
sessionName is the name of your test sessions and is automatically picked from your test class/spec name. It doesn't need to be set manually when using the BrowserStack SDK.
Use additional debugging features
By default, BrowserStack provides prettified session logs, screenshots of every failed command, and a video of the entire test. Additionally, you can enable the following features:
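As a sketch, these can be enabled in browserstack.yml; confirm the exact capability names and values against the capability builder for your SDK version:
debug: true          # visual logs: screenshots of every command
networkLogs: true    # capture network traffic for the session
consoleLogs: info    # console log verbosity (errors by default)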
Use Automate Turboscale
Update browserstack.yml file with selected capabilities
Copy the following code snippet and replace the contents of the browserstack.yml file in the root folder of your test suite.
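A representative browserstack.yml is sketched below; treat the values as placeholders and adjust userName, accessKey, platforms, and parallelsPerPlatform to match your account and plan:
userName: YOUR_USERNAME
accessKey: YOUR_ACCESS_KEY
projectName: Pytest Browserstack
buildName: browserstack-build-parallel
platforms:
  - os: Windows
    osVersion: 11
    browserName: chrome
    browserVersion: latest
parallelsPerPlatform: 1
browserstackLocal: false
debug: true
consoleLogs: info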
Run your test suite
Your test suite is now ready to run on BrowserStack. Use the following command to execute your tests on BrowserStack using the pytest SDK.
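Assuming the SDK's default runner, the command is typically:
browserstack-sdk pytest <path-to-your-test-files>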
If you prefer not to use the SDK, you can integrate your test suite manually.
Set up authentication
Set environment variables for BrowserStack credentials:
# Set these values in your ~/.zprofile (zsh) or ~/.profile (bash)
export BROWSERSTACK_USERNAME=YOUR_BROWSERSTACK_USERNAME
export BROWSERSTACK_ACCESS_KEY=YOUR_BROWSERSTACK_ACCESSKEY
It is recommended that you store your credentials as environment variables and use those environment variables in your test script.
Connect CDP Endpoint
Connect to the CDP endpoint at BrowserStack as shown in the following example:
import json
import subprocess
import urllib.parse

def run_session(playwright, desired_cap):
    # Pass the locally installed Playwright version so BrowserStack matches it remotely
    clientPlaywrightVersion = str(subprocess.getoutput('playwright --version')).strip().split(" ")[1]
    desired_cap['client.playwrightVersion'] = clientPlaywrightVersion
    # Encode the capabilities into the CDP endpoint URL and connect to BrowserStack
    cdpUrl = 'wss://hub-ft.browserstack.com/playwright?caps=' + urllib.parse.quote(json.dumps(desired_cap))
    browser = playwright.chromium.connect(cdpUrl)
    page = browser.new_page()
Update your test cases to read BrowserStack credentials from environment variables, and update your connection to use the BrowserStack Playwright CDP endpoint: wss://hub-ft.browserstack.com/playwright
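As a rough sketch of how this ties together (the capability keys and values shown are illustrative; build your own set from the capability generator), read the credentials from the environment and pass the resulting desired_cap into run_session:
import os
from playwright.sync_api import sync_playwright

desired_cap = {
    'browser': 'chrome',
    'os': 'Windows',
    'os_version': '11',
    'name': 'My first BrowserStack Playwright test',
    'browserstack.username': os.environ['BROWSERSTACK_USERNAME'],
    'browserstack.accessKey': os.environ['BROWSERSTACK_ACCESS_KEY'],
}

with sync_playwright() as playwright:
    run_session(playwright, desired_cap)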
Migrate your test cases
After you have set up authentication in your test scripts, add configurations like browser-OS combinations and desired test statuses. For initial migration, run your build using Chrome or Firefox to isolate issues and simplify debugging. After achieving stability, expand to cross-browser testing.
Add the following options to your test script:
"environments": [
  {
    "os": "Windows",
    "name": "Test on Chrome latest on Windows",
    "browser": "chrome",
    "browser_version": "latest"
  }
]
Organize tests
Use the following capabilities for naming your tests and builds. This ensures effective debugging, test reporting, and build execution time analysis.
"capabilities": {
"projectName": "Pytest Browserstack",
"buildName": "browserstack-build-parallel",
},
| Capability | Description |
| --- | --- |
| buildName | CI/CD job or build name. For example, Website build #23, staging_1.3.27 |
| projectName | Name of your project. For example, Marketing Website |
- Use a new buildName every time you run your test cases. This ensures that sessions are logically grouped under a unique build name and helps you monitor the health of your test suite effectively.
- A build can contain a maximum of 1000 tests; after that, a new build is created with -1 suffixed to the original build name.
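One simple way to get a unique build name on every run is to derive it from a timestamp or your CI run number; a minimal sketch in Python (merge the result into wherever you set buildName):
import time

# Append a timestamp so each run gets its own build on the dashboard
build_name = "browserstack-build-" + time.strftime("%Y%m%d-%H%M%S")
capabilities = {"projectName": "Pytest Browserstack", "buildName": build_name}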
Mark test as Passed or Failed
To mark whether your test has passed or failed on BrowserStack, use the JavaScript executor in your test script. You can mark a test as passed or failed based on your test assertions.
The JavaScript method for setting the status and the corresponding reason of the test takes two arguments, status and reason:
- status accepts either passed or failed as the value.
- reason accepts a string value.
def mark_test_status(status, reason, page):
    # Report the session result to BrowserStack via the JavaScript executor
    page.evaluate("_ => {}",
        "browserstack_executor: {\"action\": \"setSessionStatus\", \"arguments\": {\"status\":\"" + status + "\", \"reason\": \"" + reason + "\"}}")

def log_contextual_info(desc, loglevel, page):
    # Add a custom annotation to the BrowserStack session logs
    page.evaluate("_ => {}",
        "browserstack_executor: {\"action\": \"annotate\", \"arguments\": {\"data\":\"" + desc + "\", \"level\": \"" + loglevel + "\"}}")
Set up debugging capabilities
- Enable visual logs and automatic screenshot capture at every command by setting the debug capability.
- By default, Console Logs with log level errors are enabled. Use the consoleLogs capability to set other log levels: warnings, info, verbose, errors, or disable.
"capabilities": {
"debug": "true",
"consoleLogs": "info",
},