Integrate Your Test Suite with BrowserStack
Integrate BrowserStack into your test suite using the BrowserStack SDK — a plug-and-play solution that takes care of all the integration steps for you!
Prerequisites
Integration steps
Complete the following steps to integrate your Python test suite using BrowserStack SDK.
Install BrowserStack Python SDK
Execute the following commands to install the BrowserStack Python SDK for plug-and-play integration of your test suite with BrowserStack.
python3 -m pip install browserstack-sdk
browserstack-sdk setup --username "YOUR_USERNAME" --key "YOUR_ACCESS_KEY"
Unable to install BrowserStack SDK?
If you cannot install the BrowserStack SDK due to sudo privilege issues, create a virtual environment and run the installation commands again.
Linux:
python3 -m venv env
source env/bin/activate
Windows:
python3 -m venv env
env\Scripts\activate
Create your BrowserStack config file
Once you have installed the SDK, create a browserstack.yml
config file at the root level of your project. This file holds all the required capabilities to run tests on BrowserStack.
Set platforms to test on
Set the browsers/devices you want to test under the `platforms` object. Our config follows W3C-formatted capabilities.
| Platform | Browser |
|---|---|
| Linux | Firefox |
| Linux | Chrome |
| Linux | Edge |
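The combinations in the table above map onto the `platforms` object roughly as follows. This is a minimal sketch; the browser versions are illustrative placeholders, not values from this guide:

```yaml
# Illustrative platforms fragment for browserstack.yml
platforms:
  - os: Linux
    browserName: Firefox
    browserVersion: latest
  - os: Linux
    browserName: Chrome
    browserVersion: latest
  - os: Linux
    browserName: Edge
    browserVersion: latest
```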
Do you want to dynamically configure platforms?
To dynamically configure platforms across different tests, you can comment out the `platforms` capability while still passing platform-specific capabilities.
BrowserStack Reporting
You can leverage BrowserStack’s extensive reporting features using the following capabilities:
| `buildIdentifier` | Description | Generated build name on BrowserStack dashboard |
|---|---|---|
| `${BUILD_NUMBER}` (default) | If the build is triggered locally, an incremental counter is appended. If the build is triggered with CI tools, the CI-generated build number is appended. | `bstack-demo 1`, `bstack-demo CI 1395` |
| `${DATE_TIME}` | The timestamp of the run is appended to the build name. | `bstack-demo 29-Nov-20:44` |
Advanced use cases for Build Names
Custom formatting of Build Name
Prefix `buildIdentifier` with the desired characters, for example `#` or `:`
buildName: bstack-demo
buildIdentifier: '#${BUILD_NUMBER}'
Re-run tests in a build
You can re-run selected tests from a build using any of the following options:
Option 1: Set the existing build name in the `BROWSERSTACK_BUILD_NAME` environment variable and prepend it to your test run command to re-run tests in the same build:
macOS/Linux:
BROWSERSTACK_BUILD_NAME="bstack-demo 123" browserstack-sdk
Windows PowerShell:
$env:BROWSERSTACK_BUILD_NAME="bstack-demo 123"; browserstack-sdk
Windows cmd:
set BROWSERSTACK_BUILD_NAME="bstack-demo 123" && browserstack-sdk
Option 2: Set the build name as a combination of `buildName` and `buildIdentifier`, as seen on the dashboard, and set `buildIdentifier` as null:
buildName: bstack-demo 123
buildIdentifier: null
Option 3: Set the `buildIdentifier` as the build number or time of the required build as seen on the dashboard:
buildName: bstack-demo
buildIdentifier: 123
Use additional debugging features
By default, BrowserStack provides prettified session logs, screenshots on every failed Selenium command, and a video of the entire test. Additionally, you can enable the following features:
Create browserstack.yml file
Copy the following code snippet and create browserstack.yml
file in the root folder of your test suite.
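A minimal illustrative `browserstack.yml` might look like the following. All values are placeholders or assumptions pulled from the capabilities discussed above, not a canonical config:

```yaml
# Illustrative browserstack.yml — replace placeholder values with your own
userName: YOUR_USERNAME
accessKey: YOUR_ACCESS_KEY
projectName: My Project
buildName: bstack-demo
buildIdentifier: '#${BUILD_NUMBER}'
platforms:
  - os: Linux
    browserName: Chrome
    browserVersion: latest
debug: true        # visual logs
networkLogs: true  # HAR-format network capture
consoleLogs: info  # console log level
```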
Run your test suite
Prepend browserstack-sdk
before your existing run commands to execute your tests on BrowserStack using the Python SDK.
Before
python <path-to-test-files>
After
browserstack-sdk python <path-to-test-files>
Non-SDK integration
If you prefer not to use the SDK, you can integrate your test suite manually.
Setup authentication
Set environment variables for BrowserStack credentials:
# Set these values in your ~/.zprofile (zsh) or ~/.profile (bash)
export BROWSERSTACK_USERNAME=YOUR_USERNAME
export BROWSERSTACK_ACCESS_KEY=YOUR_ACCESS_KEY
It is recommended that you store your credentials as environment variables and use those environment variables in your test script.
Update your test script
a. Use BrowserStack credentials and update the Selenium hub URL
import os
# Other imports and desired_cap definition goes here
BROWSERSTACK_USERNAME = os.environ.get("BROWSERSTACK_USERNAME", "YOUR_USERNAME")
BROWSERSTACK_ACCESS_KEY = os.environ.get("BROWSERSTACK_ACCESS_KEY", "YOUR_ACCESS_KEY")
URL = "https://{}:{}@hub-ft.browserstack.com/wd/hub".format(BROWSERSTACK_USERNAME, BROWSERSTACK_ACCESS_KEY)
driver = webdriver.Remote(command_executor=URL, options=options)
# Rest of the test case goes here
Update your test cases to read BrowserStack credentials from environment variables. Update the Selenium hub URL to the BrowserStack remote hub URL: https://hub-ft.browserstack.com/wd/hub
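The credential lookup and hub URL assembly above can be wrapped in a small helper so a missing variable fails fast instead of producing an unauthenticated URL. This is a sketch; the function name `build_hub_url` is illustrative and not part of any BrowserStack library:

```python
import os

def build_hub_url(host="hub-ft.browserstack.com"):
    """Assemble the BrowserStack remote hub URL from environment variables.

    Raises RuntimeError if either credential is missing, so a typo in the
    variable names surfaces immediately.
    """
    username = os.environ.get("BROWSERSTACK_USERNAME")
    access_key = os.environ.get("BROWSERSTACK_ACCESS_KEY")
    if not username or not access_key:
        raise RuntimeError("Set BROWSERSTACK_USERNAME and BROWSERSTACK_ACCESS_KEY")
    return f"https://{username}:{access_key}@{host}/wd/hub"

# Usage (with credentials exported as shown earlier):
# driver = webdriver.Remote(command_executor=build_hub_url(), options=options)
```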
Migrate your test cases
After you set up authentication in your test scripts, you can add configurations, such as browser-OS combinations, test suite organization details, and the test status you want to track, and then run your tests.
# Add the following options to your test script
from selenium.webdriver.chrome.options import Options
options = Options()
bstack_options = {
'os' : 'Linux',
'browserVersion': 'latest'
}
options.set_capability('bstack:options', bstack_options)
options.set_capability('browserName', 'Chrome')
Organize tests
# Testing the home page
bstack_options = {
'projectName': 'Marketing Website v2',
'buildName': 'alpha_0.1.7',
'sessionName': 'Home page must have a title'
}
options.set_capability('bstack:options', bstack_options)
Use the following capabilities for naming your tests and builds. This ensures effective debugging, test reporting, and build execution time analysis.
| Capability | Description |
|---|---|
| `buildName` | CI/CD job or build name. For example, `Website build #23`, `staging_1.3.27` |
| `sessionName` | Name for your test case. For example, `Homepage - Get started` |
| `projectName` | Name of your project. For example, `Marketing Website` |
- Use a new `buildName` every time you run your test cases. This ensures that sessions are logically grouped under a unique build name and helps you monitor the health of your test suite effectively.
- A build can have a maximum of 1000 tests; after that, a new build is created with `-1` suffixed to the original build name.
Mark test as Passed or Failed
To mark whether your test has passed or failed on BrowserStack, use the JavaScript executor in your test script. You can mark a test as passed or failed based on your test assertions.
The JavaScript method for setting the status and the corresponding reason of the test takes two arguments, `status` and `reason`:

- `status` accepts either `passed` or `failed` as the value.
- `reason` accepts a string value.
For marking test as passed
driver.execute_script('browserstack_executor: {"action": "setSessionStatus", "arguments": {"status":"passed", "reason": "Yaay! my sample test passed"}}')
For marking test as failed
driver.execute_script('browserstack_executor: {"action": "setSessionStatus", "arguments": {"status":"failed","reason": "Oops! my sample test failed"}}')
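The two executor calls above differ only in their arguments, so a small helper can build the payload and keep the embedded JSON well-formed. The helper name `session_status_script` is illustrative, not part of any BrowserStack library:

```python
import json

def session_status_script(status, reason=""):
    """Build the JavaScript-executor payload for marking a session's status.

    status must be "passed" or "failed"; reason is a free-form string.
    """
    if status not in ("passed", "failed"):
        raise ValueError("status must be 'passed' or 'failed'")
    payload = {"action": "setSessionStatus",
               "arguments": {"status": status, "reason": reason}}
    return "browserstack_executor: " + json.dumps(payload)

# Usage inside a test, driven by an assertion outcome:
# try:
#     assert "BrowserStack" in driver.title
#     driver.execute_script(session_status_script("passed", "Title matched"))
# except AssertionError as err:
#     driver.execute_script(session_status_script("failed", str(err)))
```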
Set up debugging capabilities
- Enable visual logs and automatic screenshot capture at every Selenium command by setting the `debug` capability.
- By default, Console Logs with log level `errors` are enabled. Use the `consoleLogs` capability to enable other log levels: `warnings`, `info`, `verbose`, `errors`, and `disable`.
- Capture the browser's performance data, such as network traffic, latency, and HTTP requests and responses in HAR format, by setting the `networkLogs` capability.
bstack_options = {
'debug' : 'true', # to enable visual logs
'networkLogs' : 'true', # to enable network logs to be logged
'consoleLogs' : 'info', # to enable console logs at the info level. You can also use other log levels here
}
options.set_capability('bstack:options', bstack_options)