Selenium with Gauge
Your guide to running Selenium WebDriver tests with Gauge on BrowserStack.
Introduction
BrowserStack gives you instant access to our Selenium Grid of 3000+ real devices and desktop browsers. Running your Selenium tests with Gauge on BrowserStack is simple. This guide will help you with:
- Running your first test
- Integrating your tests with BrowserStack
- Marking tests as passed or failed
- Debugging your app
Prerequisites
- BrowserStack Automate account with at least 4 parallel tests. You can sign up for a free trial or purchase a plan.
- Your BrowserStack Username and Access Key, which you can find in your account settings.
- Gauge should be installed and in $PATH. The latest version of Gauge can be downloaded from its website.
- Maven should be installed and in $PATH. The latest version of Maven can be downloaded from its website.
- gauge-maven-plugin to execute specs and manage dependencies using Maven.
Running your first test
Before you run the test, note the following:
- Edit or add capabilities in the W3C format using our W3C capability generator.
- Add the seleniumVersion capability in your test script and set its value to 4.0.0.
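For reference, here is a minimal sketch of what a W3C-format capability block could look like in Java. The browser and OS values are placeholders and the helper class is purely illustrative; generate the exact combination you need with the capability generator.

import java.util.HashMap;

import org.openqa.selenium.MutableCapabilities;

public class W3CCapabilities {

    // Builds a W3C-format capability set; the values below are placeholders.
    public static MutableCapabilities build() {
        MutableCapabilities capabilities = new MutableCapabilities();
        capabilities.setCapability("browserName", "Chrome");

        // BrowserStack-specific options go under the bstack:options key
        HashMap<String, Object> bstackOptions = new HashMap<String, Object>();
        bstackOptions.put("os", "Windows");
        bstackOptions.put("osVersion", "10");
        bstackOptions.put("seleniumVersion", "4.0.0");
        capabilities.setCapability("bstack:options", bstackOptions);

        return capabilities;
    }
}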
To run Selenium tests with Gauge on BrowserStack Automate, follow the steps below:
- Clone the gauge-java-browserstack repository on GitHub, which contains BrowserStack's sample test, using the following commands:
git clone https://github.com/browserstack/gauge-java-browserstack.git
cd gauge-java-browserstack
- Install the dependencies using the following command:
mvn compile
- Update the default.properties file within the gauge-java-browserstack/env/default/ directory with your BrowserStack credentials, as shown below:
# The credentials associated with your BrowserStack account.
BROWSERSTACK_USERNAME = YOUR_USERNAME
BROWSERSTACK_ACCESS_KEY = YOUR_ACCESS_KEY
- Now, execute your first test on BrowserStack using the following command:
mvn test
- View your test results on the BrowserStack Automate dashboard.
Details of your first test
The following is the sample Gauge spec that you ran above. The test searches for “BrowserStack” on Google and checks that the title of the resulting page is “BrowserStack - Google Search”.
Search
======
The following scenario performs a Google search and makes sure that
the results match.
Search for BrowserStack
-------------------
tags: search, smoke
* On the homepage
* Search for term "BrowserStack"
* Make sure the page title is "BrowserStack - Google Search"
package com.browserstack.gauge.pages;
import static org.junit.Assert.assertEquals;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import com.thoughtworks.gauge.Step;
public class SearchSpec {
    private final WebDriver driver;
    private WebElement element;

    public SearchSpec() {
        this.driver = DriverFactory.getDriver();
    }

    @Step("On the homepage")
    public void navigateToHomePage() {
        driver.get("https://www.google.com");
    }

    @Step("Search for term <term>")
    public void searchFor(String term) {
        element = driver.findElement(By.name("q"));
        element.sendKeys(term);
        element.submit();
    }

    @Step("Make sure the page title is <term>")
    public void checkPageTitle(String term) {
        assertEquals(term, driver.getTitle());
    }
}
Integrating your tests with BrowserStack
Your tests integrate with BrowserStack through the DriverFactory.java file, which contains the methods to configure and create the connection with BrowserStack, as shown below:
package com.browserstack.gauge.pages;
import com.thoughtworks.gauge.AfterSpec;
import com.thoughtworks.gauge.BeforeSpec;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;
import java.net.MalformedURLException;
import java.net.URL;
import java.util.concurrent.TimeUnit;
public class DriverFactory {
    private static final String USERNAME = System.getenv("BROWSERSTACK_USERNAME");
    private static final String AUTOMATE_KEY = System.getenv("BROWSERSTACK_ACCESS_KEY");
    private static final String URL = "https://" + USERNAME + ":" + AUTOMATE_KEY + "@hub-cloud.browserstack.com/wd/hub";
    private static WebDriver driver;

    public static WebDriver getDriver() {
        return driver;
    }

    @BeforeSpec
    public void setUp() {
        try {
            DesiredCapabilities caps = new DesiredCapabilities();
            // Capabilities from environment
            caps.setCapability("browser", System.getenv("BROWSER"));
            caps.setCapability("browser_version", System.getenv("BROWSER_VERSION"));
            caps.setCapability("name", "Bstack-[Gauge] Sample Test");
            caps.setCapability("build", System.getenv("BUILD"));
            caps.setCapability("os", System.getenv("OS"));
            caps.setCapability("os_version", System.getenv("OS_VERSION"));
            // Hardcoded capabilities
            caps.setCapability("browserstack.debug", "true");

            URL remoteURL = new URL(URL);
            driver = new RemoteWebDriver(remoteURL, caps);
            driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);
        } catch (MalformedURLException e) {
            System.out.println(e.toString());
        }
    }

    @AfterSpec
    public void tearDown() {
        // quit() closes all windows and ends the remote session
        driver.quit();
    }
}
Marking tests as passed or failed
BrowserStack provides a comprehensive REST API to access and update information about your tests. Shown below is a sample code snippet that lets you mark your tests as passed or failed based on the assertions in your Gauge test cases.
// Method to mark test as passed / failed
public static void mark() throws URISyntaxException, UnsupportedEncodingException, IOException {
    URI uri = new URI("https://YOUR_USERNAME:YOUR_ACCESS_KEY@api.browserstack.com/automate/sessions/<session-id>.json");
    HttpPut putRequest = new HttpPut(uri);

    ArrayList<NameValuePair> nameValuePairs = new ArrayList<NameValuePair>();
    nameValuePairs.add(new BasicNameValuePair("status", "<passed/failed>"));
    nameValuePairs.add(new BasicNameValuePair("reason", "<Your reason goes here>"));
    putRequest.setEntity(new UrlEncodedFormEntity(nameValuePairs));

    HttpClientBuilder.create().build().execute(putRequest);
}
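As a rough sketch of how this could be wired into the sample project, the illustrative markSession helper below (not part of the sample repo) reads the session ID from the running RemoteWebDriver and the credentials from the environment, then sends the same PUT request:

import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;
import java.util.ArrayList;

import org.apache.http.NameValuePair;
import org.apache.http.client.entity.UrlEncodedFormEntity;
import org.apache.http.client.methods.HttpPut;
import org.apache.http.impl.client.HttpClientBuilder;
import org.apache.http.message.BasicNameValuePair;
import org.openqa.selenium.remote.RemoteWebDriver;

public class SessionStatus {

    // Illustrative helper: marks the current BrowserStack session as passed or failed.
    public static void markSession(RemoteWebDriver driver, boolean passed, String reason)
            throws URISyntaxException, IOException {
        String sessionId = driver.getSessionId().toString();
        String username = System.getenv("BROWSERSTACK_USERNAME");
        String accessKey = System.getenv("BROWSERSTACK_ACCESS_KEY");
        URI uri = new URI("https://" + username + ":" + accessKey
                + "@api.browserstack.com/automate/sessions/" + sessionId + ".json");

        HttpPut putRequest = new HttpPut(uri);
        ArrayList<NameValuePair> nameValuePairs = new ArrayList<NameValuePair>();
        nameValuePairs.add(new BasicNameValuePair("status", passed ? "passed" : "failed"));
        nameValuePairs.add(new BasicNameValuePair("reason", reason));
        putRequest.setEntity(new UrlEncodedFormEntity(nameValuePairs));

        HttpClientBuilder.create().build().execute(putRequest);
    }
}

You could call a helper like this from a Gauge @AfterScenario or @AfterSpec hook once the outcome of the scenario's assertions is known.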
You can find the complete REST API reference in our documentation.
Debugging your app
BrowserStack provides a range of debugging tools to help you quickly identify and fix bugs you discover through your automated tests.
Text logs
Text Logs are a comprehensive record of your test. They are used to identify all the steps executed in the test and troubleshoot errors for the failed step. Text Logs are accessible from the Automate dashboard or via our REST API.
Visual logs
Visual Logs automatically capture the screenshots generated at every Selenium command run through your JUnit tests. Visual Logs help with debugging the exact step and the page where failure occurred. They also help identify any layout or design related issues with your web pages on different browsers.
Visual Logs are disabled by default. To enable Visual Logs, set the browserstack.debug capability to true:
caps.setCapability("browserstack.debug", "true");
Sample Visual Logs are available on the Automate Dashboard.
Video recording
Every test run on the BrowserStack Selenium grid is recorded exactly as it is executed on our remote machine. This feature is particularly helpful whenever a browser test fails. You can access videos from Automate Dashboard for each session. You can also download the videos from the Dashboard or retrieve a link to download the video using our REST API.
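If you want the video link programmatically, one approach, sketched below with an illustrative helper and assuming the same sessions endpoint used for marking status above, is to fetch the session details JSON, which includes a link to the recorded video (check the REST API reference for the exact response fields):

import java.net.URI;

import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.HttpClientBuilder;
import org.apache.http.util.EntityUtils;

public class SessionDetails {

    // Illustrative helper: fetches the raw JSON details for a session,
    // which include a link to the recorded video.
    public static String fetch(String sessionId) throws Exception {
        String username = System.getenv("BROWSERSTACK_USERNAME");
        String accessKey = System.getenv("BROWSERSTACK_ACCESS_KEY");
        URI uri = new URI("https://" + username + ":" + accessKey
                + "@api.browserstack.com/automate/sessions/" + sessionId + ".json");
        HttpGet getRequest = new HttpGet(uri);
        return EntityUtils.toString(
                HttpClientBuilder.create().build().execute(getRequest).getEntity());
    }
}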
If you do not need the video recording, you can disable it by setting the browserstack.video capability to false:
caps.setCapability("browserstack.video", "false");
In addition to these logs, BrowserStack also provides Raw Logs, Network Logs, Console Logs, Selenium Logs, Appium Logs, and Interactive sessions. You can find complete details on enabling all the debugging options in our documentation.
Next steps
Once you have successfully run your first test on BrowserStack, you might want to do one of the following:
- Run multiple tests in parallel to speed up the build execution
- Migrate existing tests to BrowserStack
- Test on private websites that are hosted on your internal networks
- Select browsers and devices where you want to test
- Set up your CI/CD: Jenkins, Bamboo, TeamCity, Azure, CircleCI, BitBucket, TravisCI, GitHub Actions