Generate test cases using AI
Learn how AI can assist you with creating test cases in BrowserStack Test Management.
Manually creating test cases can be complex and time-consuming. To simplify this task, BrowserStack Test Management offers AI-powered test case generation. Now, you can leverage generative AI to swiftly generate meaningful test cases based on the context you provide, accelerating your testing process and ensuring comprehensive coverage.
Key features
- Save time: Quickly generate test cases without the manual effort of authoring them.
- Enhanced coverage: Use AI suggestions to help ensure all scenarios are tested.
- Flexible input options: Provide context for test case generation through prompts, existing folders, and requirement files.
How to generate AI-powered test cases
Follow these steps to generate test cases based on what you want to test.
- Navigate to the test cases list view in your project.
- Click Generate with AI at the top-right.
- On the generation screen, provide input for the AI using one or more of the following methods:
  - Using a prompt: Enter a detailed description of the test case scenarios you need (see the example prompt after these steps). The prompt has a limit of 30,000 characters.
  - Selecting a folder: Select an existing folder containing related test cases to give the AI additional context. This helps avoid generating duplicate test cases and ensures a consistent writing style. The selected folder is also the destination where the generated test cases are saved.
  - Uploading a requirements file: Upload a requirements document (for example, a PDF file) containing detailed requirement specifications. This file is attached to each generated test case. You can provide both a prompt and a requirements document for more comprehensive input.
- Click Generate Test Cases to create test cases based on your input. The AI generates test cases across scenarios, with all test cases selected by default and categorized accordingly.
- Review the generated test cases and deselect any you do not want to include.
- Click Add Test Cases to include the selected test cases in your suite.
- To discard and start over, click Start Over and choose any method to regenerate test cases.
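The quality of the generated test cases depends on how specific your prompt is. As a purely illustrative sketch (the feature and all details below are hypothetical, not product output), a prompt might look like:

```
Generate test cases for the "Forgot Password" flow of a web application.
The flow: the user clicks "Forgot Password" on the login page, enters a
registered email address, receives a reset link valid for 30 minutes, and
sets a new password that must be at least 8 characters long.

Cover positive scenarios, negative scenarios (unregistered email, expired
link, weak password), and edge cases (multiple reset requests, reusing a
link after a successful reset).
```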
The newly added test cases come with default field values:
- State: Set to Active.
- Automation Status: Set to Not Automated.
- Priority: Set to Medium.
- Tags: Includes the tag AI Generated.
You have now successfully generated and added AI-powered test cases to your test case repository, tailored to your specific requirements and context.
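For illustration only (the scenario below is invented, not actual product output), a generated test case with its default field values might look like this:

```
Title:             Verify password reset with a registered email address
State:             Active
Automation Status: Not Automated
Priority:          Medium
Tags:              AI Generated

Steps:
  1. Open the login page and click "Forgot Password".
  2. Enter a registered email address and submit.
  3. Open the reset link from the email and set a valid new password.

Expected Results:
  1. The "Forgot Password" form is displayed.
  2. A confirmation message appears and a reset email is sent.
  3. The password is updated and the user can log in with it.
```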
Autofill test case details
Test Management’s AI assistant helps you quickly and accurately autofill details such as steps and expected results for new test cases, and update them when you edit existing ones. As you type, the AI infers patterns from your previous test cases and suggests how to complete the remaining fields.
Whether you are creating a test case, editing one, or have just generated test cases with AI, you can let the AI fill in the remaining details based on your existing test cases. Follow the steps below to autofill test case details.
- Enter a Title for your test case and click the Autofill Details icon beside it. As you begin filling in test case fields, the AI offers context-aware suggestions to complete the remaining fields.
- The AI assistant fills in details such as Description, Test Steps & Expected Results, Preconditions, Priority, and Type of Test Case.
- Click Create or Update to accept and save the AI-assisted changes.
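As a hypothetical illustration of this flow (all field values below are invented for the example), entering only a title could produce suggestions like:

```
You enter:
  Title: Verify login fails with an incorrect password

AI suggests:
  Description:       Ensure an error is shown when a wrong password is used.
  Preconditions:     A registered user account exists.
  Test Steps:        1. Open the login page.
                     2. Enter a valid username and an incorrect password.
                     3. Click Login.
  Expected Results:  An "Invalid credentials" error is displayed and the
                     user remains on the login page.
  Priority:          Medium
  Type of Test Case: Functional
```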