Test Plans
Write effective AI-powered tests using natural language
Test plans are natural language descriptions of what to test. Test-Lab's AI agents interpret your instructions and execute them like a human tester would.
Writing Effective Test Plans
Be Specific
Good test plans clearly state:
- What to do (actions)
- What to verify (acceptance criteria)
❌ Bad: "Test the login"
✅ Good: "Go to /login, enter email 'test@example.com' and password 'Test123!',
click the login button, and verify the user is redirected to the dashboard"

Include Acceptance Criteria
List specific things to verify:
Go to the checkout page and verify:
1. The cart summary shows the correct items
2. The total price is calculated correctly
3. The "Place Order" button is visible and enabled
4. Entering an invalid card number shows an error message

Use Natural Language
Write like you're explaining to a human tester:
Navigate to the blog section. Find an article and click on it.
Verify the article page loads with:
- A title
- Author name
- Publication date
- Article content
- A comments section

Test Modes
Quick Mode
- Single agent execution
- Faster turnaround (~2-5 minutes)
- Lower compute usage
- Best for: frequent runs, iteration, smoke tests
Deep Mode
- Multiple agents working together
- More thorough exploration (~5-10 minutes)
- Higher compute usage
- Best for: critical flows, pre-release testing
Deep tests consume more credits than Quick tests. See Pricing for details.
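If you trigger runs over the API (see Running Test Plans below), the mode is selected with the testType field. The sketch below is a minimal example: the "quickTest" value comes from the API example on this page, while "deepTest" is an assumed name for Deep mode and may differ in your API version.

# Sketch: choose the mode per run via testType.
# "quickTest" appears in the API example below; "deepTest" is an assumed value for Deep mode.
curl -X POST https://test-lab.ai/api/v1/run \
  -H "Authorization: Bearer tl_xxxxx" \
  -H "Content-Type: application/json" \
  -d '{"testPlanId": YOUR_TEST_PLAN_ID, "testType": "deepTest"}'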
Test Plan Structure
Simple Test
Go to https://example.com and verify the homepage loads with a navigation menu

Multi-Step Test
1. Navigate to the signup page
2. Fill in the registration form with valid data
3. Submit the form
4. Verify the confirmation email message appears
5. Check that the user can access the dashboard

Test with Edge Cases
Test the search functionality:
Happy path:
- Search for "laptop" and verify results appear
- Click a result and verify the product page loads
Edge cases:
- Search with an empty query and verify an appropriate message appears
- Search for "xyznonexistent123" and verify "no results" message
- Search with special characters like "<script>" and verify it's handled safely

Linking to Projects
Associate test plans with projects to:
- Automatically include the project URL
- Organize related tests
- Enable project-level webhook notifications
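The link itself is typically configured in the dashboard when you create or edit a test plan. If your API version also accepts a project reference when starting a run, a request might look roughly like the sketch below; the projectId field is purely hypothetical and not documented on this page, so treat it as an illustration rather than a confirmed parameter.

# Hypothetical sketch: "projectId" is NOT a documented parameter on this page.
# The confirmed way to link a plan to a project is in the dashboard.
curl -X POST https://test-lab.ai/api/v1/run \
  -H "Authorization: Bearer tl_xxxxx" \
  -H "Content-Type: application/json" \
  -d '{"testPlanId": YOUR_TEST_PLAN_ID, "projectId": YOUR_PROJECT_ID}'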
Running Test Plans
From Dashboard
1. Go to Test Plans
2. Click Run on any test plan
3. Choose Quick or Deep mode
4. View results when complete
Via API
curl -X POST https://test-lab.ai/api/v1/run \
-H "Authorization: Bearer tl_xxxxx" \
-H "Content-Type: application/json" \
-d '{"testPlanId": YOUR_TEST_PLAN_ID, "testType": "quickTest"}'Multiple Plans
Run multiple test plans at once:
curl -X POST https://test-lab.ai/api/v1/run \
-H "Authorization: Bearer tl_xxxxx" \
-H "Content-Type: application/json" \
-d '{"testPlanIds": [ID_1, ID_2, ID_3]}'Default Test Type
Set a default test type per plan:
- Plans default to Quick mode
- Override per-plan in settings
- Override at runtime via API
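A sketch of the runtime override, assuming that omitting testType falls back to the plan's configured default while supplying it (as in the API examples above) overrides it for that run:

# Assumed behavior: without testType, the plan's configured default applies.
curl -X POST https://test-lab.ai/api/v1/run \
  -H "Authorization: Bearer tl_xxxxx" \
  -H "Content-Type: application/json" \
  -d '{"testPlanId": YOUR_TEST_PLAN_ID}'

# Supplying testType overrides the default for this run only.
curl -X POST https://test-lab.ai/api/v1/run \
  -H "Authorization: Bearer tl_xxxxx" \
  -H "Content-Type: application/json" \
  -d '{"testPlanId": YOUR_TEST_PLAN_ID, "testType": "quickTest"}'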
Results
Test results include:
- Pass/Fail status
- Acceptance Criteria breakdown (each verified item)
- Steps taken with screenshots
- Issues found with severity (major/minor)
- AI Reasoning explaining decisions
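If you retrieve a run's result as JSON, a small jq filter can pull out the pieces above. The field names in this sketch (status, acceptanceCriteria, issues, severity) are illustrative guesses at how the listed information might be keyed, not a documented schema.

# Illustrative only: field names are assumed, not a documented schema.
# Expects a run result saved locally as result.json.
jq '{
  status: .status,
  failedCriteria: [.acceptanceCriteria[]? | select(.passed != true) | .description],
  majorIssues: [.issues[]? | select(.severity == "major") | .title]
}' result.json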
Tips for Better Tests
- One flow per plan - Keep tests focused on a single user journey
- Be explicit - Don't assume the AI knows your app's context
- Include URLs - Specify paths like /login or /dashboard
- List verifications - Numbered lists help track what's checked
- Handle dynamic data - Use patterns like "verify a product is shown" rather than specific content