We kept hearing the same thing from early users: "This is cool, but I need to run it from CI."
Fair point. Running tests manually from a dashboard is fine for exploring, but real teams need test automation baked into their deployment pipeline. So we built it.
Now you can trigger AI testing directly from your CI/CD workflow. Same intelligent agents, same adaptive test execution - just integrated where it matters most.
What we shipped
Three things that work together:
- API Keys - Generate keys from your dashboard and use them to trigger tests programmatically. Keys are hashed on our end (we never store the raw key), and you can revoke them anytime.
- The /api/v1/run endpoint - POST to it with your test plan ID and you're off. It kicks off the test and returns a job ID immediately, so your pipeline doesn't block.
- Webhooks - Configure a URL and we'll POST the results when your test finishes. Payloads are HMAC-signed so you can verify it's actually us.
Setting it up
The whole thing takes maybe 5 minutes.
1. Get an API key
Go to Settings in your dashboard and create a new key. Copy it somewhere safe because you won't see it again.
2. Add it to GitHub Actions
Drop this in .github/workflows/testlab.yml:
```yaml
name: Test-Lab
on:
  push:
    branches: [main]
  pull_request:
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - name: Run Test-Lab
        run: |
          curl -X POST https://test-lab.ai/api/v1/run \
            -H "Authorization: Bearer ${{ secrets.TESTLAB_API_KEY }}" \
            -H "Content-Type: application/json" \
            -d '{"testPlanId": 123, "buildId": "${{ github.sha }}"}'
```

Store your key in GitHub Secrets as TESTLAB_API_KEY. Don't commit it to the repo (obviously).
3. Set up a webhook (optional)
If you want to know when tests finish, add a webhook URL to your project settings. We'll POST something like this:
```json
{
  "event": "run.completed",
  "jobId": "abc-123",
  "status": "completed",
  "result": {
    "passed": 5,
    "failed": 0
  }
}
```

The request includes an X-TestLab-Signature header so you can verify it's legit. Standard HMAC-SHA256 stuff.
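Verifying the signature on your receiver takes a few lines. A minimal Python sketch, assuming the X-TestLab-Signature header carries a hex-encoded HMAC-SHA256 of the raw request body (the digest encoding is our assumption here, so confirm it against the docs):

```python
import hashlib
import hmac

def verify_signature(secret: bytes, body: bytes, signature: str) -> bool:
    """Check an X-TestLab-Signature header against the raw request body.

    Assumes the signature is the hex HMAC-SHA256 of the body.
    Uses compare_digest to avoid timing side channels.
    """
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```

Always verify against the raw bytes you received, before parsing the JSON, since re-serializing can change whitespace and break the comparison.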
Why buildId matters
That buildId field in the API call links your test run to a specific commit. This means you can:
- See all test runs for a given commit
- Track when regressions were introduced
- Compare results across deployments
We show this on the Builds page. It's way easier to debug when you can see "oh, tests started failing after commit abc123."
Quick vs Deep mode
You can pass "testType": "quickTest" or "testType": "deepTest" in the API call.
Quick mode is faster and cheaper - good for PRs where you want fast feedback. Deep mode is more thorough - we'd recommend it for your main branch or before releases.
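In a pipeline you'd usually pick the mode from context rather than hard-coding it. A tiny hypothetical helper encoding that recommendation (quick on PRs, deep on main):

```python
def pick_test_type(git_ref: str, is_pull_request: bool) -> str:
    """Map CI context to a testType value: quick feedback on PRs,
    a thorough deep run on the main branch (illustrative policy only)."""
    if is_pull_request:
        return "quickTest"
    if git_ref in ("main", "refs/heads/main"):
        return "deepTest"
    return "quickTest"
```

You'd feed the result into the testType field of the /api/v1/run request body.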
Why AI testing in CI beats traditional test automation
Most test automation in CI pipelines is fragile. Selenium scripts break when the UI changes. Cypress tests need constant selector updates. Your team spends hours fixing flaky tests instead of shipping features.
AI testing changes this equation:
- No scripts to maintain - Describe tests in plain English, AI handles execution
- Self-healing tests - UI changes don't break your pipeline
- Real user behavior - AI navigates your app like an actual user would
- Faster feedback loops - Quick mode for PRs, Deep mode for releases
This is test automation that actually scales with your team. Add more tests without adding more maintenance burden.
What's next
We're working on a few things:
- A GitHub Action that wraps all this up nicely
- PR status checks that update automatically
- Slack integration for the webhook receiver
If there's something specific you need for your CI setup, let us know. We're building this based on what people actually need.
Check out the full docs for more examples including GitLab CI, or hit us up if you run into issues.
