Chapter 12
Cursor + QA Teams
Quality assurance has always been the bottleneck. Developers ship features faster than QA can validate them. Test coverage suffers. Regression suites become unmaintainable. Cursor doesn't just speed up test writing—it changes what's possible.
12.1 AI-Assisted Test Case Generation
From Hours to Minutes
Traditional scenario: Product delivers a feature spec. Developers implement it in 3 days. QA then spends 5 days designing test cases, 3 days implementing them, and 2 more debugging. The math doesn't work.
Cursor changes the equation.
Example: Unit Tests from Function Code
We have a payment validator that needs comprehensive testing:
// src/payment/validator.js
function validatePaymentAmount(amount, currency, userTier) {
if (amount <= 0) throw new Error('Amount must be positive');
const limits = {
basic: { USD: 1000, EUR: 900 },
premium: { USD: 10000, EUR: 9000 },
};
const maxAmount = limits[userTier]?.[currency];
if (!maxAmount) throw new Error('Invalid tier or currency');
if (amount > maxAmount) throw new Error('Amount exceeds limit');
return true;
}
Prompt to Cursor:
Generate Jest unit tests for @src/payment/validator.js
Requirements:
- Test happy path for each tier
- Test boundary conditions (at limit, over limit)
- Test error cases: negative, zero, invalid currency/tier
- Use descriptive test names
- Aim for 100% coverage
Cursor generates:
describe('validatePaymentAmount', () => {
test('accepts valid basic tier payment under limit', () => {
expect(() => validatePaymentAmount(500, 'USD', 'basic'))
.not.toThrow();
});
test('accepts payment at exact limit', () => {
expect(() => validatePaymentAmount(1000, 'USD', 'basic'))
.not.toThrow();
});
test('rejects payment exceeding limit by 1 cent', () => {
expect(() => validatePaymentAmount(1000.01, 'USD', 'basic'))
.toThrow('Amount exceeds limit');
});
test('throws error for negative amount', () => {
expect(() => validatePaymentAmount(-100, 'USD', 'basic'))
.toThrow('Amount must be positive');
});
test('throws error for zero amount', () => {
expect(() => validatePaymentAmount(0, 'USD', 'basic'))
.toThrow('Amount must be positive');
});
test('throws error for invalid currency', () => {
expect(() => validatePaymentAmount(500, 'GBP', 'basic'))
.toThrow('Invalid tier or currency');
});
test('throws error for invalid tier', () => {
expect(() => validatePaymentAmount(500, 'USD', 'enterprise'))
.toThrow('Invalid tier or currency');
});
test('accepts premium tier at higher limit', () => {
expect(() => validatePaymentAmount(10000, 'USD', 'premium'))
.not.toThrow();
});
});
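One detail the generated suite leaves implicit is the import wiring: it assumes the validator is exported from its module and required in the test file, along these lines (the relative paths are assumptions):

// Top of the test file; adjust the path to your project layout
const { validatePaymentAmount } = require('../src/payment/validator');
// And in src/payment/validator.js:
// module.exports = { validatePaymentAmount };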
Integration Tests from API Specs
Prompt:
Generate integration tests for POST /orders endpoint:
@src/api/orders.js
Test cases:
- Successful order creation
- Failed payment scenario
- Empty items array rejection
- Unauthenticated request
Use Supertest + Jest, mock PaymentService
Key output (snippet):
describe('POST /orders', () => {
beforeEach(() => {
jest.clearAllMocks();
});
test('creates order with valid payment', async () => {
PaymentService.charge.mockResolvedValue({ success: true });
const response = await request(app)
.post('/orders')
.set('Authorization', 'Bearer valid-token')
.send({ items: [{ id: 'prod-1', qty: 2 }] })
.expect(201);
expect(response.body.order).toBeDefined();
expect(Order.create).toHaveBeenCalled();
});
test('marks order as payment_failed when charge fails', async () => {
PaymentService.charge.mockResolvedValue({ success: false });
const response = await request(app)
.post('/orders')
.set('Authorization', 'Bearer valid-token')
.send({ items: [{ id: 'prod-1' }] })
.expect(402);
expect(response.body.error).toBe('Payment failed');
});
test('rejects empty items array', async () => {
const response = await request(app)
.post('/orders')
.set('Authorization', 'Bearer valid-token')
.send({ items: [] })
.expect(400);
expect(response.body.error).toContain('items');
});
test('rejects unauthenticated request', async () => {
await request(app)
.post('/orders')
.send({ items: [{ id: 'prod-1' }] })
.expect(401);
});
});
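Note that the snippet relies on PaymentService and Order being mocked at module level. That scaffolding isn't shown in the output above; a minimal version (module paths are assumptions) might look like:

const request = require('supertest');
const app = require('../src/app'); // assumed app entry point
const PaymentService = require('../src/services/paymentService'); // assumed path
const Order = require('../src/models/order'); // assumed path

// Auto-mock both modules so methods like PaymentService.charge become jest.fn()s
jest.mock('../src/services/paymentService');
jest.mock('../src/models/order');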
Best Practices for Test Generation
1. Provide Clear Context
❌ Bad:
"Generate tests for this function"✅ Good:
Generate Jest tests for @src/validator.js covering:
- Happy path
- Boundary conditions (null, empty, max values)
- Error cases
Use AAA pattern (Arrange-Act-Assert).
2. Iterate and Refine
The tests don't handle null inputs. Add tests for:
- null
- undefined
- empty string
- invalid Date objects
3. Request Specific Patterns
Use test.each for parameterized tests (see the sketch after this list)
Group with describe blocks
Add comments for complex setups
Follow patterns from @tests/examples/
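As referenced in the list above, test.each collapses repetitive cases into one table-driven block. A sketch against the validator from 12.1 (the limit values come from its limits map):

test.each([
  ['basic', 'USD', 1000],
  ['basic', 'EUR', 900],
  ['premium', 'USD', 10000],
  ['premium', 'EUR', 9000],
])('%s tier in %s: accepts the %d limit, rejects just above it', (tier, currency, limit) => {
  expect(() => validatePaymentAmount(limit, currency, tier)).not.toThrow();
  expect(() => validatePaymentAmount(limit + 0.01, currency, tier))
    .toThrow('Amount exceeds limit');
});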
12.2 Automating Regression Testing
The Regression Challenge
Every new feature risks breaking existing functionality. The traditional approach: add 10 features, write 100 regression tests, and soon the suite takes hours to run and turns flaky.
Cursor helps break this cycle.
Generating Baseline Tests Before Refactoring
We're refactoring an auth endpoint. We need regression tests to ensure nothing breaks.
Prompt:
Generate regression tests for /login endpoint before refactoring:
@src/api/auth.js
Verify current behavior:
- Successful login returns token + user data
- Failed login returns 401
- JWT token is valid
- Response doesn't leak password hash
Use Supertest + Jest
Cursor generates baseline tests:
describe('POST /login - Baseline Regression', () => {
test('successful login returns token and sanitized user', async () => {
const user = await createTestUser({
email: 'test@example.com',
password: 'SecurePass123!'
});
const response = await request(app)
.post('/login')
.send({
email: 'test@example.com',
password: 'SecurePass123!'
})
.expect(200);
expect(response.body.token).toBeDefined();
const decoded = jwt.verify(
response.body.token,
JWT_SECRET
);
expect(decoded.userId).toBe(user.id);
// Ensure no password leak
expect(response.body.user.passwordHash).toBeUndefined();
});
test('invalid password returns 401 without revealing user existence',
async () => {
await createTestUser({ email: 'test@example.com' });
const response = await request(app)
.post('/login')
.send({
email: 'test@example.com',
password: 'WrongPass!'
})
.expect(401);
expect(response.body.error).toBe('Invalid credentials');
// Doesn't reveal if email exists (prevents enumeration)
}
);
});
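createTestUser here is a project test-data factory, not a library call. If your suite doesn't have one yet, a minimal sketch (the User model import and bcrypt usage are assumptions):

const bcrypt = require('bcrypt');
const User = require('../src/models/user'); // assumed path

// Hypothetical factory: stores a hash, never the raw password
async function createTestUser({ email, password = 'SecurePass123!' } = {}) {
  return User.create({
    email,
    passwordHash: await bcrypt.hash(password, 10),
  });
}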
Self-Healing Tests
UI changes break tests even when functionality is correct: a button ID changes, and 50 tests fail. Ask Cursor to make the suite resilient:
Review these Playwright tests and make them resilient to UI changes:
@tests/e2e/checkout.spec.js
Replace:
- Brittle CSS selectors (#submit-btn)
- Hardcoded waits (waitForTimeout(5000))
With:
- data-testid attributes
- Smart waits (waitForSelector with conditions)
Before (brittle):
await page.click('#submit-btn'); // Breaks if ID changes
await page.waitForTimeout(5000); // Arbitrary wait
After (resilient):
await page.click('[data-testid="submit-button"]'); // Explicit test hook
await page.waitForSelector('[data-testid="confirmation"]', {
state: 'visible',
timeout: 30000
}); // Smart wait
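On recent Playwright versions the same resilience comes almost for free: getByTestId locators and web-first assertions auto-wait, so many explicit waitForSelector calls can be dropped. A sketch, assuming the expect that ships with @playwright/test:

// Assumes Playwright Test's built-in expect (@playwright/test)
await page.getByTestId('submit-button').click(); // locator auto-waits for actionability
await expect(page.getByTestId('confirmation')).toBeVisible({ timeout: 30000 });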
12.3 Test Coverage Analysis
Beyond the Percentage
85% coverage doesn't mean 85% tested. A test can execute every line of a function without ever verifying correctness.
Identify meaningful gaps:
Analyze test coverage for @src/billing/subscriptionManager.js
Current: 85% lines, 70% branches
Identify:
1. Untested code paths
2. Missing edge cases
3. Error handling gaps
For each gap, generate a test case
Cursor analyzes and finds:
// Gap #1: No active subscription case never tested
test('throws error when user has no subscription', async () => {
const user = await createTestUser();
// Don't create subscription
await expect(upgradeSubscription(user.id, 'premium'))
.rejects.toThrow('No active subscription');
});
// Gap #2: Upgrading to same plan (no-op case)
test('returns current subscription when upgrading to same plan',
async () => {
const sub = await createSubscription(userId, 'basic');
const result = await upgradeSubscription(userId, 'basic');
expect(result.id).toBe(sub.id);
expect(chargeUpgradeDifference).not.toHaveBeenCalled();
}
);
// Gap #3: Payment failure during upgrade
test('throws PaymentError when upgrade charge fails', async () => {
chargeUpgradeDifference.mockResolvedValue({ success: false });
await expect(upgradeSubscription(userId, 'premium'))
.rejects.toThrow(PaymentError);
});
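The line and branch figures quoted in the prompt come from an ordinary coverage run; Jest prints a per-file summary of line and branch percentages with:

npx jest --coverage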
Property-Based Testing
Traditional tests check specific inputs. Property tests verify invariants for any input.
Prompt:
Generate property-based tests using fast-check for:
@src/utils/arrayUtils.js
Functions: sortBy, deduplicate, partition, chunk
Verify properties hold for:
- Empty arrays, single elements, large arrays (1000+)
- Arrays with nulls/undefined
Cursor generates:
const fc = require('fast-check');
describe('arrayUtils - Property Tests', () => {
test('sortBy maintains array length', () => {
fc.assert(
fc.property(
fc.array(fc.record({ id: fc.integer() })),
(array) => {
const sorted = sortBy(array, 'id');
expect(sorted.length).toBe(array.length);
}
)
);
});
test('deduplicate removes all duplicates', () => {
fc.assert(
fc.property(
fc.array(fc.integer()),
(array) => {
const deduped = deduplicate(array);
const uniqueSet = new Set(deduped);
// No duplicates remain
expect(uniqueSet.size).toBe(deduped.length);
}
)
);
});
test('chunk then flatten recovers original', () => {
fc.assert(
fc.property(
fc.array(fc.integer()),
fc.integer({ min: 1, max: 100 }),
(array, chunkSize) => {
const chunks = chunk(array, chunkSize);
expect(chunks.flat()).toEqual(array);
}
)
);
});
});
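The length property above is deliberately weak; in practice you'd ask Cursor to add an ordering property as well, e.g. that sortBy output is non-decreasing on the sort key:

test('sortBy orders elements by key (non-decreasing)', () => {
  fc.assert(
    fc.property(
      fc.array(fc.record({ id: fc.integer() })),
      (array) => {
        const sorted = sortBy(array, 'id');
        for (let i = 1; i < sorted.length; i++) {
          expect(sorted[i].id).toBeGreaterThanOrEqual(sorted[i - 1].id);
        }
      }
    )
  );
});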
12.4 The QA Role Evolution
From Test Writer to Quality Architect
With AI handling test implementation, QA engineers shift focus to higher-leverage activities:
Traditional QA Focus:
- Writing test scripts
- Manual test execution
- Bug reproduction
- Test maintenance
AI-Era QA Focus:
- Test strategy design
- Risk assessment and prioritization
- Edge case discovery
- Quality metrics and insights
- Security and performance testing
- Test architecture and frameworks
Quality Metrics That Matter
Track these to ensure AI assistance improves actual quality:
| Metric | Target | Warning Sign |
|---|---|---|
| Test Generation Time | < 30 min per feature | > 2 hours (AI not helping) |
| Coverage Increase | +10-15% per quarter | Stagnant or declining |
| Flaky Test Rate | < 2% | > 5% (brittle AI tests) |
| Bug Escape Rate | Declining | Rising (false confidence) |
| Test Execution Time | < 10 min for full suite | > 30 min (needs optimization) |
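Flaky test rate is the hardest of these to eyeball. One low-tech way to estimate it: run the suite twice with jest --json and count tests whose outcome changed between otherwise identical runs. A sketch (the result file names are assumptions):

// flaky-rate.js: compare two `jest --json` result files from identical runs
const run1 = require('./results-run1.json'); // assumed output paths
const run2 = require('./results-run2.json');

const statusByTest = (run) =>
  new Map(
    run.testResults.flatMap((file) =>
      file.assertionResults.map((t) => [t.fullName, t.status])
    )
  );

const first = statusByTest(run1);
const second = statusByTest(run2);
let flaky = 0;
for (const [name, status] of first) {
  if (second.has(name) && second.get(name) !== status) flaky += 1; // outcome flipped
}
console.log(`Flaky rate: ${((flaky / first.size) * 100).toFixed(1)}%`);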
Building a Test Prompt Library
Create reusable test generation patterns:
prompts/testing/
├── unit-tests.md # Standard unit test template
├── integration-api.md # API integration tests
├── e2e-user-flow.md # End-to-end scenarios
├── security-tests.md # Security test cases
├── performance-tests.md # Load and stress tests
└── regression-suite.md # Regression test generation
Example template:
# Unit Test Generation Template
## Context
Target file: [specify file path]
Framework: [Jest/Mocha/PyTest]
Existing patterns: [reference similar tests]
## Requirements
- Test all public methods
- Cover happy path + edge cases
- Include boundary conditions
- Test error handling
- Use descriptive test names
- Follow AAA pattern
## Edge Cases to Consider
- Null/undefined inputs
- Empty arrays/objects
- Invalid types
- Boundary values (0, -1, MAX_INT)
- Concurrent access (if applicable)
## Output Format
- Group tests with describe blocks
- Use test.each for similar cases
- Include setup/teardown if needed
- Add comments for complex assertions
12.5 Common Pitfalls in AI-Generated Tests
Pitfall 1: Tests That Always Pass
AI sometimes generates tests that don't actually verify behavior:
❌ Bad: Always passes
test('user can login', async () => {
const response = await login('test@example.com', 'password');
expect(response).toBeDefined(); // Too vague
});
✅ Good: Actually verifies behavior
test('successful login returns valid JWT token', async () => {
const response = await login('test@example.com', 'password');
expect(response.token).toBeDefined();
expect(response.user.email).toBe('test@example.com');
expect(response.user.passwordHash).toBeUndefined();
// Verify token is valid
const decoded = jwt.verify(response.token, JWT_SECRET);
expect(decoded.userId).toBeDefined();
});
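A systematic way to catch assertion-free tests is mutation testing: if Stryker mutates the code and the suite still passes, the tests aren't verifying anything. A minimal config sketch (assumes @stryker-mutator/core with the Jest runner installed):

// stryker.conf.js
module.exports = {
  mutate: ['src/payment/validator.js'], // start small: one well-tested file
  testRunner: 'jest',
  reporters: ['clear-text'],
};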
Pitfall 2: Missing Test Data Cleanup
AI-generated tests may not clean up properly:
❌ Bad: Leaves test data behind
test('creates user', async () => {
await createUser({ email: 'test@example.com' });
// No cleanup
});
✅ Good: Cleanup in afterEach
afterEach(async () => {
await User.deleteMany({ email: /test.*@example\.com/ });
});
test('creates user', async () => {
await createUser({ email: 'test@example.com' });
const user = await User.findOne({ email: 'test@example.com' });
expect(user).toBeDefined();
});
Pitfall 3: Flaky Tests from Race Conditions
❌ Bad: Race condition
test('processes async job', async () => {
triggerAsyncJob();
const result = await getJobResult(); // May not be ready
expect(result.status).toBe('completed');
});
✅ Good: Proper async handling
test('processes async job', async () => {
const jobId = await triggerAsyncJob();
// Wait for completion with timeout
const result = await waitFor(() => getJobResult(jobId), {
timeout: 5000,
interval: 100
});
expect(result.status).toBe('completed');
});
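waitFor here is not a Jest built-in; it's either imported from a library such as @testing-library or defined as a small polling helper. A minimal sketch of the latter, matching the call signature above:

// Polls fn until it returns a truthy value or the timeout elapses
async function waitFor(fn, { timeout = 5000, interval = 100 } = {}) {
  const deadline = Date.now() + timeout;
  let lastError;
  while (Date.now() < deadline) {
    try {
      const result = await fn();
      if (result) return result;
    } catch (err) {
      lastError = err; // keep polling; surface the last error on timeout
    }
    await new Promise((resolve) => setTimeout(resolve, interval));
  }
  throw lastError ?? new Error(`waitFor timed out after ${timeout}ms`);
}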
Key Takeaways for QA Teams
Test Generation is Transformational
Generate comprehensive test suites in minutes instead of hours, shifting focus from writing to strategy.
Maintain Quality Standards
AI speeds up test creation, but humans must verify tests actually check correctness, not just coverage.
Build Test Infrastructure
Create prompt libraries, test data factories, and reusable patterns that make AI-generated tests consistent and maintainable.
Evolve the Role
QA engineers become quality architects—designing test strategies, discovering edge cases, and ensuring security and performance.
Measure What Matters
Track meaningful metrics (bug escape rate, flaky tests) alongside speed metrics to ensure AI improves actual quality.