Chapter 5
Level 2: Mid-Level Developer
You've shipped twenty features, fixed hundreds of bugs, and can implement most requirements without googling basic syntax. You're no longer learning fundamentals; you're delivering value. The bottleneck isn't understanding how to code; it's the sheer volume of typing, testing, and documentation required to ship production-quality features.
This is where Cursor transforms from teacher to partner. At the mid-level, AI becomes your pair programming buddy: someone who drafts implementations while you focus on architecture, catches edge cases you might miss, and handles the tedious parts of software delivery while you maintain quality control.
The shift is subtle but critical. Juniors use AI to understand. Mid-level developers use AI to accelerate. This chapter shows how to ship features 2-3x faster without sacrificing quality—and sometimes improving it.
Accelerating Feature Delivery Without Compromising Quality
Traditional feature delivery follows a predictable timeline: two days for implementation, one day for tests, half a day for code review feedback, another half day for fixes. Cursor compresses this dramatically by handling the mechanical work while you focus on correctness and architecture.
The rapid prototyping pipeline
Feature development becomes a structured conversation between you and AI:
Phase 1: Specification (15 minutes)
Instead of diving into implementation, start by writing clear requirements:
# Feature: Password Reset Flow
## Requirements
- User enters email, receives reset link valid for 1 hour
- Reset link contains cryptographically secure token
- Token stored hashed in database (bcrypt, 10 rounds)
- Clicking link shows form to enter new password
- Password must meet complexity requirements
- After reset, invalidate all existing sessions
- Rate limit: 3 reset requests per hour per email
- Send confirmation email after successful reset
## Technical Constraints
- Follow existing email patterns from @services/EmailService
- Use Redis for rate limiting (pattern: @middleware/rateLimiter)
- Transaction boundaries around DB writes
- Audit logging for security events
## Success Criteria
- All tests pass (unit + integration)
- No SQL injection vulnerabilities
- Rate limiting prevents abuse
- Works on mobile (email client compatibility)
This takes 15 minutes to write but saves hours of back-and-forth. Clear specifications lead to correct implementations on the first attempt.
Phase 2: Architecture Planning (10 minutes)
Before generating code, ask for an execution plan:
Review the password reset specification above.
Generate an implementation plan covering:
1. Database schema changes needed
2. API endpoints with request/response formats
3. Service layer responsibilities
4. External dependencies (email, Redis)
5. Testing strategy (unit, integration, security)
6. Migration approach (backward compatibility)
7. Monitoring and observability
Identify potential issues before we start coding.
Cursor generates a comprehensive plan. You review it, spot that the token expiry approach won't work across time zones, adjust the requirement, and regenerate the plan. This catches architectural issues before any code exists.
Phase 3: Incremental Implementation (2-3 hours)
Now implement step-by-step, validating each piece:
Implement step 1: Database migration for password_reset_tokens table
Requirements from plan:
- Columns: id, user_id, token_hash, expires_at, used_at, created_at
- Index on token_hash for fast lookups
- Foreign key to users table with CASCADE on delete
- Add migration rollback logic
Generate both up and down migrations following patterns in @db/migrations/
Cursor generates the migration. You review it, test it locally (both up and down), then commit before moving to the next step.
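For illustration, here is a minimal sketch of what that migration might look like, assuming TypeORM (the ORM named in the sample .cursorrules later in this chapter) on PostgreSQL 13+; the class name and timestamp suffix are placeholders:

```typescript
import {
  MigrationInterface,
  QueryRunner,
  Table,
  TableForeignKey,
  TableIndex,
} from 'typeorm';

export class CreatePasswordResetTokens1700000000000 implements MigrationInterface {
  public async up(queryRunner: QueryRunner): Promise<void> {
    await queryRunner.createTable(
      new Table({
        name: 'password_reset_tokens',
        columns: [
          // gen_random_uuid() requires PostgreSQL 13+ (or the pgcrypto extension)
          { name: 'id', type: 'uuid', isPrimary: true, default: 'gen_random_uuid()' },
          { name: 'user_id', type: 'uuid', isNullable: false },
          { name: 'token_hash', type: 'varchar', isNullable: false },
          { name: 'expires_at', type: 'timestamptz', isNullable: false },
          { name: 'used_at', type: 'timestamptz', isNullable: true },
          { name: 'created_at', type: 'timestamptz', default: 'now()' },
        ],
      }),
    );

    // Fast lookup when a user clicks the reset link
    await queryRunner.createIndex(
      'password_reset_tokens',
      new TableIndex({
        name: 'idx_password_reset_tokens_token_hash',
        columnNames: ['token_hash'],
      }),
    );

    // Deleting a user deletes their outstanding reset tokens
    await queryRunner.createForeignKey(
      'password_reset_tokens',
      new TableForeignKey({
        columnNames: ['user_id'],
        referencedTableName: 'users',
        referencedColumnNames: ['id'],
        onDelete: 'CASCADE',
      }),
    );
  }

  public async down(queryRunner: QueryRunner): Promise<void> {
    // Rollback: dropping the table removes its index and foreign key too
    await queryRunner.dropTable('password_reset_tokens');
  }
}
```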
Implement step 2: Service layer for token generation
Requirements:
- Generate 32-byte random token (crypto.randomBytes)
- Hash with bcrypt before storing
- Set expiry to 1 hour from now (UTC)
- Return plain token (for email) and hash (for DB)
- Include comprehensive unit tests
Follow patterns from @services/AuthService
This incremental approach means you're never reviewing 500 lines of code at once. Each step is small, testable, and validated before proceeding.
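The heart of that service is small. A sketch under those requirements, using Node's built-in crypto module and the bcrypt package; the function and interface names are illustrative, not the real AuthService API:

```typescript
import crypto from 'crypto';
import bcrypt from 'bcrypt';

const TOKEN_BYTES = 32;
const BCRYPT_ROUNDS = 10;             // spec: bcrypt, 10 rounds
const TOKEN_TTL_MS = 60 * 60 * 1000;  // spec: link valid for 1 hour

export interface ResetToken {
  plainToken: string; // goes into the email link, never stored
  tokenHash: string;  // stored in password_reset_tokens.token_hash
  expiresAt: Date;    // absolute instant; serialize as UTC (toISOString)
}

export async function generateResetToken(): Promise<ResetToken> {
  // 32 cryptographically secure random bytes, hex-encoded so the token is URL-safe
  const plainToken = crypto.randomBytes(TOKEN_BYTES).toString('hex');

  // Hash before storing so a leaked database doesn't yield usable tokens
  const tokenHash = await bcrypt.hash(plainToken, BCRYPT_ROUNDS);

  const expiresAt = new Date(Date.now() + TOKEN_TTL_MS);

  return { plainToken, tokenHash, expiresAt };
}
```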
Phase 4: Integration and Refinement (1 hour)
Once individual pieces work, integrate them:
Now connect the pieces:
- Wire password reset endpoints to service layer
- Add rate limiting middleware
- Integrate email sending
- Add audit logging for security events
- Update API documentation
Generate integration tests covering the full flow:
1. Request reset → receive email
2. Click link → show reset form
3. Submit new password → invalidate sessions
4. Try to reuse token → rejected
5. Rate limiting → blocks after 3 attempts
The result: a complete, tested feature implemented in 4-5 hours instead of 2-3 days. The time savings come from AI handling boilerplate, tests, and documentation while you focus on architecture and correctness.
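The fourth test on that list (reusing a spent token) might look like this sketch, assuming Jest with supertest; the routes, app export, and email-capture helper are assumptions for illustration, not the real codebase's API:

```typescript
import request from 'supertest';
import { app } from '../src/app';                // hypothetical Express app export
import { getLastEmailSentTo } from './helpers';  // hypothetical captured-email helper

describe('password reset flow', () => {
  it('rejects a token that has already been used', async () => {
    // Request a reset and pull the token out of the captured email
    await request(app)
      .post('/api/password-reset')
      .send({ email: 'user@example.com' })
      .expect(200);
    const { resetToken } = getLastEmailSentTo('user@example.com');

    // First use succeeds and marks the token as used
    await request(app)
      .post('/api/password-reset/confirm')
      .send({ token: resetToken, newPassword: 'N3w-Str0ng-Pass!' })
      .expect(200);

    // Reuse is rejected
    await request(app)
      .post('/api/password-reset/confirm')
      .send({ token: resetToken, newPassword: 'An0ther-Pass!' })
      .expect(400);
  });
});
```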
Context-aware development patterns
The key to quality output is providing the right context:
Implement user profile update endpoint following our patterns:
Context:
@api/users.js (existing user endpoints)
@middleware/auth.js (authentication)
@middleware/validation.js (request validation)
@services/UserService.js (business logic layer)
Requirements:
- PATCH /api/users/:id/profile
- Allow updates: name, bio, avatar_url
- Validate: name 3-50 chars, bio max 500 chars, avatar_url valid URL
- Authorization: users can only update own profile (admins can update any)
- Optimistic locking (use version field to prevent race conditions)
- Audit log all profile changes
Return 200 with updated profile or appropriate error (400/401/403/409)
The @ references tell Cursor exactly how this code should integrate. It follows existing patterns, uses the same validation library, and maintains architectural consistency. The result feels native to your codebase, not bolted on.
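The optimistic-locking requirement is the subtle part, so here is a minimal sketch of how the service layer might enforce it, assuming node-postgres; the error class and version column follow the prompt, while everything else is illustrative:

```typescript
import { Pool } from 'pg';

const db = new Pool(); // connection settings come from the environment

class VersionConflictError extends Error {
  constructor(userId: string) {
    super(`Profile for user ${userId} was modified concurrently`);
    this.name = 'VersionConflictError'; // map to HTTP 409 in the controller
  }
}

interface ProfileChanges { name?: string; bio?: string; avatar_url?: string; }

export async function updateProfile(
  userId: string,
  changes: ProfileChanges,
  expectedVersion: number,
) {
  // The WHERE clause only matches if nobody bumped the version since we read it
  const result = await db.query(
    `UPDATE users
        SET name = COALESCE($1, name),
            bio = COALESCE($2, bio),
            avatar_url = COALESCE($3, avatar_url),
            version = version + 1
      WHERE id = $4 AND version = $5
      RETURNING id, name, bio, avatar_url, version`,
    [changes.name ?? null, changes.bio ?? null, changes.avatar_url ?? null, userId, expectedVersion],
  );

  if (result.rows.length === 0) {
    throw new VersionConflictError(userId); // the concurrent update won the race
  }
  return result.rows[0];
}
```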
Refactoring Legacy Code Safely
Mid-level developers inherit messy code regularly. A 300-line function that handles checkout, payment, inventory, and email notifications—all in one place. Traditional refactoring is risky: change too much and break something; change too little and accomplish nothing. Cursor makes aggressive refactoring safe through structured, test-backed approaches.
The safe refactoring protocol
Step 1: Understand before changing
Analyze this legacy function for refactoring opportunities:
@services/OrderService.js::processCheckout (lines 45-347)
Provide:
1. What this function does (high-level flow)
2. Business logic vs technical concerns (which is which?)
3. Dependencies and side effects
4. Code smells (duplication, high complexity, hidden coupling)
5. Risks if we refactor this
Don't suggest changes yet; just analyze.
Cursor identifies that the function handles:
- Cart validation
- Inventory checks
- Payment processing
- Order creation
- Email notifications
- Audit logging
All tangled together with 8 levels of nesting and duplicated error handling.
Step 2: Generate characterization tests
Before touching the code, capture its current behavior:
Generate comprehensive tests for processCheckout that verify
current behavior exactly (even if buggy):
Cover:
- Successful checkout (complete flow)
- Inventory insufficient (various scenarios)
- Payment failures (declined, timeout, error)
- Invalid cart states
- Concurrent checkout attempts
- Email delivery failures
Use mocks for external dependencies (Stripe, SendGrid, database).
Generate at least 20 test cases capturing all code paths.
These tests don't verify that the code is correct; they verify that refactoring doesn't change behavior. If a test fails after refactoring, you introduced a regression.
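One such test might look like this sketch, assuming Jest module mocks; the client-wrapper paths are hypothetical stand-ins for however the project wraps Stripe and SendGrid:

```typescript
import { processCheckout } from '../src/services/OrderService';
import * as stripe from '../src/clients/stripe';     // hypothetical wrapper module
import * as sendgrid from '../src/clients/sendgrid'; // hypothetical wrapper module

jest.mock('../src/clients/stripe');
jest.mock('../src/clients/sendgrid');

describe('processCheckout (characterization)', () => {
  it('declined payment: throws and sends no email', async () => {
    (stripe.charge as jest.Mock).mockRejectedValue(new Error('card_declined'));

    const cart = { items: [{ sku: 'ABC-1', qty: 1 }], total: 4999 };

    // Pin down what the code does today, not what it should do
    await expect(processCheckout(cart, 'card_123', { id: 'user_1' }))
      .rejects.toThrow('card_declined');
    expect(sendgrid.send).not.toHaveBeenCalled();
  });
});
```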
Step 3: Incremental extraction
Now refactor in small, safe steps:
Refactor processCheckout - Step 1: Extract inventory checking
Extract inventory validation logic into:
- Function: validateInventoryAvailability(cartItems)
- Returns: { available: boolean, insufficientItems: [] }
- Include: current inventory check logic (don't improve it yet)
Update processCheckout to call the new function.
Run all tests - they should still pass.
Commit this change. Tests pass. The function is slightly better, but behavior is unchanged.
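The extracted function might come out looking roughly like this; the signature matches the prompt, while the repository call is a stand-in for whatever inventory lookup the original code performed:

```typescript
interface CartItem { sku: string; qty: number; }
interface InventoryCheck { available: boolean; insufficientItems: CartItem[]; }

// Hypothetical stand-in for the project's existing inventory data access
declare const inventoryRepo: { getStockLevel(sku: string): Promise<number> };

// Logic lifted verbatim from processCheckout, deliberately not improved yet,
// so the characterization tests keep passing
export async function validateInventoryAvailability(
  cartItems: CartItem[],
): Promise<InventoryCheck> {
  const insufficientItems: CartItem[] = [];
  for (const item of cartItems) {
    const stock = await inventoryRepo.getStockLevel(item.sku);
    if (stock < item.qty) {
      insufficientItems.push(item);
    }
  }
  return { available: insufficientItems.length === 0, insufficientItems };
}
```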
Refactor processCheckout - Step 2: Extract payment processing
Extract payment logic into:
- Function: processPayment(orderTotal, paymentMethod)
- Returns: { success: boolean, transactionId, error }
- Handle: Stripe API calls, retries, error mapping
Update processCheckout to use this.
Run all tests.
Continue incrementally: extract email notifications, audit logging, and order creation. After 6-8 steps, the original function is now:
async function processCheckout(cart, paymentMethod, user) {
  // Validation
  validateCart(cart);
  const inventoryCheck = await validateInventoryAvailability(cart.items);
  if (!inventoryCheck.available) {
    throw new InsufficientInventoryError(inventoryCheck.insufficientItems);
  }

  // Process payment
  const payment = await processPayment(cart.total, paymentMethod);
  if (!payment.success) {
    throw new PaymentFailedError(payment.error);
  }

  // Create order and notify
  const order = await createOrder(cart, payment.transactionId, user);
  await sendOrderConfirmation(order, user.email);
  await logAuditEvent('order_created', order.id, user.id);

  return order;
}
Clean, readable, testable. Each extracted function can now be tested and improved independently.
Step 4: Add new tests for extracted functions
Now that functions are separated, add comprehensive tests for each:
For validateInventoryAvailability:
- Test edge cases (zero quantity, negative, fractional)
- Test concurrent inventory checks
- Test race conditions (inventory changes between check and order)
For processPayment:
- Test retry logic (network failures)
- Test idempotency (duplicate charge prevention)
- Test webhook handling (async payment confirmation)
These new tests catch issues the characterization tests missed because the original code never handled these cases correctly.
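A retry-logic test from that list could look like this sketch, reusing the same hypothetical Stripe wrapper as before and assuming processPayment retries transient network errors:

```typescript
import { processPayment } from '../src/services/PaymentService'; // hypothetical path
import * as stripe from '../src/clients/stripe';

jest.mock('../src/clients/stripe');

it('retries transient network failures before succeeding', async () => {
  (stripe.charge as jest.Mock)
    .mockRejectedValueOnce(new Error('ECONNRESET')) // attempt 1: connection drop
    .mockRejectedValueOnce(new Error('ETIMEDOUT'))  // attempt 2: timeout
    .mockResolvedValueOnce({ id: 'txn_123' });      // attempt 3: success

  const result = await processPayment(4999, 'card_123');

  expect(result).toEqual({ success: true, transactionId: 'txn_123', error: null });
  expect(stripe.charge).toHaveBeenCalledTimes(3);
});
```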
The documentation boost
Refactored code needs updated documentation:
Generate comprehensive documentation for the refactored checkout system:
1. Update function-level JSDoc for each extracted function
2. Create architecture diagram showing the flow
3. Write README section explaining:
- How checkout works (sequence diagram)
- Error handling strategy
- Retry and idempotency guarantees
- Testing approach
4. Update API documentation for the checkout endpoint
Documentation becomes a 10-minute task instead of an hour-long chore.
Integrating Cursor into PR Workflows
Pull requests are where team collaboration happens—and where AI can either accelerate or bottleneck your process. The key is using AI to improve PR quality without replacing human judgment.
Pre-PR cleanup with AI
Before opening a PR, run through quality checks:
Review my changes for this PR and suggest improvements:
Files changed: [git diff --name-only]
Check for:
1. Unused imports or variables
2. Missing error handling
3. Inconsistent formatting
4. Incomplete JSDoc comments
5. Magic numbers that should be constants
6. Overly complex expressions that could be simplified
For each issue: show the fix, don't just describe it.
Cursor finds three unused imports, two missing error handlers, and five magic numbers. You fix them before the PR goes out. The reviewer focuses on architecture, not style.
AI-enhanced code review
Cursor can act as a first-pass reviewer:
Review this PR as a senior engineer would:
@pull-request/diff
Focus on:
- Security vulnerabilities (injection, XSS, auth bypass)
- Performance issues (N+1 queries, unnecessary loops)
- Error handling gaps
- Testing coverage (what's missing?)
- Edge cases not considered
Be specific: cite line numbers, explain the issue, suggest fixes.
Cursor flags:
- Line 45: SQL query vulnerable to injection (use parameterized query)
- Line 89: No handling for null user (will crash if user deleted)
- Line 112: Synchronous file read in async function (blocks event loop)
- Missing tests for the failure path when API times out
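The first finding, for instance, is a one-line fix. A sketch assuming a node-postgres client (the $1 placeholder style):

```typescript
import { Pool } from 'pg';

const db = new Pool();

export async function findUserByEmail(email: string) {
  // Before (flagged): string interpolation builds the SQL, a classic injection risk
  // const result = await db.query(`SELECT id, email FROM users WHERE email = '${email}'`);

  // After: a parameterized query; the driver escapes the value
  const result = await db.query('SELECT id, email FROM users WHERE email = $1', [email]);
  return result.rows[0] ?? null; // explicit null also addresses the deleted-user finding
}
```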
You fix these before requesting human review. The human reviewer now focuses on business logic and architecture instead of finding basic issues.
Automated PR summaries
Large PRs are hard to review. Generate summaries:
Summarize this PR for reviewers:
@pull-request/diff
Generate:
1. One-sentence summary of what changed
2. List of components modified (3-5 bullet points)
3. Testing approach and coverage
4. Top 3 areas needing careful review
5. Potential risks or breaking changes
6. Suggested review order (which files to read first)
Keep it concise; reviewers are busy.
Cursor generates:
PR Summary: Add Two-Factor Authentication
What: Implements TOTP-based 2FA for user accounts
Components Changed:
- Auth service: TOTP generation and verification
- User model: Added secret key storage (encrypted)
- API: New endpoints for 2FA setup and verification
- Frontend: Setup flow and login with 2FA code
Testing:
- 12 new unit tests (95% coverage)
- 4 integration tests covering full flows
- Manual testing on iOS and Android
Review Focus:
- Secret key encryption (AuthService.js:45-67)
- Rate limiting on verification attempts (middleware/2fa.js:23)
- Backup codes generation (security critical)
Risks:
- Users without 2FA set up can still log in (backward compatible)
- Admin override procedure not implemented yet (follow-up ticket)
Review Order:
- Start with tests to understand requirements
- Review AuthService.js (core logic)
- Review API endpoints
- Review frontend integration
This turns a 30-minute "figure out what this PR does" exercise into a 2-minute overview, letting reviewers focus on substantive feedback.
CI/CD integration
Embed AI checks into your pipeline:
# .github/workflows/ai-review.yml
name: AI Code Review
on: [pull_request]
jobs:
  ai-review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: AI Security Scan
        run: |
          cursor-cli review \
            --prompt "Scan for security vulnerabilities: SQL injection, XSS, auth bypass, secrets exposure" \
            --pr ${{ github.event.number }} \
            --post-comment
      - name: AI Performance Check
        run: |
          cursor-cli analyze-performance \
            --pr ${{ github.event.number }} \
            --threshold "100ms p95" \
            --post-comment
AI posts review comments automatically. Humans review them and the broader context. This catches low-hanging fruit before human reviewers see the PR.
⚠️The human approval gate
Critical: AI should never auto-merge. It identifies issues and suggests improvements, but humans make merge decisions. Configure branch protection:
# .github/branch-protection.yml
required_reviews: 1
dismiss_stale_reviews: true
require_code_owner_reviews: true  # Domain experts approve changes
required_status_checks:
  - ai-security-scan       # Must pass
  - ai-performance-check   # Must pass
  - unit-tests             # Must pass
  - integration-tests      # Must pass
AI reviews augment human reviews; they don't replace them.
Maintaining Code Quality at Scale
Shipping features 3x faster is meaningless if quality degrades. The challenge for mid-level developers: how to maintain consistency when AI generates significant portions of the codebase.
The style drift problem
Without guardrails, AI suggestions drift from team conventions:
- Old JavaScript patterns creep into TypeScript code
- Inconsistent error handling (some functions throw, others return errors)
- Mixed naming conventions (camelCase vs snake_case)
- Different developers prompt differently, creating fragmented patterns
Solution 1: Project-specific `.cursorrules`
Define your conventions explicitly:
# .cursorrules for API Backend
## Technology Stack
- TypeScript 5.0 with strict mode
- Express.js for HTTP
- PostgreSQL with TypeORM
- Jest for testing
## Code Standards
### Naming Conventions
- Functions and variables: camelCase
- Classes and interfaces: PascalCase
- Constants: UPPER_SNAKE_CASE
- Database tables: snake_case
### Error Handling
- Use custom error classes (UserNotFoundError, ValidationError)
- Never throw generic Error
- Always include error codes for API responses
- Log errors with structured context (requestId, userId)
### API Response Format
All endpoints return:
```json
{
"success": boolean,
"data": object | null,
"error": { "code": string, "message": string } | null
}
```
### Testing Requirements
- Unit test coverage >80%
- Integration tests for all API endpoints
- Mock external services (Stripe, SendGrid)
- Test files colocated with source (same directory)
### Database Patterns
- Always use transactions for multi-step writes
- Parameterized queries only (prevent SQL injection)
- Include database indexes in migration files
- Soft delete by default (deleted_at column)
### Async/Await
- Always use async/await (never .then() chains)
- Wrap in try-catch for error handling
- Don't mix promises and async/await
### TypeScript
- Strict null checks enabled
- No 'any' type (use 'unknown' if type truly unknown)
- Define interfaces for all API request/response types
- Export types alongside implementations
## Security Requirements
- Rate limit all write endpoints (100 req/min per user)
- Validate all user input with Zod schemas
- Sanitize HTML content (use DOMPurify)
- Hash passwords with bcrypt (12 rounds)
- Use JWT with RS256 (not HS256)
- Set secure, httpOnly cookies
- Implement CSRF protection
## What to Never Do
- console.log in production code (use winston logger)
- Synchronous file operations
- SELECT * queries (list specific columns)
- Hardcoded secrets or configuration
- Catch errors without logging them
With this in place, every Cursor interaction follows these standards automatically. You don't repeat them in every prompt.
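Rules like the response format stick better when they are also encoded as types. A small sketch of what that might look like; the helper names are illustrative:

```typescript
// Response envelope matching the .cursorrules API format
interface ApiError { code: string; message: string; }

export interface ApiResponse<T> {
  success: boolean;
  data: T | null;
  error: ApiError | null;
}

// Helpers keep every endpoint's response shape identical
export function ok<T>(data: T): ApiResponse<T> {
  return { success: true, data, error: null };
}

export function fail(code: string, message: string): ApiResponse<never> {
  return { success: false, data: null, error: { code, message } };
}
```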
Solution 2: Pre-commit hooks + formatting
Enforce standards programmatically:
#!/bin/sh
# .husky/pre-commit

# Format code
npx prettier --write .

# Lint
npx eslint . --fix

# Type check
npx tsc --noEmit

# Run tests
npm test

# Check for common mistakes
node scripts/check-standards.js
The check-standards.js script catches AI-specific issues:
// scripts/check-standards.js
const fs = require('fs');
const glob = require('glob');

const errors = [];

// Check 1: No console.log in source files
glob.sync('src/**/*.ts').forEach(file => {
  const content = fs.readFileSync(file, 'utf8');
  if (content.includes('console.log')) {
    errors.push(`${file}: contains console.log (use logger instead)`);
  }
});

// Check 2: All API routes have rate limiting
glob.sync('src/api/**/*.ts').forEach(file => {
  const content = fs.readFileSync(file, 'utf8');
  if (content.includes('router.post') && !content.includes('rateLimiter')) {
    errors.push(`${file}: POST route missing rate limiting`);
  }
});

// Check 3: Database queries use parameters
glob.sync('src/**/*.ts').forEach(file => {
  const content = fs.readFileSync(file, 'utf8');
  const hasStringInterpolation = /query\(`[^`]*\$\{/.test(content);
  if (hasStringInterpolation) {
    errors.push(`${file}: SQL query uses string interpolation (injection risk)`);
  }
});

if (errors.length > 0) {
  console.error('❌ Standards violations found:\n');
  errors.forEach(err => console.error(`  - ${err}`));
  process.exit(1);
}
This catches common AI mistakes before code reaches PR review.
Solution 3: Regular architectural reviews
Monthly, audit AI-generated code for architectural drift:
Analyze changes from the last month:
@git-log/last-30-days
Identify:
1. Repeated patterns that should be abstracted
2. Architectural inconsistencies (some code uses pattern A, others pattern B)
3. Technical debt accumulation
4. Areas where AI suggestions diverge from our standards
For each issue, suggest a refactoring approach.
This prevents gradual decay, keeping the codebase coherent even as AI contributes heavily.
Prompt Recipes for Mid-Level Tasks
Recipe 1: Feature Implementation with Testing
Implement [feature name] following our architecture:
Context:
@existing-similar-feature (reference implementation)
@api-layer (for endpoint patterns)
@service-layer (for business logic patterns)
@data-layer (for database patterns)
Requirements:
[List specific requirements]
Architecture:
- Controller handles HTTP (validation, response formatting)
- Service contains business logic (no HTTP concepts)
- Repository handles database (no business logic)
Generate:
1. API endpoint with request/response types
2. Service layer with business logic
3. Repository layer with database queries
4. Comprehensive tests (unit + integration)
5. API documentation (OpenAPI format)
Follow patterns from @.cursorrules
Recipe 2: Refactor for Clean Code
Refactor this code for maintainability:
@[filename]
Apply:
- Single Responsibility Principle (extract functions doing >1 thing)
- Reduce cyclomatic complexity (break up complex conditionals)
- Improve naming (make intentions obvious)
- Extract magic numbers to named constants
- Remove duplication (DRY principle)
Constraints:
- Preserve existing functionality (no behavior changes)
- Keep existing tests passing
- Add comments only for non-obvious business logic
- Show before/after comparison for major changes
Generate characterization tests first if none exist.
Recipe 3: Performance Optimization
Optimize this code for performance:
@[filename]
Analysis needed:
1. Identify bottlenecks (loops, queries, API calls)
2. Measure current performance (add timing logs)
3. Propose optimizations with expected impact
Optimization techniques to consider:
- Database query optimization (indexes, reduce N+1)
- Caching (Redis, in-memory)
- Async/parallel processing
- Reduce unnecessary operations
- Algorithm optimization (time complexity)
For each optimization:
- Show before/after code
- Estimate performance improvement
- Explain tradeoffs (complexity, memory, maintainability)
Include benchmark tests to verify improvements.
Recipe 4: Add Comprehensive Error Handling
Improve error handling in this code:
@[filename]
Requirements:
- Wrap external calls (API, DB) in try-catch
- Use custom error classes from @errors/
- Include structured logging (requestId, userId, context)
- Return user-friendly messages (don't expose internals)
- Handle specific error types differently:
- Network errors → retry with backoff
- Validation errors → 400 with details
- Auth errors → 401/403
- Not found → 404
- Unexpected → 500 (log full details)
Add tests for each error scenario.
Recipe 5: Generate PR Summary
Generate a comprehensive PR description:
@pull-request/diff
Include:
1. **Summary**: One-sentence description of changes
2. **Motivation**: Why is this change needed?
3. **Changes**: Bullet list of what was modified
4. **Testing**: How was this tested?
5. **Screenshots**: (if UI changes)
6. **Migration**: Any database/config changes needed?
7. **Breaking Changes**: (if any)
8. **Review Focus**: Top 3 areas needing careful review
9. **Rollback Plan**: How to undo if issues arise
Format as markdown for easy copy-paste.
Real-World Scenario: Speeding Up a Feature Sprint
Context: Your team has a two-week sprint with four medium-complexity features. Historically, this would be tight—maybe 3 features done well or 4 done rushed.
Day 1: Feature 1 - Search Filtering
Traditional approach: 1.5 days
With Cursor: 4 hours
- 9:00 AM: Write spec (30 min)
- 9:30 AM: Get AI implementation plan, review, adjust (20 min)
- 9:50 AM: Generate API endpoint + service + tests (30 min)
- 10:20 AM: Review code, fix issues, run tests (45 min)
- 11:05 AM: Generate documentation (10 min)
- 11:15 AM: Manual testing (30 min)
- 11:45 AM: Create PR with AI summary (15 min)
- 12:00 PM: Code review (30 min async)
- 1:00 PM: Merge
Total: 4 hours (saved 8 hours)
Days 2-3: Feature 2 - Email Notifications
Traditional approach: 2 days
With Cursor: 6 hours
More complex due to email templating, scheduling, and retry logic. Used Cursor for boilerplate but spent more time on testing edge cases (network failures, template rendering).
Days 4-5: Feature 3 - Analytics Dashboard
Traditional approach: 2 days
With Cursor: 7 hours
Generated chart components, data aggregation, and API endpoints. Spent extra time optimizing queries identified during load testing.
Days 6-7: Feature 4 - Audit Logging
Traditional approach: 1.5 days
With Cursor: 5 hours
Cross-cutting concern touching many endpoints. Used Cursor to systematically add logging to all API routes following a consistent pattern.
Sprint result:
- All 4 features completed by Day 7
- High test coverage (88% average)
- Zero production bugs in first month
- Days 8-14 used for: refactoring technical debt, improving documentation, adding observability
Key insight: AI doesn't just make you faster—it gives you time back for quality improvements you'd normally skip under deadline pressure.
Key Takeaways
ℹ️Use Cursor for acceleration, not replacement
ℹ️Incremental implementation beats big-bang generation
ℹ️Context is everything
The @ symbol and .cursorrules transform generic AI into a tool that understands your codebase, follows your conventions, and produces native-feeling code.
ℹ️Pre-PR cleanup saves review time
ℹ️Quality guardrails must scale with velocity
ℹ️Documentation becomes effortless
The next chapter explores how senior engineers use Cursor as a force multiplier—orchestrating large-scale refactors, creating reusable prompt libraries, and ensuring AI-accelerated development doesn't compromise system resilience. The principles remain the same: AI handles execution, humans handle judgment. The scale just increases.