Chapter 14
Cursor in the SDLC
Software Development Life Cycle processes were designed for humans writing code manually. AI doesn't fit neatly—it transforms them. Understanding where AI adds value determines whether we amplify productivity or create new bottlenecks.
14.1 Where AI Fits in Each SDLC Stage
Planning & Requirements
With Cursor:
- Automated effort estimation from requirements
- Risk identification in proposed changes
- Requirements validation against existing architecture
- Technical feasibility analysis
Example:
Analyze this user story for complexity:
@specs/shopping-cart-checkout.md
Estimate:
- Story points (1-13 scale)
- Implementation risks
- Dependencies and blockers
- Task breakdown
Cursor output:
Complexity: 8 Story Points
Task Breakdown:
- Backend API (3 points): /checkout endpoint, Stripe integration
- Database (2 points): order_transactions table, inventory_locks
- Frontend (2 points): Checkout form, payment element
- Testing (1 point): Unit + integration + E2E
Risks:
HIGH: Race conditions in inventory locking
Mitigation: Database-level locks with timeout
MEDIUM: Payment failures during order creation
Mitigation: Saga pattern for rollback
LOW: Checkout form validation complexity
Mitigation: Use existing validation library
Dependencies:
- Stripe API keys (production environment)
- Inventory service deployment (v2.3+)
- Email service for order confirmations
Blockers:
None identified. Inventory service upgrade scheduled for Sprint 12.
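The HIGH-risk mitigation above (database-level locks with timeout) is, in production, a database concern, for example `SELECT ... FOR UPDATE` plus a statement timeout. The acquire/expire semantics can still be sketched in a few lines. Everything below is illustrative, not from any real codebase:

```javascript
// Illustrative lock-with-timeout for inventory rows.
// In production this logic belongs in the database, not app memory.
class InventoryLock {
  constructor() {
    this.locks = new Map(); // sku -> expiry timestamp (ms)
  }

  // Returns true if the lock was acquired, false if another
  // checkout currently holds an unexpired lock on this SKU.
  acquire(sku, ttlMs, now = Date.now()) {
    const expiry = this.locks.get(sku);
    if (expiry !== undefined && expiry > now) return false;
    this.locks.set(sku, now + ttlMs);
    return true;
  }

  release(sku) {
    this.locks.delete(sku);
  }
}

const lock = new InventoryLock();
console.log(lock.acquire("SKU-1", 5000, 0));    // true: first checkout wins
console.log(lock.acquire("SKU-1", 5000, 1000)); // false: still held
console.log(lock.acquire("SKU-1", 5000, 6000)); // true: previous lock expired
```

The timeout is what turns a crash mid-checkout from a permanently stuck SKU into a few seconds of unavailability.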
Design & Architecture
With Cursor:
- Generate system diagrams
- Propose API contracts
- Design database schemas
- Suggest design patterns
Example:
Design architecture for multi-tenant SaaS platform:
Requirements:
- Tenant data isolation
- Shared infrastructure (cost efficiency)
- Per-tenant customization
- 99.9% uptime SLA
Provide:
- Database schema approach
- API authentication strategy
- Deployment architecture
- Scaling considerations
Cursor generates comprehensive architectural proposals including trade-offs, enabling informed design decisions before writing code.
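One common answer to the schema question is shared-schema multi-tenancy: every table carries a `tenant_id` column, and all reads go through a helper that appends the isolation predicate so no call site can forget it. A minimal sketch; the names and SQL shape are assumptions:

```javascript
// Shared-schema multi-tenancy: force every query through a tenant-scoped
// helper so a missing "WHERE tenant_id = ?" can't leak data across tenants.
// Sketch only: assumes the base SQL has no ORDER BY / LIMIT after the WHERE.
function scopedQuery(tenantId, baseSql, params = []) {
  if (!tenantId) throw new Error("tenantId is required for all queries");
  const sql = baseSql.includes("WHERE")
    ? `${baseSql} AND tenant_id = $${params.length + 1}`
    : `${baseSql} WHERE tenant_id = $${params.length + 1}`;
  return { sql, params: [...params, tenantId] };
}

const q = scopedQuery("t_42", "SELECT * FROM orders WHERE status = $1", ["paid"]);
// q.sql    -> "SELECT * FROM orders WHERE status = $1 AND tenant_id = $2"
// q.params -> ["paid", "t_42"]
```

Throwing on a missing tenant ID makes the failure mode loud at development time rather than silent in production.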
Implementation
This is Cursor's strongest stage: code generation, intelligent autocomplete, refactoring, and bug fixing.
Critical Workflows:
- Feature scaffolding with existing patterns
- Boilerplate generation (CRUD, API routes, models)
- Test-driven development (write tests, generate implementation)
- Refactoring with behavior preservation
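The test-driven workflow in the list above, in miniature: the assertions are written first, then the implementation is generated to satisfy them. `slugify` here is a hypothetical example function, not from any codebase in this book:

```javascript
// Test-first in miniature: the assertions at the bottom were written
// before the implementation, which was then generated to pass them.
function slugify(title) {
  return title
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9]+/g, "-") // collapse runs of non-alphanumerics to hyphens
    .replace(/^-+|-+$/g, "");    // strip leading/trailing hyphens
}

// The "spec" the generated implementation had to pass:
console.assert(slugify("Hello, World!") === "hello-world");
console.assert(slugify("  Price Drop: 10% Off  ") === "price-drop-10-off");
```

Writing the assertions yourself keeps the behavioral contract human-owned even when the implementation is AI-generated.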
Testing & QA
Covered in Chapter 12. Key capabilities:
- Automated test generation (unit, integration, E2E)
- Coverage analysis and gap identification
- Regression suite maintenance
- Flaky test diagnosis and fixing
Deployment & Release
With Cursor:
Generate CI/CD configs, deployment scripts, rollback procedures, and release notes.
Example:
Generate Kubernetes deployment config:
Target: AWS EKS
Requirements:
- Blue-green deployment
- Health checks (readiness + liveness)
- Auto-scaling (2-10 pods, CPU-based)
- Gradual rollout (10% → 50% → 100%)
- Resource limits (memory: 512Mi-1Gi, CPU: 500m-1000m)
Cursor generates production-ready YAML configurations in minutes, including proper health checks, resource limits, and progressive rollout strategies.
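A trimmed sketch of the kind of YAML this prompt yields, covering the probes, resource limits, and CPU-based autoscaling. Note that a true blue-green or 10% → 50% → 100% progressive rollout typically requires a controller such as Argo Rollouts or Flagger on top of plain Deployments; all names and images below are illustrative:

```yaml
# Sketch only: probes, resource limits, and an HPA matching the prompt above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: checkout-api
spec:
  replicas: 2
  selector:
    matchLabels: { app: checkout-api }
  template:
    metadata:
      labels: { app: checkout-api }
    spec:
      containers:
        - name: checkout-api
          image: registry.example.com/checkout-api:1.0.0
          resources:
            requests: { memory: 512Mi, cpu: 500m }
            limits: { memory: 1Gi, cpu: 1000m }
          readinessProbe:
            httpGet: { path: /healthz/ready, port: 8080 }
          livenessProbe:
            httpGet: { path: /healthz/live, port: 8080 }
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: checkout-api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: checkout-api
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target: { type: Utilization, averageUtilization: 70 }
```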
Maintenance & Monitoring
With Cursor:
Bug triage, root cause analysis, hotfix generation, and log analysis.
Example incident response:
Analyze this production error:
[ERROR] TypeError: Cannot read property 'id' of undefined
at UserService.getProfile (line 45)
at processRequest (line 112)
Occurrences: 1,247/hour
Affected users: 89
Started: 2025-10-15 14:23 UTC
Context: @src/services/UserService.js
Provide:
1. Root cause
2. Immediate hotfix
3. Long-term fix
4. Prevention strategy
Cursor identifies the issue (missing null check after database query), generates a hotfix, and suggests architectural improvements to prevent similar issues.
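The hotfix for this class of failure is a guard after the query. The sketch below assumes a data layer whose `findById` resolves to `undefined` on a miss; only the function name comes from the stack trace, the body is hypothetical:

```javascript
// Hypothetical sketch of the failing path and the hotfix.
// db.findById returning undefined (e.g. a deleted user) is what produced
// "Cannot read property 'id' of undefined" in getProfile.
async function getProfile(userId, db) {
  const user = await db.findById(userId);

  // Hotfix: guard the miss instead of assuming the row exists.
  if (!user) {
    const err = new Error(`User ${userId} not found`);
    err.statusCode = 404; // long-term: a typed NotFoundError handled centrally
    throw err;
  }

  return { id: user.id, name: user.name };
}

// Usage with a stubbed data layer:
const db = { findById: async (id) => (id === "u1" ? { id: "u1", name: "Ada" } : undefined) };
getProfile("u2", db).catch((e) => console.log(e.statusCode)); // 404
```

The long-term fix is making "not found" a first-class result at the service boundary so callers can never dereference a missing row.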
14.2 Example: AI-First Sprint from Grooming to Deployment
Let's walk through a complete two-week sprint leveraging Cursor at every stage.
Sprint Setup
- Team: 6 developers, 1 QA, 1 PM
- Sprint Goal: Launch price drop notifications feature
- Sprint Length: 2 weeks
Day -1: Backlog Grooming
PM shares rough concept: "Users want notifications when favorited products go on sale."
Prompt:
Expand into detailed user story:
"Users want notifications when products they favorited go on sale"
Context:
@src/models/User.js
@src/services/NotificationService.js
@docs/architecture.md
Generate INVEST story with acceptance criteria
Cursor output:
User Story: Price Drop Notifications
As a user who has favorited products
I want email notifications when they go on sale
So that I can purchase at the best price
Acceptance Criteria:
- Users can enable/disable in notification settings
- Daily price checks for all favorited items
- Email sent when price drops ≥10% or ≥$5
- Maximum 1 email per product per week
- Respects user's notification preferences
Technical Requirements:
- Database: Add price_alerts table
- Background job: Daily price check (2 AM UTC)
- Email template: price-drop notification
- API: Add /notifications/price-alerts endpoints
- Testing: Unit + integration + E2E
Estimated: 5 Story Points
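The trigger rule in the acceptance criteria ("drop ≥10% or ≥$5") reduces to a small pure predicate, the kind of function worth pinning down before generating the background job. A sketch with hypothetical names:

```javascript
// The acceptance criteria's trigger rule as a pure predicate:
// notify when the price has dropped by at least 10% OR at least $5.
function shouldNotifyPriceDrop(priceWhenFavorited, currentPrice) {
  const drop = priceWhenFavorited - currentPrice;
  if (drop <= 0) return false; // price rose or unchanged
  const pctDrop = drop / priceWhenFavorited;
  return pctDrop >= 0.10 || drop >= 5;
}

console.log(shouldNotifyPriceDrop(100, 89)); // true  (11% drop)
console.log(shouldNotifyPriceDrop(100, 96)); // false (4% and $4)
console.log(shouldNotifyPriceDrop(20, 15));  // true  ($5 drop)
```

Keeping the rule pure makes it trivial to unit test and to reuse in both the daily job and any future real-time path.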
Team reviews, refines, and commits to sprint.
Days 1-7: Development Sprint
Day 1: Database & Models
Implement price drop notification system:
@specs/price-drop-notifications.md
Start with migrations and models:
- price_alerts table
- associations with User and Product
- validation rules
Result: Cursor generates migrations, models, and tests. Time: 2 hours instead of 6.
Day 2-3: Background Job
Create daily price check job:
@specs/price-drop-notifications.md
@src/models/PriceAlert.js
Requirements:
- Check all active price alerts
- Compare current price with price when favorited
- Trigger notification if drop ≥10% or ≥$5
- Rate limit: max 1 email per product per week
- Handle errors gracefully (log, don't crash)
- Add monitoring metrics
Day 4-5: API Endpoints
Create API endpoints for price alerts:
- GET /api/price-alerts - List user's active alerts
- POST /api/price-alerts - Create alert for product
- DELETE /api/price-alerts/:id - Remove alert
- PUT /api/settings/notifications - Enable/disable feature
Include: Authentication middleware, Input validation (Zod schemas), Rate limiting, OpenAPI documentation, Unit tests
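As a sketch of the input validation step, here is a dependency-free equivalent of what a Zod schema for POST /api/price-alerts might enforce. The field names (`productId`, `thresholdPct`) and the default threshold are assumptions for illustration, not from the spec:

```javascript
// Dependency-free sketch of the POST /api/price-alerts body validation.
// The real implementation would express the same rules as a Zod schema.
function validateCreateAlert(body) {
  const errors = [];
  if (typeof body.productId !== "string" || body.productId.length === 0) {
    errors.push("productId must be a non-empty string");
  }
  if (body.thresholdPct !== undefined &&
      (typeof body.thresholdPct !== "number" ||
       body.thresholdPct <= 0 || body.thresholdPct > 100)) {
    errors.push("thresholdPct must be a number in (0, 100]");
  }
  return errors.length === 0
    ? { ok: true, data: { productId: body.productId, thresholdPct: body.thresholdPct ?? 10 } }
    : { ok: false, errors };
}

console.log(validateCreateAlert({ productId: "p_123" }).ok); // true
console.log(validateCreateAlert({ productId: "" }).ok);      // false
```

Returning a result object rather than throwing lets the route handler map validation failures to a 400 response with the error list in the body.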
Day 6-7: Frontend Integration
Add price alert UI to product pages:
@components/ProductCard.jsx
@components/NotificationSettings.jsx
Features:
- "Notify me" button on product cards
- Shows if alert already set
- Settings toggle for price alerts
- Visual feedback on save/error
Use existing design system components
Day 14: Retrospective
Sprint Metrics:
- Velocity: 42 points completed (40 planned)
- AI Contribution: 38% of code
- Cycle Time: 2.1 days/story (vs 3.5 baseline)
- Defects: 3 bugs found (vs 7 baseline)
- Test Coverage: 89% (vs 75% baseline)
Team Insights:
- ✅ Cursor dramatically sped up boilerplate and tests
- ✅ Spec-to-code workflow reduced interpretation gaps
- ⚠️ Junior dev over-relied; produced inefficient code
- 🎯 Action: Build prompt library for next sprint
14.3 Governance & Compliance
Essential Governance Framework
1. Policy Definition
# AI-Assisted Development Policy
## Approved Use Cases
✅ Code generation for well-defined requirements
✅ Test generation and coverage analysis
✅ Refactoring and documentation
✅ Boilerplate and scaffolding
## Restricted (Requires Senior Review)
⚠️ Security-sensitive code (auth, crypto, permissions)
⚠️ Data processing with PII
⚠️ Financial calculations
⚠️ Database migrations on production schemas
## Prohibited
❌ Generating code from unlicensed sources
❌ Processing customer data without consent
❌ Bypassing code review
❌ Auto-deploying AI code to production
2. Audit and Traceability
Git commit hook:
#!/bin/bash
# .git/hooks/prepare-commit-msg
# Append AI metadata to commits
cat >> "$1" <<EOF
---
AI Contribution:
- Tool: Cursor v0.42
- Model: Claude Sonnet 4.5
- Prompt ID: ${CURSOR_PROMPT_ID}
- Reviewed by: ${REVIEWER}
- Generated: $(date -u +"%Y-%m-%d %H:%M:%S UTC")
EOF
3. Security Controls
Enhanced scanning for AI code:
# .github/workflows/ai-security-scan.yml
name: AI Code Security Scan
on: [pull_request]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Identify AI-generated code
        run: |
          git diff --name-only | xargs grep -l "ai-generated" > ai-files.txt
      - name: Initialize CodeQL with extended queries
        uses: github/codeql-action/init@v2
        with:
          queries: security-extended
      - name: Run enhanced security scan
        uses: github/codeql-action/analyze@v2
      - name: Verify senior review for sensitive code
        run: node scripts/verify-review-requirements.js
      - name: Check for secrets
        uses: trufflesecurity/trufflehog@main
4. Compliance for Regulated Industries
Healthcare (HIPAA):
- ☐ Privacy mode enabled (no code retention by AI)
- ☐ No PHI in prompts or context
- ☐ Privacy officer review completed
- ☐ Audit trail maintained
- ☐ Access controls documented
- ☐ Encryption verified
Financial (SOX):
Change Control Process:
- Developer generates code with Cursor
- Peer developer reviews logic
- Senior engineer reviews compliance impact
- QA validates with test data
- Finance team approves (if financial impact)
- Compliance officer signs off
- Deploy with documented rollback plan
Governance Checklist
Week 1: Foundation
- ☐ Create AI usage policy document
- ☐ Define approved/restricted/prohibited use cases
- ☐ Set up PR template with AI involvement section
- ☐ Enable security scanning in CI
- ☐ Add mandatory test coverage checks
Month 1: Implementation
- ☐ Create prompt library with owners
- ☐ Enable audit logging (prompt inputs/outputs)
- ☐ Define reviewer roles per artifact type
- ☐ Implement PII redaction rules
- ☐ Set up governance dashboard
Quarter 1: Maturation
- ☐ Train team on AI policy and best practices
- ☐ Run monthly prompt audits
- ☐ Analyze AI vs human defect rates
- ☐ Refine policies based on learnings
- ☐ Establish quarterly risk reviews
Key Takeaways
AI Transforms Every Stage
From planning to maintenance, AI adds value across the SDLC—but implementation and testing see the biggest gains.
Governance is Non-Negotiable
Speed without safety creates liability. Establish policies, audit trails, and compliance checks from day one.
Human Oversight Scales Differently
Automate junior tasks heavily, keep senior decisions human-led. The review bottleneck shifts but doesn't disappear.
Measure What Matters
Track velocity AND quality. Fast but broken is worse than slow but stable.
Start Small, Scale Intentionally
Pilot with one team, measure impact, establish governance, then expand. Rushing adoption creates chaos.