Chapter 6
Level 3: Senior Developer
The director of engineering calls you into a meeting. "We need to migrate fifteen microservices from Node 14 to Node 20. Security audit flagged it as critical. How long will it take?"
Traditional answer: "Three months minimum. Each service needs dependency updates, breaking changes fixed, tests validated, deployment pipelines updated. If we rush, we'll break production."
With Cursor: "Two weeks for the migration with proper testing. I'll orchestrate it."
You're not claiming you can type 6x faster. You're recognizing that most migration work is mechanical—update package.json, adjust deprecated APIs, regenerate lock files, update CI configs. AI handles mechanical work quickly and consistently. Your role is orchestration: defining the strategy, establishing quality gates, catching the edge cases AI misses, and ensuring nothing breaks.
This is the senior developer's superpower with AI: turning individual productivity gains into organizational capability. You become a force multiplier—one senior with AI can produce output equivalent to a small team while maintaining higher consistency.
Leading Delivery Through Strategic Orchestration
At the senior level, the shift is from implementation to coordination. You're no longer measured by how much code you write but by how effectively you unblock systems, teams, and architecture. AI amplifies this by handling execution while you focus on strategy.
System-wide coordination patterns
Large-scale changes require careful orchestration. Here's how AI changes the approach:
Traditional monolith-to-microservices migration:
- Month 1: Design service boundaries, document APIs
- Month 2-4: Extract services one by one
- Month 5: Integration testing
- Month 6: Gradual rollout
- Bottleneck: Actually writing the migration code
AI-assisted migration:
- Week 1: Design service boundaries, define contracts
- Week 2: AI generates service scaffolding for all services
- Week 3: Review generated services, adjust business logic
- Week 4: Integration testing and deployment
You didn't write 10,000 lines of boilerplate. You designed the architecture, specified requirements, and validated correctness. AI handled execution.
Example: Framework upgrade across 12 microservices
You need to upgrade React 17 → React 18 across twelve frontend microservices. Traditional approach means visiting each repo individually, fixing breaking changes, testing, debugging subtle issues. With AI orchestration:
Analyze upgrade impact across all frontend services:
Services: @frontend/user-dashboard, @frontend/admin-panel,
@frontend/checkout, @frontend/analytics, [... 8 more]
For React 17 → 18 upgrade, identify:
1. Breaking changes affecting each service
2. Dependencies needing updates
3. Code patterns requiring changes
4. Testing strategy per service
5. Rollout sequence (dependencies between services)
Generate a comprehensive migration plan.

Cursor analyzes all twelve services and generates a structured plan:
React 18 Migration Plan
Impact Summary
- 8 services: Low impact (no breaking changes)
- 3 services: Medium impact (hydration changes needed)
- 1 service: High impact (uses deprecated ReactDOM.render)
Migration Sequence
- Low-impact services (parallel): user-profile, settings, notifications, help-docs, search, analytics, reports, feedback
- Medium-impact services (after low): user-dashboard, checkout, admin-panel
- High-impact service (last): legacy-reports
Breaking Changes by Service
legacy-reports (HIGH IMPACT)
- Line 45: ReactDOM.render → createRoot
- Line 89: Deprecated componentWillMount → useEffect
- 23 warnings about strict mode
user-dashboard (MEDIUM IMPACT)
- Hydration mismatches in SSR (12 instances)
- useEffect infinite loop (pages/Dashboard.jsx:67)
[... continues for each service]
Dependencies to Update
- react: 17.0.2 → 18.2.0
- react-dom: 17.0.2 → 18.2.0
- @types/react: 17.0.43 → 18.2.15
- @testing-library/react: 12.1.4 → 14.0.0
Migration Steps
- Update dependencies (all services)
- Run codemod for API changes
- Fix type errors
- Update tests
- Validate in staging
- Canary deploy (10% traffic)
- Full rollout
Now execute systematically:
Generate a codemod for React 18 migration:
Transformations needed:
1. ReactDOM.render(app, container) →
const root = ReactDOM.createRoot(container); root.render(app)
2. ReactDOM.hydrate(app, container) →
const root = ReactDOM.hydrateRoot(container, app)
3. Remove componentWillMount, replace with useEffect
4. Update Suspense fallback types
5. Fix StrictMode warnings
Output: JSCodeshift codemod following patterns in @codemods/
Include: before/after examples, testing instructions

Cursor generates a codemod you test on one service, validate, then apply systematically across all twelve. What would have taken six weeks of manual work happens in one week with higher consistency—every service follows identical patterns.
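As a concrete illustration, the first transformation might come back as a jscodeshift codemod along these lines. This is a minimal sketch: the file name is an assumption, and a real codemod would also need to handle `hydrate`, render callbacks, and imports.

```js
// codemods/react18-create-root.js (illustrative file name)
// Rewrites ReactDOM.render(app, container) into
// ReactDOM.createRoot(container).render(app)
module.exports = function transformer(file, api) {
  const j = api.jscodeshift;
  const root = j(file.source);

  root
    .find(j.CallExpression, {
      callee: {
        type: 'MemberExpression',
        object: { name: 'ReactDOM' },
        property: { name: 'render' },
      },
    })
    .replaceWith((path) => {
      const [app, container] = path.node.arguments;
      // Build: ReactDOM.createRoot(container).render(app)
      return j.callExpression(
        j.memberExpression(
          j.callExpression(
            j.memberExpression(j.identifier('ReactDOM'), j.identifier('createRoot')),
            [container]
          ),
          j.identifier('render')
        ),
        [app]
      );
    });

  return root.toSource({ quote: 'single' });
};
```

Running it with `jscodeshift -t codemods/react18-create-root.js src/` on one low-impact service first gives you a cheap validation loop before touching the other eleven.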
✅ The new senior role
Team force multiplication
Studies show senior developers with AI achieve 30-50% productivity gains individually. But the real multiplier comes from enabling others:
- Create prompt libraries that junior/mid developers can use
- Establish `.cursorrules` that enforce architectural patterns
- Build codemods that solve entire categories of problems
- Generate comprehensive documentation that onboards new team members
One senior who creates a good prompt library can double the productivity of ten developers. That's ten developers' worth of added capacity from one person's work, not a 1.5x personal productivity gain.
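To make the `.cursorrules` lever concrete, here's a sketch of what a team-level rules file might contain. The specific conventions are illustrative placeholders, not a standard:

```
# .cursorrules (illustrative sketch)
You are working in a TypeScript microservices monorepo.

- All external I/O goes through the clients/ layer; never call fetch directly in handlers.
- Every new endpoint requires input validation and an integration test.
- Use the shared logger from @lib/logging; no console.log in committed code.
- Errors must extend AppError and carry a machine-readable error code.
- Prefer small, composable functions; flag any file over 300 lines for review.
```

Once committed to the repo, every AI-assisted change any developer makes is nudged toward the same architectural patterns, without a senior reviewing each prompt.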
Using AI for System-Wide Impact
Large-scale refactors and upgrades—traditionally slow and error-prone—become tractable when you combine AI execution with senior engineering judgment.
High-leverage use cases
1. Dependency upgrades at scale
Analyze dependency risk across all services:
Repos: [list 20+ microservices]
Identify:
- Outdated dependencies (>6 months behind latest)
- Security vulnerabilities (npm audit / Snyk)
- Breaking changes in major version updates
- Dependencies blocking other upgrades
- Circular dependency issues
Prioritize by: security risk > breaking change complexity > effort
Generate upgrade plan with estimated effort per service.

Cursor produces a prioritized list:
HIGH PRIORITY (Security)
1. axios 0.21.1 → 1.6.2 (CVE-2023-45857)
Affected: 12 services
Effort: Low (backward compatible)
2. jsonwebtoken 8.5.1 → 9.0.2 (CVE-2022-23529)
Affected: 8 services
Effort: Medium (signature verification changes)
MEDIUM PRIORITY (Major versions)
3. TypeScript 4.9 → 5.3
Affected: All services
Effort: High (enum changes, type narrowing)
4. Jest 27 → 29
Affected: All services
Effort: Medium (snapshot updates needed)
Now execute upgrades systematically:
Generate migration guide for jsonwebtoken 8.5.1 → 9.0.2:
Services affected: @auth-service, @api-gateway, @admin-api,
@user-service, @payment-service, @notification-service,
@analytics-service, @reporting-service
For each service:
1. Identify all JWT sign/verify calls
2. Show required code changes
3. Generate migration tests
4. Update CI/CD to use new version
5. Document rollback procedure
Create a checklist PR template for each service.

2. Framework migrations
Migrating from one framework to another across multiple services:
Plan migration: Express.js → Fastify for API services
Services: @api-gateway, @user-api, @order-api, @payment-api
Analysis needed:
1. Express middleware → Fastify plugins (equivalents?)
2. Request/response API differences
3. Error handling pattern changes
4. Testing approach differences
5. Performance benchmarks (baseline vs migrated)
Generate:
- Side-by-side comparison of common patterns
- Migration checklist per service
- Adapter layer for gradual migration
- Integration test suite

Cursor generates a comprehensive migration guide including an adapter pattern that lets you migrate incrementally:
// Adapter: Run Express middleware in Fastify
function expressToFastify(expressMiddleware) {
  return async (request, reply) => {
    // Translate Fastify request/reply into Express format.
    // createExpressRequest / createExpressResponse are companion helpers
    // from the generated adapter layer (not shown here).
    const req = createExpressRequest(request);
    const res = createExpressResponse(reply);
    await new Promise((resolve, reject) => {
      expressMiddleware(req, res, (err) => {
        if (err) reject(err);
        else resolve();
      });
    });
  };
}

// Use existing Express auth middleware in Fastify during migration
fastify.addHook('onRequest', expressToFastify(authMiddleware));

This lets you migrate one route at a time, not entire services atomically—dramatically reducing risk.
3. Architecture evolution
Breaking down monoliths or evolving system architecture:
Design service extraction plan:
Current: Monolith @legacy-api (47,000 lines)
Target: Extract user management into @user-service
Analysis:
1. Identify all user-related code (models, routes, business logic)
2. Find dependencies (what else uses user code?)
3. Design API contract for new service
4. Plan data migration strategy (shared DB vs separate?)
5. Identify shared utilities to extract first
6. Generate strangler fig pattern implementation
Create:
- Service boundaries diagram
- API contract (OpenAPI)
- Database migration plan
- Rollout strategy (feature flags, traffic splitting)
- Rollback procedure

The strangler fig pattern lets you gradually route requests to the new service while keeping the monolith as a fallback—AI generates both the new service and the routing logic.
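To make the routing half concrete, here is a minimal sketch of strangler fig routing inside the monolith, assuming Express with http-proxy-middleware; the `isFlagEnabled` feature-flag helper and `USER_SERVICE_URL` variable are hypothetical:

```js
const express = require('express');
const { createProxyMiddleware } = require('http-proxy-middleware');

const app = express();

// Proxy to the extracted service (URL is an assumed environment variable)
const userServiceProxy = createProxyMiddleware({
  target: process.env.USER_SERVICE_URL, // e.g. http://user-service:8080
  changeOrigin: true,
});

// Strangler fig: send extracted routes to the new service behind a flag,
// and fall through to the monolith's existing handlers otherwise.
app.use('/api/users', (req, res, next) => {
  if (isFlagEnabled('user-service-extraction', req)) { // hypothetical flag helper
    return userServiceProxy(req, res, next);
  }
  next(); // the monolith keeps handling the route until the flag flips
});
```

Because the flag is evaluated per request, rollback is instant: flip the flag off and traffic returns to the monolith with no deployment.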
Execution playbook for system-wide changes
Phase 1: Discovery and mapping (1-2 days)
- Inventory all affected services/components
- Identify dependencies and integration points
- Map data flows and API contracts
- Assess test coverage and quality gates
Phase 2: Design and ADR (1-2 days)
- Write Architecture Decision Record (ADR) documenting approach
- Define success metrics and rollback criteria
- Design migration strategy (big-bang vs incremental)
- Establish monitoring and observability requirements
Phase 3: Generate and validate (3-5 days)
- Use AI to generate codemods, adapters, and migration code
- Test on one representative service thoroughly
- Validate performance impact (benchmarks)
- Run security scans
- Get senior/principal review
Phase 4: Execute incrementally (1-2 weeks)
- Roll out in waves (low-risk services first)
- Monitor metrics at each wave (error rates, latency, throughput)
- Use feature flags for instant rollback
- Collect feedback and adjust approach
- Document learnings for subsequent waves
Phase 5: Validation and cleanup (3-5 days)
- Run full integration test suite
- Conduct load testing
- Security audit
- Update documentation
- Post-migration retrospective
This structured approach, combined with AI execution, reduces multi-month projects to weeks while maintaining quality and safety.
Creating Reusable Prompt Libraries for Teams
Senior engineers scale their expertise by turning knowledge into reusable assets. With AI, this means building prompt libraries that encode your experience and architectural decisions into templates others can use.
Prompt library structure
/team-prompts/
├── README.md (usage guide, conventions)
├── architecture/
│ ├── adr-generator.md
│ ├── service-extractor.md
│ ├── migration-planner.md
│ └── api-design.md
├── refactoring/
│ ├── extract-service.md
│ ├── optimize-queries.md
│ ├── add-resilience.md
│ └── modernize-async.md
├── security/
│ ├── audit-auth.md
│ ├── add-rate-limiting.md
│ ├── fix-injection.md
│ └── secure-api.md
├── observability/
│ ├── add-logging.md
│ ├── add-metrics.md
│ ├── add-tracing.md
│ └── create-dashboard.md
└── metadata.json (tracking usage, owners, versions)

Example: ADR Generator Prompt
# Generate Architecture Decision Record (ADR)
## Purpose
Creates comprehensive ADR for major architectural decisions
## Usage
Generate ADR for [decision]:
Context: @[relevant-docs] @[existing-architecture]
Decision: [What we're choosing to do]
Stakeholders: [Who's involved/affected]
## Template
The prompt will generate:
**Status**: Proposed | Accepted | Deprecated | Superseded
**Context**
- What's the situation?
- What forces are at play?
- What constraints exist?
**Decision**
- What did we choose and why?
- What's the high-level approach?
**Considered Alternatives**
For each alternative:
- Description
- Pros
- Cons
- Why rejected
**Consequences**
- What becomes easier?
- What becomes harder?
- What new capabilities emerge?
- What technical debt are we accepting?
**Implementation Plan**
1. Phase 1: [steps]
2. Phase 2: [steps]
3. Success metrics
4. Timeline estimate
**Risks and Mitigations**
- Risk 1: [description] → Mitigation: [approach]
- Risk 2: [description] → Mitigation: [approach]
**References**
- Related ADRs
- Documentation
- External resources
## Example
Generate ADR for: Move from REST to GraphQL for our API
Context: @docs/api-architecture.md @services/
Decision: Migrate public API to GraphQL while maintaining REST for internal services
Stakeholders: Frontend team, Mobile team, API team, External partners
## Version
v2.1 - Last updated: 2025-10-15
Owner: @senior-backend-team

Example: Resilience Refactoring Prompt
# Add Resilience Patterns to Service
## Purpose
Systematically adds circuit breakers, retries, timeouts, and graceful degradation
## Usage
Refactor @[service] to add resilience:
Focus on:
- External API calls
- Database queries
- Message queue operations
- File system access
Add patterns:
1. Circuit breakers (fail after N consecutive failures)
2. Retry with exponential backoff
3. Timeouts for all I/O operations
4. Graceful degradation (fallbacks)
5. Health checks
6. Observability (metrics, logging, tracing)
Follow patterns from @services/resilient-example
## Generated Artifacts
- Circuit breaker wrapper for API calls
- Retry decorator with backoff
- Timeout configuration
- Fallback implementations
- Health check endpoints
- Monitoring dashboard config
- Tests for failure scenarios
## Quality Checklist
After generation, verify:
- [ ] Circuit breaker thresholds reasonable (failure rate, timeout)
- [ ] Retry policy appropriate (max attempts, backoff strategy)
- [ ] Timeouts set per operation (not global)
- [ ] Fallback behavior documented
- [ ] Metrics collected for all failure modes
- [ ] Tests cover: success, timeout, retry, circuit open
## Example
Refactor @payment-service to add resilience:
External dependencies:
- Stripe API (payment processing)
- Fraud detection service
- Email notifications (SendGrid)
- Database (PostgreSQL)
Current issues:
- No retries on transient failures
- Stripe timeout crashes entire checkout
- No circuit breaker (cascading failures)
Expected improvements:
- 99.9% uptime (current: 97.2%)
- Graceful degradation when Stripe slow
- Fraud check failures don't block orders
## Version
v3.0 - Last updated: 2025-10-12
Owner: @platform-team

Prompt library governance
Ownership and maintenance
- Each prompt has a designated owner (responsible for updates)
- Quarterly review cycle (mark stale prompts for update/retirement)
- Usage analytics tracked (which prompts are most valuable)
- Version control (track changes, rollback if needed)
Quality standards
# .prompt-standards.yml
required_sections:
  - purpose
  - usage_example
  - expected_output
  - quality_checklist
  - version
  - owner

review_process:
  new_prompts:
    - author_test: "Must test on 3 real scenarios"
    - peer_review: "2 senior approvals required"
    - documentation: "Complete usage guide"
  updates:
    - backward_compatibility: "Flag breaking changes"
    - migration_guide: "If behavior changes significantly"
    - version_bump: "Semantic versioning"

retirement_criteria:
  - unused_for: "90 days"
  - replaced_by: "Better prompt exists"
  - deprecated_tech: "Targets obsolete framework"

Measuring prompt effectiveness
// scripts/prompt-analytics.js
const promptUsage = {
  'adr-generator': {
    usageCount: 47,
    avgRating: 4.8,
    avgTimeSaved: '45 minutes',
    successRate: '92%', // Output accepted without major changes
    lastUsed: '2025-10-14',
    owner: '@senior-team'
  },
  'add-resilience': {
    usageCount: 23,
    avgRating: 4.6,
    avgTimeSaved: '2 hours',
    successRate: '87%',
    lastUsed: '2025-10-15',
    owner: '@platform-team'
  }
};

// Monthly report (generateReport, parseTimeSaved, daysSince are assumed helpers)
const entries = Object.entries(promptUsage);
generateReport({
  topPrompts: entries
    .sort(([, a], [, b]) => b.usageCount - a.usageCount)
    .slice(0, 10),
  timeSaved: entries.reduce(
    (total, [, p]) => total + parseTimeSaved(p.avgTimeSaved) * p.usageCount, 0
  ),
  lowPerformers: entries.filter(([, p]) => parseInt(p.successRate, 10) < 70),
  stalePrompts: entries.filter(([, p]) => daysSince(p.lastUsed) > 90)
});

Tracking metrics shows which prompts deliver value and which need improvement or retirement.
Prompt Recipes for Senior-Level Tasks
Recipe 1: System-Wide Migration Planner
Plan migration: [source tech] → [target tech] across services
Services: [list all affected services]
Generate comprehensive plan:
1. **Impact Analysis**
- Breaking changes per service
- Risk level (low/medium/high)
- Dependencies between services
- Shared libraries needing updates
2. **Migration Strategy**
- Big-bang vs incremental
- Service migration sequence
- Rollback approach at each phase
- Feature flags / traffic splitting
3. **Technical Approach**
- Code changes required
- Codemods / automation opportunities
- Database schema changes
- Configuration updates
4. **Testing Strategy**
- Unit test updates
- Integration test additions
- Performance benchmarks
- Security scans
5. **Rollout Plan**
- Phase 1: [services] (weeks 1-2)
- Phase 2: [services] (weeks 3-4)
- Success criteria per phase
- Monitoring / alerting
6. **Risk Mitigation**
- Top 5 risks with mitigations
- Rollback procedures
- Communication plan
Deliverable: Markdown document + checklist per service

Recipe 2: Extract Microservice from Monolith
Design service extraction plan:
Monolith: @[monolith-repo]
Target service: [service-name] for [domain]
Analysis:
1. Identify all code related to [domain]
- Models / data access
- Business logic
- API routes
- Background jobs
- Tests
2. Map dependencies
- What depends on this code?
- What does this code depend on?
- Shared utilities to extract first
- Database tables to migrate
3. Design API contract
- REST/GraphQL endpoints
- Request/response formats
- Error handling
- Authentication/authorization
4. Data migration strategy
- Separate database vs shared
- Migration scripts
- Zero-downtime approach
- Data consistency guarantees
5. Strangler fig implementation
- Routing layer (proxy pattern)
- Feature flags
- Gradual traffic shift
- Rollback procedure
Generate:
- Service scaffolding
- API contract (OpenAPI)
- Migration scripts
- Routing/proxy logic
- Tests (unit + integration)
- Deployment config
- Rollback runbook

Recipe 3: Add Comprehensive Observability
Add observability to @[service]:
Current state:
- Logging: [current approach]
- Metrics: [current approach]
- Tracing: [current approach]
Target state:
- **Logging**: Structured JSON logs with:
- requestId (trace requests across services)
- userId (for user-specific debugging)
- Operation context (what was being done)
- Error details (stack traces, error codes)
- Performance data (duration, memory)
- **Metrics**: Prometheus/StatsD metrics for:
- Request rate (requests/sec)
- Error rate (errors/sec, by type)
- Latency (p50, p95, p99)
- Business metrics (orders/sec, revenue)
- Resource usage (CPU, memory, connections)
- **Tracing**: Distributed tracing with:
- OpenTelemetry instrumentation
- Span creation for all operations
- Context propagation across services
- Integration with existing traces
- **Dashboards**: Grafana dashboards showing:
- System health (RED metrics)
- Business KPIs
- Error breakdown by type/endpoint
- Latency heatmaps
- Anomaly detection
- **Alerts**: Alert rules for:
- Error rate >1%
- Latency p99 >500ms
- Availability <99.9%
- Resource exhaustion
Generate:
- Logging middleware
- Metrics collection code
- Tracing setup
- Dashboard configs
- Alert rules
- Runbook for common issues

Recipe 4: Security Hardening Audit
Conduct security audit of @[service]:
Focus areas:
1. **Authentication/Authorization**
- JWT validation (algorithm, expiry, refresh)
- Session management (timeouts, invalidation)
- Role-based access control (RBAC)
- API key security
2. **Input Validation**
- SQL injection vectors
- XSS opportunities
- Command injection risks
- Path traversal vulnerabilities
- Deserialization attacks
3. **Data Protection**
- Sensitive data encryption (at rest, in transit)
- PII handling compliance
- Password hashing (algorithm, salt, rounds)
- API secrets management
4. **Rate Limiting & DoS**
- Rate limits on all endpoints
- Resource exhaustion protection
- Slowloris/slowpost defenses
5. **Dependencies**
- Known CVEs in dependencies
- Outdated packages
- Unused dependencies
For each issue found:
- Severity (critical/high/medium/low)
- Exploit scenario (how to attack)
- Fix implementation (code changes)
- Test to prevent regression
Generate:
- Security audit report
- Prioritized fix list
- Code changes for each issue
- Security tests
- Updated security documentation

Avoiding the "Elegant but Fragile" Trap
AI excels at generating clean, elegant-looking code. The problem: elegance often masks fragility. Code that handles the happy path beautifully but crashes on the first unexpected input.
Common fragility patterns
1. Pattern misapplication
AI suggests a beautiful one-liner:
// Elegant!
const userEmails = users.map(u => u.profile.email).join(', ');

Looks perfect. Crashes when:
- A user has no profile (`profile` is undefined)
- A profile has no email (`email` is undefined)
- The users array is empty (harmless, but an edge case)
Senior engineer's fix:
// Resilient
const userEmails = users
  .filter(u => u.profile?.email)
  .map(u => u.profile.email)
  .join(', ') || 'No emails available';

2. Missing business context
AI generates perfect CRUD operations for a Product model:
async function deleteProduct(productId) {
  await db.query('DELETE FROM products WHERE id = ?', [productId]);
  return { success: true };
}

Technically correct. Practically disastrous:
- What if the product is in active orders?
- Should we soft-delete instead of hard-delete?
- What about product images in S3?
- Audit trail requirements?
- Inventory adjustments needed?
AI doesn't know these business rules. You do.
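Once you supply those rules, the fix is mechanical again. A business-aware version might look like the following sketch; the specific checks (active orders, soft delete, audit trail, deferred image cleanup) and the helpers (`ConflictError`, `auditLog`, `imageStore`) are illustrative assumptions, not universal requirements:

```js
async function deleteProduct(productId, deletedBy) {
  // Business rule: block deletion while the product is in active orders
  const active = await db.query(
    "SELECT COUNT(*) AS count FROM order_items WHERE product_id = ? AND status IN ('pending', 'shipped')",
    [productId]
  );
  if (active[0].count > 0) {
    throw new ConflictError('Product has active orders and cannot be deleted');
  }

  // Soft delete: preserve the row for order history and the audit trail
  await db.query(
    'UPDATE products SET deleted_at = NOW(), deleted_by = ? WHERE id = ?',
    [deletedBy, productId]
  );

  await auditLog.record('product.deleted', { productId, deletedBy });
  await imageStore.scheduleCleanup(productId); // defer S3 cleanup, don't block

  return { success: true };
}
```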
3. Edge case blindness
AI implements authentication:
async function verifyToken(token) {
  const decoded = jwt.verify(token, SECRET_KEY);
  return { userId: decoded.userId, valid: true };
}

Doesn't handle:
- Token expiration (crashes on expired tokens)
- Invalid token format (crashes on malformed input)
- Key rotation (uses only one key)
- Clock skew (time-based validation issues)
- Revocation (doesn't check blacklist)
Senior review catches these:
async function verifyToken(token) {
  let decoded;
  let lastError;

  // Verify with current and previous keys (rotation support)
  for (const key of [CURRENT_KEY, PREVIOUS_KEY]) {
    try {
      decoded = jwt.verify(token, key, {
        algorithms: ['RS256'],
        clockTolerance: 30 // Handle 30s clock skew
      });
      break;
    } catch (error) {
      lastError = error; // Remember why this key failed; try the next one
    }
  }

  if (!decoded) {
    if (lastError && lastError.name === 'TokenExpiredError') {
      throw new ExpiredTokenError('Token has expired');
    }
    throw new InvalidTokenError('Token verification failed');
  }

  // Check revocation list
  if (await isTokenRevoked(decoded.jti)) {
    throw new RevokedTokenError('Token has been revoked');
  }

  return { userId: decoded.userId, valid: true };
}

Mitigation strategies
Failure-first reviews
When reviewing AI code, ask: "What happens when this fails?" before asking "Does this work?"
Questions to ask:
- What if the input is null/undefined/empty?
- What if the API call times out?
- What if the database is down?
- What if two requests race?
- What if the user provides malicious input?
- What happens at scale (1000x normal load)?
Stress testing and property-based testing
Don't just test happy paths. Generate edge cases:
Generate property-based tests for this function:
@[function]
Test invariants:
- Function never crashes (returns error instead)
- Output type always matches expected type
- Side effects are idempotent
- Performance stays under 100ms even with large inputs
Use [fast-check / QuickCheck / Hypothesis] to generate
random inputs that stress-test the implementation.
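As a sketch of what that looks like in practice, here is a property-based test for the email-formatting logic from earlier, assuming Jest as the test runner and the fast-check library:

```js
const fc = require('fast-check');

// The resilient email formatter from earlier, wrapped as a function
function formatUserEmails(users) {
  return users
    .filter(u => u.profile?.email)
    .map(u => u.profile.email)
    .join(', ') || 'No emails available';
}

test('never crashes and always returns a non-empty string', () => {
  // Generate users with possibly-missing profiles and possibly-missing emails
  const userArb = fc.record(
    {
      profile: fc.option(
        fc.record(
          { email: fc.option(fc.string(), { nil: undefined }) },
          { requiredKeys: [] }
        ),
        { nil: undefined }
      )
    },
    { requiredKeys: [] }
  );

  fc.assert(
    fc.property(fc.array(userArb), (users) => {
      const result = formatUserEmails(users);
      return typeof result === 'string' && result.length > 0;
    })
  );
});
```

Hundreds of randomized user shapes get thrown at the function on every test run, which is exactly the kind of input the happy-path one-liner would have crashed on.

Feature flags and canary releases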
Never ship AI-generated changes directly to 100% of traffic:
# Deployment strategy
phases:
  - name: canary
    traffic: 5%
    duration: 1 hour
    success_criteria:
      - error_rate < 0.1%
      - p95_latency < 200ms
      - no_critical_alerts
  - name: rollout
    traffic: 50%
    duration: 2 hours
    success_criteria: [same as above]
  - name: complete
    traffic: 100%

rollback_triggers:
  - error_rate > 1%
  - p99_latency > 1000ms
  - critical_alert_fired

Shift-right validation
Don't rely solely on pre-production testing. Use runtime checks:
- DAST (Dynamic Application Security Testing): Run security scans against live staging environments
- Chaos testing: Inject failures (kill services, delay responses, corrupt data) and verify graceful degradation
- Synthetic monitoring: Continuously test critical flows in production (see the sketch after this list)
- Real user monitoring (RUM): Track actual user experiences, catch issues tests miss
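A synthetic check can be as simple as the sketch below; the endpoint, the `CHECKOUT_URL` variable, and the `pushMetric` helper are assumptions, and many teams use a hosted probe service instead of rolling their own:

```js
// Synthetic monitor: exercise a critical flow every minute and emit metrics.
// Requires Node 18+ for the global fetch API.
async function syntheticCheckoutCheck() {
  const start = Date.now();
  try {
    const res = await fetch(`${process.env.CHECKOUT_URL}/api/checkout/synthetic`, {
      method: 'POST',
      headers: { 'content-type': 'application/json' },
      body: JSON.stringify({ sku: 'synthetic-test-sku', quantity: 1 }),
    });
    pushMetric('synthetic.checkout.success', res.ok ? 1 : 0);
    pushMetric('synthetic.checkout.latency_ms', Date.now() - start);
  } catch (err) {
    pushMetric('synthetic.checkout.success', 0); // network failure counts as down
  }
}

setInterval(syntheticCheckoutCheck, 60_000);
```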
AI code is a polished draft. Your job as a senior engineer is ensuring it's production-ready: resilient to failures, secure against attacks, performant under load, and maintainable over time.
Key Takeaways
ℹ️ Orchestration, not implementation
ℹ️ Force multiplication through knowledge sharing: prompt libraries, .cursorrules, and codemods can 10x the productivity of an entire team.
ℹ️ System-wide changes become tractable
⚠️ Elegant code isn't production-ready code
ℹ️ Shift from typing speed to thinking speed
ℹ️ Incremental rollouts prevent disasters
The next chapter explores how principal engineers operate at an even higher level—using Cursor to drive architectural shifts, encode organizational knowledge, and mentor entire teams through prompt packs and frameworks. The principles remain consistent: AI handles execution, humans handle judgment. The scope just continues to expand.