Chapter 11

Pitfalls & Anti-Patterns

AI accelerates everything—including mistakes. The same tool that helps us ship features in hours can introduce vulnerabilities, create unmaintainable code, and erode team skills just as quickly.

⚠️
The Speed-Quality Paradox

This chapter isn't about avoiding AI. It's about using it without shooting ourselves in the foot. We'll cover the three most dangerous anti-patterns plaguing Cursor-first teams, why they happen, and—most importantly—how to set up guardrails that catch problems before they hit production.

11.1 "Cursor Wrote It, So It Must Be Right"

The Blind Trust Trap

This is the most common—and most dangerous—pitfall. A developer asks Cursor to implement a feature, the AI generates 100 lines of clean-looking code with detailed comments, and the developer merges it without truly understanding what it does.

The psychology: AI output looks polished. Variable names are descriptive, functions are properly formatted, and the code compiles. Our brains interpret polish as correctness. It's not.

The reality: Research shows that 45% of AI-generated code contains subtle flaws, and 59% of developers admit to using code they don't fully understand. That's not a recipe for quality—it's a recipe for production incidents.


Real-World Example: The Authentication Bypass

A mid-level developer asked Cursor to add API authentication:

// AI-generated code (looks professional)
async function authenticateRequest(req, res, next) {
  const token = req.headers.authorization?.split(' ')[1];
  
  if (token) {
    try {
      const decoded = jwt.verify(token, process.env.JWT_SECRET);
      req.user = decoded;
      next();
    } catch (err) {
      res.status(401).json({ error: 'Invalid token' });
    }
  } else {
    // Log the attempt and continue
    console.log('No token provided, allowing anonymous access');
    next(); // ❌ SECURITY HOLE
  }
}

What Went Wrong

The developer didn't notice that the else branch calls next(), allowing unauthenticated requests through. Cursor generated "flexible" authentication that degrades gracefully—but the requirement was to block unauthenticated access. The code looked right. The comments were clear. But it fundamentally failed the security requirement.

How it should look:

async function authenticateRequest(req, res, next) {
  const token = req.headers.authorization?.split(' ')[1];
  
  if (!token) {
    return res.status(401).json({ error: 'Authentication required' });
  }
  
  try {
    const decoded = jwt.verify(token, process.env.JWT_SECRET);
    req.user = decoded;
    next();
  } catch (err) {
    res.status(401).json({ error: 'Invalid token' });
  }
}

Lesson Learned

AI doesn't know our security requirements. We do.

Detection Signals

Our team might have a blind trust problem if we see:

  • Fast merges: PRs with 200+ lines of AI code merged in under 10 minutes with minimal test additions (see the detector sketch after this list)
  • Post-merge bug spike: Defects caught in QA or production rather than during development
  • Vague PR descriptions: "Fixed the bug" or "Added feature X" without explaining how or why
  • Missing edge case tests: Only happy-path tests, no error handling or boundary conditions
  • Repeated security scan failures: CI catches vulnerabilities that human reviewers missed
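
Some of these signals can be watched automatically. Below is a rough sketch that flags the fast-merge pattern using the GitHub API; the org/repo names, thresholds, and token handling are assumptions to adapt:

// scripts/flag-fast-merges.js (illustrative sketch)
const { Octokit } = require('@octokit/rest');

const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });
const owner = 'our-org';   // adjust to the real org/repo
const repo = 'our-repo';

async function main() {
  const { data: prs } = await octokit.pulls.list({
    owner, repo, state: 'closed', per_page: 50,
  });

  for (const pr of prs) {
    if (!pr.merged_at) continue;

    // pulls.list omits size, so fetch the full PR for additions
    const { data: full } = await octokit.pulls.get({ owner, repo, pull_number: pr.number });
    const minutesOpen = (new Date(pr.merged_at) - new Date(pr.created_at)) / 60000;

    if (full.additions >= 200 && minutesOpen < 10) {
      console.log(`⚠️ #${pr.number} "${pr.title}": ${full.additions} lines merged in ${minutesOpen.toFixed(1)} min`);
    }
  }
}

main().catch((err) => { console.error(err); process.exit(1); });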

The Fix: Trust, But Verify

1. The "Explain It Back" Rule

Before merging any AI-generated code block, we must be able to explain:

  • What it does (line by line if complex)
  • Why this approach was chosen
  • What could go wrong (edge cases, failures)
  • How it integrates with existing code

ℹ️
Golden Rule

If we can't explain it, we don't merge it.

2. Mandatory Test Coverage

## PR Checklist
- [ ] All AI-generated code has unit tests
- [ ] Edge cases tested (null, empty, malformed input)
- [ ] Error handling verified with failing tests
- [ ] Integration points validated
- [ ] Security review completed for auth/data handling

No exceptions. AI can write tests—ask it to. Then run them.
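
For example, a minimal Jest suite for the authentication middleware from earlier might look like this; the file path and export name are assumptions:

// tests/middleware/authenticateRequest.test.js (illustrative path)
const jwt = require('jsonwebtoken');
const { authenticateRequest } = require('../../src/middleware/authenticateRequest');

// Minimal stand-ins for Express req/res/next
function mockRes() {
  return {
    statusCode: null,
    body: null,
    status(code) { this.statusCode = code; return this; },
    json(payload) { this.body = payload; return this; },
  };
}

describe('authenticateRequest', () => {
  beforeAll(() => { process.env.JWT_SECRET = 'test-secret'; });

  test('rejects requests with no token', async () => {
    const req = { headers: {} };
    const res = mockRes();
    const next = jest.fn();

    await authenticateRequest(req, res, next);

    expect(res.statusCode).toBe(401);   // the original bug let this request through
    expect(next).not.toHaveBeenCalled();
  });

  test('rejects malformed tokens', async () => {
    const req = { headers: { authorization: 'Bearer not-a-real-token' } };
    const res = mockRes();
    const next = jest.fn();

    await authenticateRequest(req, res, next);

    expect(res.statusCode).toBe(401);
    expect(next).not.toHaveBeenCalled();
  });

  test('accepts a valid token and attaches the user', async () => {
    const token = jwt.sign({ sub: 'user-123' }, process.env.JWT_SECRET);
    const req = { headers: { authorization: `Bearer ${token}` } };
    const res = mockRes();
    const next = jest.fn();

    await authenticateRequest(req, res, next);

    expect(next).toHaveBeenCalled();
    expect(req.user.sub).toBe('user-123');
  });
});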

3. Annotation Requirement

Tag AI contributions explicitly in commits and PRs:

git commit -m "Add rate limiting to login endpoint

AI-generated: src/middleware/rateLimiter.js (lines 1-67)
Human-written: src/middleware/rateLimiter.js (lines 68-85)
Tests: tests/middleware/rateLimiter.test.js (AI-assisted)

Validated by: @senior-dev (2025-10-08)"

Why annotation matters:

  • Creates accountability trail for audits
  • Enables measuring AI vs. human defect rates
  • Helps future developers understand code origin
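
One way to keep the annotation habit honest is a commit-msg hook. A minimal sketch in Node, assuming Husky (or any other hook runner) passes the message file path as the first argument; the "No AI involvement" marker is a hypothetical convention:

// scripts/check-ai-annotation.js (run from a commit-msg hook)
const fs = require('fs');

const message = fs.readFileSync(process.argv[2], 'utf8');

// Every commit should declare AI involvement one way or the other
const hasAnnotation = /^(AI-generated|Human-written|No AI involvement):/m.test(message);

if (!hasAnnotation) {
  console.error(
    'Commit message is missing an AI provenance line.\n' +
    'Add "AI-generated: <files/lines>" or "No AI involvement: true".'
  );
  process.exit(1);
}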

11.2 AI Spaghetti Code

The Technical Debt Multiplier

Traditional spaghetti code—tangled, hard-to-follow logic—forms slowly as developers take shortcuts under pressure. AI spaghetti code forms instantly, at scale, and is harder to detect because it looks superficially clean.

The problem: AI generates code that works but doesn't fit our architecture. Each AI contribution is locally correct but globally incoherent. Over time, our codebase becomes a patchwork of inconsistent patterns.

Anatomy of AI Spaghetti

1. Duplication Explosion

The pattern: Teams report an 8x increase in duplicated code blocks when using AI assistants heavily. Why? AI doesn't know about our existing helper functions—it regenerates similar logic from scratch each time.

Example scenario:

// File: userService.js (AI-generated last week)
function validateEmail(email) {
  const regex = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;
  return regex.test(email);
}

// File: authService.js (AI-generated today)
function isValidEmail(email) {
  const emailPattern = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;
  return emailPattern.test(email);
}

// File: profileService.js (AI-generated yesterday)
function checkEmailFormat(email) {
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email);
}

⚠️
The Problem

Three functions, three names, same logic. Update the regex? We'll need to find and fix all three—and hope we don't miss any.

The fix: Before asking AI to generate code, search our codebase:

Create a validation helper that checks email format.

First, search @codebase for existing email validation.
If found, refactor to use the existing function.
If not found, create reusable utility in @utils/validation.js
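
The consolidated helper that prompt should produce looks something like this (a sketch; the module path mirrors the prompt above):

// utils/validation.js
const EMAIL_REGEX = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;

function isValidEmail(email) {
  return EMAIL_REGEX.test(email);
}

module.exports = { isValidEmail };

// userService.js, authService.js, and profileService.js then all import the same helper:
// const { isValidEmail } = require('../utils/validation');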

2. Inconsistent Architecture

The pattern: AI generates code in whatever style it learned from training data, not our project's patterns.

Example: We have a clean layered architecture (Controller → Service → Repository), but AI starts putting database queries directly in controllers:

// Our existing pattern (clean)
async function getUser(req, res) {
  const user = await userService.findById(req.params.id);
  res.json(user);
}

// AI-generated (architectural violation)
async function getOrder(req, res) {
  const order = await db.query(
    'SELECT * FROM orders WHERE id = ?', 
    [req.params.id]
  );
  res.json(order);
}

The AI code works. But it bypasses our service layer, making logging, caching, and business logic enforcement impossible.
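
Here is roughly what the same endpoint looks like when it respects the layering (a sketch; orderService and orderRepository are illustrative names):

// controllers/orderController.js
async function getOrder(req, res) {
  const order = await orderService.findById(req.params.id);
  res.json(order);
}

// services/orderService.js
async function findById(orderId) {
  // business rules, caching, and logging live here
  return orderRepository.findById(orderId);
}

// repositories/orderRepository.js
async function findById(orderId) {
  return db.query('SELECT * FROM orders WHERE id = ?', [orderId]);
}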

The fix: Enforce architecture through .cursorrules:

## Architecture Rules

### Layering (MANDATORY)
- Controllers: HTTP handling only, delegate to services
- Services: Business logic, orchestrate repositories
- Repositories: Data access only, no business logic

### Database Access
- NEVER query database directly from controllers
- ALWAYS use repository layer: @repositories/

### When generating code:
- Reference existing patterns with @
- Ask: "Does this fit our 3-layer architecture?"
- Verify: "Is this logic in the right layer?"

3. Complexity Creep

The pattern: AI tends toward over-engineering. It generates comprehensive solutions when simple ones suffice.

Example request: "Add a feature flag for the new dashboard."

AI generates (150 lines):

  • Custom feature flag manager class
  • Database schema for flag storage
  • Admin UI for flag management
  • Caching layer for flag lookups
  • Audit logging for flag changes

What we actually needed (a few lines):

const FEATURE_FLAGS = {
  NEW_DASHBOARD: process.env.ENABLE_NEW_DASHBOARD === 'true'
};

The fix: Be explicit about scope:

Add a simple boolean feature flag for new dashboard.

Requirements:
- Use environment variable ENABLE_NEW_DASHBOARD
- Store in config/features.js
- Keep it under 10 lines
- We'll add database storage later if needed

Measuring AI Spaghetti

Track these metrics to catch technical debt early:

| Metric | Healthy Range | Warning Sign |
|---|---|---|
| Code Churn | < 30% changed within 30 days | > 50% (code doesn't stick) |
| Duplication Rate | < 5% duplicated blocks | > 15% (copy-paste generation) |
| Cyclomatic Complexity | Average < 10 per function | Trending up (over-complicating) |
| PR Size | 100-300 lines average | Many 500+ line AI dumps |

⚠️
Red Flag

If code churn doubled after AI adoption, we're generating code faster than we're stabilizing it.
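
For the duplication metric specifically, a clone detector such as jscpd can fail the build when the rate creeps up. A minimal sketch, assuming jscpd is installed as a dev dependency and that its --threshold option makes it exit non-zero when exceeded:

// scripts/check-duplication.js
const { execSync } = require('child_process');

try {
  // Fail when more than 15% of the scanned code is duplicated
  execSync('npx jscpd src --threshold 15 --reporters console', { stdio: 'inherit' });
} catch (err) {
  console.error('Duplication is above 15%. Consolidate shared logic before merging.');
  process.exit(1);
}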

11.3 Prompt Chaos (Everyone Prompting Differently)

The Standardization Crisis

Imagine if every developer on our team wrote JavaScript differently—some used semicolons, others didn't; some preferred const, others used var. Code reviews would be nightmares, and bugs would multiply.

That's what's happening with prompts. Without standardization, our team generates inconsistent code quality, and effective techniques stay locked in individual workflows.

Standardized Prompts

89%

Satisfaction rate vs 34% without standards

Quality Improvement

2.6x

From prompt discipline alone

ℹ️
The Impact

Organizations with standardized prompting report 89% satisfaction with AI outputs, compared to 34% without standards. That's a 2.6x difference in quality from prompt discipline alone.

Common Manifestations

1. Individual Improvisation

Developer A:

add authentication

Developer B:

Implement JWT-based authentication middleware for Express.js.

Requirements:
- Use RS256 algorithm
- Token expiry: 1 hour
- Refresh token: 7 days
- Rate limiting: 5 attempts/min
- Follow patterns from @middleware/auth.js
- Include unit tests

Quality Difference

Guess whose code is higher quality? Detailed prompts with context produce significantly better results.

2. Context Overload

❌ Bad prompt (too much context):

Here's our entire 500-line user service, plus the database 
schema, plus our auth module, plus our error utilities. 
Now add email verification and password reset and two-factor 
auth and admin impersonation and audit logging and...

AI response: Confused, generic, or hallucinated. Information overload makes AI lose focus.

✅ Better prompt (focused context):

Add email verification to registration flow.

Context: @services/userService.js @utils/email.js

Requirements:
- Generate 6-digit code
- Store with 15-min expiry in Redis
- Send via existing email utility
- Add /verify-email endpoint
- Include tests for expired codes

3. Knowledge Silos

Sarah discovers that including "Explain your reasoning" in prompts yields better code. She uses it religiously. Tom has no idea. The rest of the team keeps getting mediocre output.

Without sharing, expertise doesn't scale.

The Fix: Prompt Library + Standards

1. Required Prompt Template

## Standard Prompt Format

**Context**: [What files/components are relevant]
**Task**: [What specifically to build/change]
**Requirements**: [Technical constraints, patterns to follow]
**Output**: [What format, tests, documentation]

Example:

Context: @services/paymentService.js @utils/stripe.js

Task: Add refund processing for failed payments

Requirements:
- Use Stripe refund API
- Handle partial refunds
- Add idempotency keys
- Follow error handling from @services/orderService.js

Output: Function + unit tests + integration test with Stripe mock

2. Prompt Linting

Create a simple linter that checks prompts before execution:

// .cursor/prompt-lint.js
const rules = {
  minLength: 20,              // No lazy one-word prompts
  requiresContext: true,      // Must include @ references
  requiresOutput: true,       // Must specify expected output
  maxContextFiles: 5          // Prevent overload
};

function lintPrompt(prompt) {
  const issues = [];

  if (prompt.length < rules.minLength) {
    issues.push('Prompt too vague. Add specific requirements.');
  }

  const contextRefs = (prompt.match(/@[\w./-]+/g) || []).length;

  if (rules.requiresContext && contextRefs === 0) {
    issues.push('Missing context. Reference files with @');
  }

  if (contextRefs > rules.maxContextFiles) {
    issues.push(`Too much context (${contextRefs} files). Trim to ${rules.maxContextFiles} or fewer.`);
  }

  if (rules.requiresOutput && !/output/i.test(prompt)) {
    issues.push('Missing expected output. Specify format, tests, docs.');
  }

  return issues;
}
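
Run against Developer A's lazy prompt from earlier, the linter flags everything at once:

// Example usage before sending a prompt to Cursor
const issues = lintPrompt('add authentication');

if (issues.length > 0) {
  console.warn('Prompt needs work:\n- ' + issues.join('\n- '));
  // Prompt too vague. Add specific requirements.
  // Missing context. Reference files with @
  // Missing expected output. Specify format, tests, docs.
}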

3. Prompt Performance Tracking

Track which prompts produce accepted code vs. rejected:

| Prompt ID | Usage | Acceptance | Avg Review Time |
|---|---|---|---|
| crud-endpoint | 47 | 89% | 12 min |
| security-review | 31 | 76% | 18 min |
| quick-fix | 89 | 52% | 25 min ⚠️ |

⚠️
Action Needed

The "quick-fix" prompt has low acceptance and high review time—needs refinement or retirement.

11.4 How to Set Up Guardrails

Guardrails aren't about slowing down—they're about maintaining quality at speed. Here's a practical, phased approach.

Phase 1: Fast Wins (This Week)

1. Update PR Template

## Pull Request

### Changes
[Describe what changed and why]

### AI Involvement
- **Tool**: Cursor v0.42
- **AI-Generated Code**:
  - File: `src/auth.js` (lines 45-120)
  - File: `tests/auth.test.js` (lines 1-67)
- **Human Modifications**: [Changes to AI output]

### Validation
- [ ] Code reviewed and understood
- [ ] Tests written and passing
- [ ] Security scan passed
- [ ] Reviewed by: @reviewer-name

### Security Checklist (if applicable)
- [ ] Input validation present
- [ ] SQL queries parameterized
- [ ] Authentication verified
- [ ] No secrets in code
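
To keep the template from being quietly skipped, a small Danger JS rule (assuming Danger already runs in CI) can fail PRs that drop the AI Involvement section:

// dangerfile.js
import { danger, fail } from 'danger';

const body = danger.github.pr.body || '';

if (!body.includes('### AI Involvement')) {
  fail('PR description is missing the "AI Involvement" section from the template.');
}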

2. Enable Security Scanning

Add to CI pipeline:

# .github/workflows/security.yml
name: Security Scan
on: [pull_request]

jobs:
  security:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      
      # Secret scanning
      - name: Check for secrets
        uses: trufflesecurity/trufflehog@main
      
      # Dependency vulnerabilities
      - name: Snyk scan
        uses: snyk/actions/node@master
        env:
          SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}

3. Mandatory Test Coverage

- name: Check test coverage
  run: |
    npm test -- --coverage
    COVERAGE=$(cat coverage/coverage-summary.json | jq '.total.lines.pct')
    if (( $(echo "$COVERAGE < 80" | bc -l) )); then
      echo "❌ Coverage below 80% ($COVERAGE%)"
      exit 1
    fi
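
This check assumes the test runner writes coverage/coverage-summary.json. With Jest, that file comes from the json-summary coverage reporter:

// jest.config.js
module.exports = {
  collectCoverage: true,
  coverageReporters: ['json-summary', 'text'],
  coverageThreshold: {
    global: { lines: 80 },   // fail locally too, not just in CI
  },
};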

Phase 2: Medium-Term (Next Month)

1. Architectural Linting

Enforce layer separation:

// .eslintrc.js: scope each restriction to the layer it protects
const restrict = (patterns) => ({
  'no-restricted-imports': ['error', { patterns }]
});

module.exports = {
  overrides: [
    {
      files: ['src/controllers/**/*.js'],
      rules: restrict([{
        group: ['**/repositories/*', '**/database/*'],
        message: 'Controllers must not import repositories or the database. Use services.'
      }])
    },
    {
      files: ['src/services/**/*.js'],
      rules: restrict([{
        group: ['**/database/*'],
        message: 'Services must not access the database directly. Use repositories.'
      }])
    }
  ]
};

2. AI Code Review Integration

- name: AI Code Review
  run: |
    cursor-agent review \
      --prompt "Review this PR for:
      - Security vulnerabilities (OWASP Top 10)
      - Performance issues (N+1 queries, memory leaks)
      - Code duplication
      - Missing error handling
      Flag only high-severity issues."

The Guardrail Maturity Model

Track team maturity:

| Level | Characteristics | Risk Level |
|---|---|---|
| Level 0: Wild West | No standards, no review, blind merging | 🔴 Critical |
| Level 1: Basic Hygiene | PR templates, tests required, security scanning | 🟡 Moderate |
| Level 2: Structured | Prompt library, architectural linting, AI review | 🟢 Low |
| Level 3: Optimized | Full provenance, performance tracking, continuous improvement | 🟢 Very Low |

ℹ️
Maturity Goal

Reach Level 2 within 3 months of AI adoption. Level 3 is for teams treating AI as strategic infrastructure.

Key Takeaways

Trust But Verify

AI output is a draft, not a final product. Always understand what it does, why, and what could go wrong.

Architecture Beats Speed

Fast, inconsistent code creates more work than slow, coherent code. Enforce architecture through .cursorrules and linting.

Standardize Prompts

Quality variance across teams comes from prompt variance. Build a library, enforce usage, track performance.

Guardrails Enable Speed

Teams with strong guardrails move faster because they catch issues early, not late. Invest in CI, scanning, and review processes.

Measure AI Impact

Track defect rates, code churn, and duplication. If these spike after AI adoption, guardrails need strengthening.

The Bottom Line

The teams that master AI plus guardrails don't just ship faster—they ship better. The key is recognizing that AI is a powerful tool that requires discipline, not a magic solution that replaces engineering judgment.