Chapter 4

Level 1: Junior Developer

Sarah graduated with a computer science degree three months ago. She knows algorithms, data structures, and software engineering principles. But when tasked with implementing password reset functionality in the team's production codebase, she spent six hours fighting TypeScript compilation errors, misunderstanding the existing auth patterns, and writing tests that didn't actually test anything meaningful.

Then her mentor showed her Cursor. Within two hours, Sarah had working password reset logic with comprehensive tests—but more importantly, she understood why the code worked. The AI explained TypeScript's type narrowing, showed her the team's authentication patterns, and generated tests that actually caught edge cases she hadn't considered.

This is Cursor's power for junior developers: it's not a shortcut around learning—it's a learning accelerator. Used correctly, it compresses the junior-to-mid-level transition from years to months. Used poorly, it creates developers who can generate code but can't debug, can't reason about tradeoffs, and panic when the AI isn't available.

This chapter shows juniors how to use Cursor as a coding buddy who explains, teaches, and accelerates growth without creating dependency.

The Right Mindset: Coach, Not Hands

The critical distinction: Cursor is not your hands. It doesn't type code for you while you watch passively. Cursor is your coach—helping you learn, understand, and refine while you remain responsible for correctness.

❌ The Wrong Approach

Me: "Build a REST API for user management"

Cursor: [Generates 500 lines of code]

Me: [Copies everything, runs it, ships it]

You just generated code you don't understand. When it breaks—and it will—you're helpless.

✅ The Right Approach

Me: "Explain the key components of a REST API for user management. What routes do I need?"

Cursor: [Explains CRUD operations, HTTP methods, status codes]

Me: "Good. Now generate just the route definitions—no implementation yet."

Me: "Now write tests for this middleware before implementing the actual authentication logic."

ℹ️
Key Principle

This approach takes longer initially but builds genuine understanding. You're not just getting working code—you're learning the patterns, understanding the architecture, and building mental models that transfer to future work.

Writing Functions: The Learn-by-Doing Loop

As a junior, you'll spend significant time writing utility functions, simple scripts, and small features. Here's a workflow that maximizes learning while leveraging AI speed.

Step 1: Describe the Function's Purpose in Plain English

Before touching code, write what the function should do:

// I need a function that validates email addresses.
// Requirements:
// - Accept string input
// - Return true for valid emails, false for invalid
// - Handle edge cases: empty string, null, missing @ or domain
// - No external dependencies (use regex)

This forces you to think through requirements before implementation—a skill that matters far more than syntax knowledge.
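Those written requirements translate almost mechanically into a stub: signature and docstring first, implementation later. A minimal sketch (in Python rather than the chapter's TypeScript, purely for illustration):

```python
def validate_email(email):
    """Return True for valid email addresses, False otherwise.

    Requirements (from the comment above):
    - Accept string input
    - Handle edge cases: empty string, None, missing @ or domain
    - No external dependencies (regex only)
    """
    raise NotImplementedError  # implementation comes in Step 2
```

Writing the contract down first gives the AI (and you) something precise to implement against.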

Step 2: Ask Cursor to Generate a Basic Structure

Prompt example:

Create an email validation function based on these requirements:
@comments/above

Include:
- TypeScript types
- JSDoc comments explaining the regex pattern
- Basic structure only—I want to understand it before finishing

Cursor generates:

/**
 * Validates email address format using RFC 5322 simplified regex
 * Pattern breakdown:
 * - ^[^\s@]+ : One or more characters that aren't whitespace or @
 * - @ : Literal @ symbol
 * - [^\s@]+ : Domain name (no whitespace or @)
 * - \. : Literal dot
 * - [^\s@]+$ : Top-level domain (no whitespace or @)
 */
function validateEmail(email: string): boolean {
  const emailRegex = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;
  return emailRegex.test(email);
}

Step 3: Request Explanations for Each Component

Don't accept the code blindly. Ask:

Explain this regex pattern character by character. Why is this pattern considered "RFC 5322 simplified"? What valid emails would it reject? What invalid emails might it accept?

Cursor explains the tradeoffs. You learn that this pattern is good enough for UI validation but not perfect: it accepts some technically invalid emails (like user..name@domain..com, with consecutive dots) and rejects some valid but unusual ones (like quoted local parts containing spaces).
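You can verify those limits yourself by probing the pattern directly. A quick sketch (a Python translation of the same regex; the specific probe cases are mine, not Cursor's output):

```python
import re

# The chapter's simplified pattern, translated to Python
EMAIL_RE = re.compile(r"^[^\s@]+@[^\s@]+\.[^\s@]+$")

def validate_email(email):
    return email is not None and bool(EMAIL_RE.match(email))

# Good enough for UI validation:
assert validate_email("user@domain.com")
assert validate_email("user+tag@domain.com")        # + is not excluded by [^\s@]
assert not validate_email("test@domain")            # no dot after @, rejected
# But imperfect:
assert validate_email("user..name@domain..com")     # invalid (double dots), yet accepted
assert not validate_email('"john smith"@example.com')  # RFC-valid quoted form, rejected
```

Running a handful of probes like this is faster than arguing with the AI about what a regex does.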

Step 4: Modify and Test Incrementally

Request improvements and tests:

The current pattern accepts consecutive dots in the domain (user@domain..com), which are invalid. Update the pattern to reject them. Explain what changed and why.

Now write tests:

Write Jest tests for validateEmail covering:
- Valid standard email (user@domain.com)
- Valid email with + symbol (user+tag@domain.com)
- Invalid: missing @
- Invalid: missing domain
- Invalid: empty string
- Edge case: null input (should return false, not crash)

Run the tests. Some fail. This is good—you're learning what edge cases matter.

Step 5: Ask for Alternatives and Compare Tradeoffs

Show me three different approaches to email validation:
1. Simple regex (current)
2. More comprehensive regex (RFC 5322 compliant)
3. Using a library (email-validator)

For each: explain pros/cons, when to use, what it catches/misses.

This teaches you there's no single "correct" solution—only tradeoffs based on context. Simple regex is fast but imperfect. Libraries are comprehensive but add dependencies. This thinking process is what separates junior from mid-level developers.

Debugging Strategy: From Panic to Process

Junior developers often approach debugging emotionally: panic when something breaks, randomly change things hoping it fixes itself, give up after thirty minutes. AI can transform this into a systematic process.

Phase 1: Understand the Error

Don't immediately ask for a fix. First, ask for understanding: What does this error mean? What conditions cause it?

When you encounter an error, ask:

I'm getting this error:
TypeError: Cannot read property 'id' of undefined
  at UserService.getProfile (src/services/user.js:45)

Before suggesting fixes, explain:
1. What this error means in plain English
2. What conditions cause this specific error
3. What the code at line 45 is trying to do

Cursor explains: "This error means you're trying to access the id property on something that's undefined. At line 45, you're probably doing user.id when user is undefined. This typically happens when a database query returns nothing or an API call fails."

Now you understand the category of problem (accessing properties on undefined values) rather than just this specific instance.

Phase 2: Analyze Root Cause

Walk through execution flow. Why would the value be undefined? What could go wrong upstream?

Phase 3: Fix with Understanding

Request defensive checks, appropriate errors, and test cases to prevent regression.

Example fix with guard clause:

async getProfile(userId: string): Promise<UserProfile> {
  const user = await db.query('SELECT * FROM users WHERE id = ?', [userId]);
  
  // Guard clause - fail fast if user doesn't exist
  if (!user) {
    throw new UserNotFoundError(`User ${userId} does not exist`);
  }
  
  return {
    id: user.id,
    name: user.name,
    email: user.email
  };
}

The Learning Layer

Through this process, you learned:

  • What undefined errors mean
  • Why null checks matter
  • When to use guard clauses
  • How to throw meaningful errors
  • How to test error conditions

Next time you encounter a similar error, you'll recognize it immediately and know how to fix it—without AI.

Learning Language Idioms Through Guided Practice

Every language has idiomatic patterns—Pythonic loops, JavaScript promises, Go error handling. Juniors write verbose, clunky code because they don't know these idioms. Cursor can teach them quickly.

The pattern: Show me → Explain why → Practice myself

Example 1: Python List Comprehensions

You write:

numbers = [1, 2, 3, 4, 5]
squares = []
for num in numbers:
    squares.append(num ** 2)

Ask Cursor for the Pythonic way:

# Pythonic version using list comprehension
numbers = [1, 2, 3, 4, 5]
squares = [num ** 2 for num in numbers]

# Why it's better:
# 1. More concise (2 lines instead of 4)
# 2. Clearer intent: "make a list of squares"
# 3. Slightly faster (optimized by Python internally)
# 4. Reduces variable pollution (no squares = [])

Example 2: JavaScript async/await

Promise chains:

function getUserPosts(userId) {
  return db.getUser(userId)
    .then(user => api.getPosts(user.id))
    .then(posts => posts.filter(p => p.published))
    .catch(error => {
      console.error(error);
      throw error;
    });
}

Cleaner with async/await:

async function getUserPosts(userId) {
  try {
    const user = await db.getUser(userId);
    const posts = await api.getPosts(user.id);
    return posts.filter(p => p.published);
  } catch (error) {
    console.error('Failed to get user posts:', error);
    throw error;
  }
}

ℹ️
Practice Exercise

Take three functions you wrote this week and ask Cursor to show the idiomatic version for your language. Understand why the idiom exists, then rewrite one function manually to reinforce the pattern.

Prompt Recipes for Common Junior Tasks

These recipes solve problems you'll encounter daily. Copy them, adjust for your context, and build your personal prompt library.

Recipe 1: Explain Unfamiliar Code

Explain this code to me like I'm learning [language/framework]:
@[filename or code snippet]

Include:
- What each major section does (line by line if complex)
- Why this approach was chosen over alternatives
- What concepts I should understand first
- How this integrates with the rest of the codebase
- What happens if inputs are unexpected (null, empty, wrong type)

Use analogies where helpful. Point out any advanced techniques I might not recognize.

Recipe 2: Generate Tests with Learning Focus

Write [testing framework] tests for this function:
@[function]

Requirements:
- Test happy path with typical inputs
- Test edge cases: null, undefined, empty, boundary values
- Test error conditions with appropriate assertions
- Use descriptive test names that explain what's being verified

After generating tests, explain:
- Why you chose these specific test cases
- What other edge cases might exist in production
- How to make these tests more robust

Recipe 3: Debug with Teaching Mode

Help me debug this issue systematically:

Error: [paste error message]
Code: @[relevant file]
Context: [what you were trying to do]

Please:
1. Explain what causes this type of error in general
2. Analyze where in my code this is happening and why
3. Show the fix with before/after comparison
4. Teach me how to prevent this category of error in the future
5. Suggest logging or debugging techniques to diagnose similar issues

Walk me through your reasoning—I want to learn the process, not just get a fix.

Recipe 4: Learn API Patterns

I need to create an API endpoint for [functionality].

First, explain:
- What HTTP method to use and why
- What the request/response should look like
- Common security considerations for this type of endpoint
- Error cases I need to handle

Then generate:
- Route definition following patterns in @routes/
- Request validation
- Error handling middleware
- Tests covering success and error cases

Reference existing patterns in the codebase where applicable.

Recipe 5: Understand Performance

Analyze this code for performance issues:
@[code]

Explain:
- What parts might be slow and why (algorithmic complexity)
- What optimizations would help
- Tradeoffs of each optimization (complexity vs speed)
- When optimization matters vs premature optimization

Show both the current version and optimized version with comments explaining what changed and why.
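The kind of tradeoff this recipe surfaces is easy to see in a toy example (mine, not part of the recipe): deduplicating a list with O(n²) membership checks versus an O(n) set-based version.

```python
def dedupe_quadratic(items):
    out = []
    for x in items:
        if x not in out:   # list membership scan is O(n), so the loop is O(n^2)
            out.append(x)
    return out

def dedupe_linear(items):
    seen, out = set(), []
    for x in items:
        if x not in seen:  # set membership is O(1) on average
            seen.add(x)
            out.append(x)
    return out

# Same result, very different scaling behavior
assert dedupe_quadratic([1, 2, 1, 3]) == dedupe_linear([1, 2, 1, 3]) == [1, 2, 3]
```

For small inputs the quadratic version is perfectly fine, which is exactly the "when optimization matters vs premature optimization" judgment the recipe asks the AI to spell out.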

The Biggest Pitfall: Blind Trust Leading to Silent Bugs

This is the career-limiting mistake junior developers make with AI: trusting output without verification. AI-generated bugs are dangerous because they look correct—clean code, good variable names, passing tests—but hide critical flaws.

Category 1: Hallucinated APIs (non-existent functions)

import { validateEmail } from 'email-validator-pro';

function registerUser(email) {
  if (validateEmail(email)) {
    // registration logic
  }
}

Looks professional. Problem: email-validator-pro doesn't exist. Nothing flags the import until the code actually runs, at which point it crashes with a module-not-found error.

Detection: Actually run the code. Try to import the library. Check package.json. If something's unfamiliar, verify it exists before using it.
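That verification step can even be scripted. In Python, for instance, the standard library can tell you whether a module is importable before you trust it (the package name below mirrors the chapter's hallucinated example):

```python
import importlib.util

def module_exists(name):
    """Return True if a top-level module or package is importable."""
    return importlib.util.find_spec(name) is not None

assert module_exists("json")                     # ships with Python
assert not module_exists("email_validator_pro")  # hallucinated, not installed
```

The npm equivalent is just as cheap: search the registry or check node_modules before building on an unfamiliar import.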

Category 2: Subtle Logic Errors (edge cases missed)

function getPaginatedUsers(page, pageSize) {
  const offset = page * pageSize;
  return db.query('SELECT * FROM users LIMIT ? OFFSET ?', [pageSize, offset]);
}

Looks correct. Problem: page 1 computes offset 10 and returns rows 10-19 (the second page), not rows 0-9. It's an off-by-one error: pagination typically uses 1-indexed page numbers, but the offset math needs 0-indexed ones.

Should be: const offset = (page - 1) * pageSize;
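A small helper makes the convention explicit, and two assertions would have caught the bug immediately (sketch in Python; the chapter's example is JavaScript):

```python
def page_offset(page, page_size):
    """Convert a 1-indexed page number to a 0-indexed row offset."""
    if page < 1:
        raise ValueError("pages are 1-indexed")
    return (page - 1) * page_size

assert page_offset(1, 10) == 0    # page 1 -> rows 0-9
assert page_offset(2, 10) == 10   # page 2 -> rows 10-19
```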

Category 3: Security Gaps (dangerous defaults)

app.post('/login', async (req, res) => {
  const { email, password } = req.body;
  const user = await db.query('SELECT * FROM users WHERE email = ?', [email]);
  
  if (user && user.password === password) {
    res.json({ token: generateToken(user.id) });
  } else {
    res.status(401).json({ error: 'Invalid credentials' });
  }
});

Looks reasonable. Problems:

  • Plaintext password comparison (passwords should be hashed)
  • No rate limiting (vulnerable to brute force)
  • Responds faster when the user doesn't exist (the password comparison is skipped), a timing side channel that enables user enumeration
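The first problem is fixable with a few lines of standard-library code. A hedged sketch in Python (PBKDF2 is used here for illustration; production systems typically reach for bcrypt or argon2 via a vetted library, and rate limiting plus uniform error handling belong in middleware):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Hash a password with a random salt using PBKDF2-HMAC-SHA256."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, stored_digest):
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored_digest)
```

Never store or compare plaintext passwords; the constant-time comparison also avoids leaking information through response timing.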

Your Defensive Checklist: Never Ship Without These

For every piece of AI-generated code, go through this checklist before considering it "done":

Can I explain it?

Walk through the code line by line. If you can't explain what each part does and why, you don't understand it well enough to maintain it.

Does it actually run?

Execute the code locally. Don't assume it works because it looks correct. Test with real inputs.

Are edge cases handled?

Test with: null/undefined inputs, empty arrays/strings, boundary values (0, -1, MAX_INT), unexpected types, network failures, database errors.
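A table-driven loop keeps that probing cheap. A toy sketch (the function and cases are mine, for illustration):

```python
def safe_trim(s):
    """Example target: trim whitespace, tolerating None."""
    return "" if s is None else s.strip()

cases = [
    (None, ""),        # null input
    ("", ""),          # empty string
    ("  hi  ", "hi"),  # typical input
    ("   ", ""),       # boundary: all whitespace
]
for given, expected in cases:
    assert safe_trim(given) == expected, f"failed for {given!r}"
```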

Do tests test something meaningful?

AI tests often check that functions return something without verifying it's correct. Verify actual correctness.

Is security considered?

For user input, databases, authentication, and files: validate, sanitize, parameterize queries, hash passwords, and validate file paths.

Have I tested manually?

Automated tests miss integration issues. Click through the UI, trigger error conditions, try to break it.

Would I bet my job on this?

If not, it's not ready. Keep refining until you're confident it's correct.

Building Healthy AI Habits

Daily Practice

  • Start each feature by writing acceptance criteria before prompting
  • Review every AI suggestion—never blind copy-paste
  • Write at least one function manually each day (maintain baseline skills)
  • Ask "why?" for any code pattern you don't recognize

Weekly Practice

  • Spend one hour solving problems without AI (LeetCode, small projects)
  • Review one piece of AI-generated code from last week—still makes sense?
  • Add successful prompts to your personal library
  • Identify one thing AI consistently gets wrong and learn to fix it

Monthly Practice

  • Build one small project start-to-finish manually (no AI)
  • Compare your manual code to AI-generated version—what did you learn?
  • Read one technical article/book chapter to stay current on fundamentals
  • Review your git commits—how much do you understand vs copy-paste?

Key Takeaways

Cursor is a learning accelerator, not a learning replacement

Use it to move faster while building genuine understanding. If you can't implement features without AI, you're dependent, not skilled.

ℹ️
Always understand before shipping

The "explain this code" prompt should be your most-used command. Never ship code you couldn't explain to a teammate.

⚠️
Test skeptically, not hopefully

AI-generated tests often check that code runs without verifying it's correct. Write tests that would catch your logic errors.

Security requires human judgment

AI knows common patterns but doesn't understand security implications. Always apply security checklists to authentication, input handling, and file operations.

ℹ️
Maintain your baseline skills

Manual coding practice isn't busywork—it's insurance for when AI fails or isn't available. Dedicate time to solving problems without assistance.

The next chapter explores how mid-level developers use Cursor differently—moving from learning fundamentals to shipping features fast while maintaining quality. The workflow shifts from "explain and teach me" to "accelerate delivery while enforcing standards." Let's see how.