How We Work with Claude Code
Best practices for AI-assisted development and effective collaboration with Claude Code
AI-assisted development is a collaboration. Claude Code is a powerful partner, but like any partner, working together effectively requires understanding what it does well, what needs human judgment, and how to communicate clearly.
The Core Mindset
"AI is a collaborator, not a replacement. Your judgment shapes the outcome."
Claude Code can read, write, and reason about code. But you understand the business context, the user needs, and the long-term implications. The best results come from combining both.
Principles
1. Provide Context Before Asking for Action
The quality of output depends on the quality of input.
The mistake: "Add authentication." Too vague. You get generic code that doesn't fit your patterns.
The principle: Explain what you're building, what constraints exist, and what patterns to follow. Point to existing code as examples.
What to include:
- What are you trying to accomplish?
- What does the existing codebase look like?
- Are there patterns or files to reference?
- What constraints matter? (no new dependencies, keep it simple, etc.)
The payoff: Better context leads to code that fits your project from the start.
2. Plan Before Building
For anything non-trivial, discuss the approach before writing code.
The mistake: Diving straight into implementation. You end up with code that works but doesn't fit your architecture.
The principle: Have a conversation first:
- Describe what you want to achieve
- Ask for options or an approach
- Discuss trade-offs
- Agree on a plan
- Then implement
Why it matters: Rewriting is expensive. Planning catches problems when they're cheap to fix.
3. Work Incrementally
Break large tasks into smaller steps. Verify each step before moving on.
The mistake: Asking for an entire feature at once. You get a massive change that's hard to review, hard to debug, and might be wrong in ways you don't notice.
The principle:
- Request one piece at a time
- Test that piece
- Then move to the next
The benefit:
- Easier to review
- Easier to catch mistakes
- Easier to change direction
4. Verify, Don't Trust
AI can be confidently wrong. Always verify.
The mistake: Assuming the code works because it looks right. Merging without testing. Discovering the bug in production.
The principle:
- Run the code
- Check the build
- Run the tests
- Review the logic
- Test edge cases
What to watch for:
- Does it actually do what you asked?
- Does it follow your patterns?
- Are there obvious bugs or edge cases?
- Is the complexity appropriate?
5. Iterate and Refine
First attempts are rarely perfect. That's expected.
The mistake: Expecting perfect code on the first try. Accepting something that's close enough but not quite right.
The principle: Treat AI output as a starting point:
- "This works, but can you simplify it?"
- "The approach is good, but use X pattern instead"
- "This handles the happy path, but what about Y?"
The relationship: Think of it as pair programming. You guide, refine, and make decisions. Claude Code executes and suggests.
The Collaboration Workflow
┌─────────────────────────────────────────────────────────────┐
│                          PLANNING                           │
│   Discuss requirements, explore options, design approach    │
└─────────────────────────────────────────────────────────────┘
                               │
                               ▼
┌─────────────────────────────────────────────────────────────┐
│                       IMPLEMENTATION                        │
│   Write code incrementally, follow patterns, create tests   │
└─────────────────────────────────────────────────────────────┘
                               │
                               ▼
┌─────────────────────────────────────────────────────────────┐
│                           REVIEW                            │
│    Check for issues, verify behavior, refactor if needed    │
└─────────────────────────────────────────────────────────────┘
                               │
                               ▼
┌─────────────────────────────────────────────────────────────┐
│                          DOCUMENT                           │
│           Update notes, commit with clear message           │
└─────────────────────────────────────────────────────────────┘
Decision Framework
"When should I use AI assistance?"
Good fit:
- Understanding existing code ("How does this work?")
- Following established patterns ("Create another one like X")
- Routine tasks (boilerplate, tests, migrations)
- Exploration ("What are the options for X?")
- Bug investigation ("Why might this be failing?")
- Refactoring ("Simplify this while keeping the behavior")
Needs human judgment:
- Architecture decisions (AI can suggest, you decide)
- Security-sensitive code (always review carefully)
- Business logic validation (does this match what users need?)
- Performance requirements (AI doesn't know your constraints)
- UX decisions (AI doesn't see your users)
Not a good fit:
- Tasks requiring real-time information
- Decisions that need organizational context
- Creative work that must be distinctly human
"How do I give good instructions?"
Be specific about scope:
- Not: "Add a feature"
- Better: "Add a CSV export button to the keywords table that exports keyword, rank, and date columns"
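A well-scoped request like that maps directly to a small, reviewable piece of code. As an illustrative sketch only, here is what the core of that CSV export might look like; the `KeywordRow` shape and the column names are assumptions drawn from the example request, not a real API:

```typescript
// Hypothetical row shape for the keywords table in the example above
interface KeywordRow {
  keyword: string;
  rank: number;
  date: string; // ISO date, e.g. "2024-01-15"
}

// Build a CSV string with the three requested columns
function toCsv(rows: KeywordRow[]): string {
  const header = "keyword,rank,date";
  // Quote any field containing a comma, quote, or newline (per CSV convention)
  const escape = (value: string) =>
    /[",\n]/.test(value) ? `"${value.replace(/"/g, '""')}"` : value;
  const lines = rows.map((r) =>
    [escape(r.keyword), String(r.rank), r.date].join(",")
  );
  return [header, ...lines].join("\n");
}
```

Because the request named the exact columns and location, the output is small enough to review line by line, which is the point.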
Point to examples:
- "Follow the pattern in /api/users/route.ts"
- "Use the existing Button component from /components/ui"
State constraints:
- "No new dependencies"
- "Keep it under 100 lines"
- "Must work with existing auth"
Clarify what you want:
- "Just the implementation" vs "Explain your approach first"
- "Step by step, wait for confirmation" vs "Complete the whole thing"
"When should I ask for an explanation?"
Ask before implementation when:
- You're not sure of the best approach
- The task is complex or unfamiliar
- Multiple valid approaches exist
- Security or performance matters
Ask after implementation when:
- The code does something you don't understand
- You want to learn the pattern
- You're deciding whether to keep the approach
The value: Understanding the "why" helps you maintain the code later. Don't accept code you don't understand.
"How do I handle mistakes?"
When the output is wrong:
- Point out specifically what's wrong
- Explain what you expected instead
- Provide more context if needed
When you're stuck in a loop:
- Step back and reframe the problem
- Provide more concrete constraints
- Try a different approach entirely
When the direction is wrong:
- Stop and reconsider before more code is written
- It's cheaper to restart than to fix a bad foundation
Common Mistakes
Accepting Without Understanding
Taking code you don't understand because it seems to work.
Signs: You can't explain what the code does. You can't modify it confidently. Bugs are mysterious.
The fix: Ask for explanations. If you don't understand it, either learn it or simplify it.
Over-Scoping Requests
Asking for too much at once.
Signs: Massive changes that are hard to review. Errors you don't notice. Getting lost in the complexity.
The fix: Break it down. One feature at a time. One file at a time if needed.
Under-Specifying Intent
Being too vague about what you want.
Signs: Getting something different from what you expected. Lots of back-and-forth. Generic solutions that don't fit.
The fix: Be specific. Provide examples. State constraints. Point to existing patterns.
Skipping Verification
Merging without testing because the code looks right.
Signs: Bugs that basic testing would have caught. Broken builds. Features that don't work as expected.
The fix: Always run the code. Check the build. Run the tests. Try it yourself.
Fighting the Tool
Trying to make AI do something it's not good at.
Signs: Frustration. Repeated attempts with poor results. Feeling like AI is "dumb."
The fix: Recognize what AI does well (following patterns, generating boilerplate, explaining code) and what needs human judgment (design decisions, business logic, user experience).
How to Evaluate AI-Assisted Work
Your collaboration is working if:
- You understand all the code being written
- The code follows your existing patterns
- You're catching issues before they're merged
- You're spending less time on routine tasks
- You're learning from the explanations
Your collaboration needs adjustment if:
- You're accepting code you don't understand
- The code doesn't fit your project's style
- Bugs are slipping through
- You're doing extensive rewrites
- You're spending more time fixing AI output than writing yourself
What AI Does Well
| Task | Effectiveness |
|---|---|
| Understanding existing code | Excellent |
| Following established patterns | Excellent |
| Multi-file refactoring | Very Good |
| Bug investigation | Very Good |
| Writing new features (with guidance) | Very Good |
| Code review | Good |
| Architecture design | Good (with guidance) |
| Writing tests | Good |
What Requires Human Judgment
- Final architecture decisions
- Security-sensitive code review
- Business logic validation
- Production deployment decisions
- Performance requirements
- User experience decisions
- Organizational and political context
- Ethical considerations
Tips for Better Results
Do
- Provide context about the project
- Reference existing patterns
- Ask for explanations before implementations
- Iterate and refine solutions
- Verify changes work before moving on
- Break complex tasks into steps
Don't
- Ask to implement without understanding
- Accept first solution without review
- Skip testing
- Ignore warnings about potential issues
- Rush through complex changes
- Fight against the tool's limitations
Quick Reference
| I want to... | Approach |
|---|---|
| Plan a feature | "Help me plan X. What's the best approach given Y?" |
| Understand code | "Explain how X works" |
| Fix a bug | "Investigate X. What's the root cause?" |
| Refactor | "Simplify this while keeping the same behavior" |
| Review code | "Review for bugs, security, and performance" |
| Write tests | "Add tests for X covering edge cases" |
| Learn codebase | "Give me an overview of this project" |