Avoiding AI Slop

How to get quality code from AI assistants instead of bloated, generic output

"AI slop" is the bloated, over-engineered, generic code that AI assistants produce when given vague prompts. It's syntactically correct but architecturally wrong—full of unnecessary abstractions, verbose comments, and patterns that don't fit your codebase. This guide covers how to get code you'd actually want to ship.

Core principle: AI assistants reflect the quality of your input. Vague prompts produce slop. Precise prompts produce precise code.

What AI Slop Looks Like

The Signs

| Symptom | Example |
| --- | --- |
| Over-abstraction | Creating a UserServiceFactory for a simple CRUD app |
| Verbose comments | A comment like // This function returns the user above getUser() |
| Unnecessary patterns | Abstract classes, interfaces for single implementations |
| Generic naming | DataProcessor, HelperUtils, ServiceManager |
| Kitchen-sink imports | Importing entire libraries for one function |
| Feature creep | Adding pagination, caching, retry logic you didn't ask for |
| Defensive overkill | Try-catch blocks around code that can't fail |
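
To make the contrast concrete, here is a hypothetical "fetch a user" in slop form next to what the task actually needed (the User type and endpoint are illustrative):

```typescript
type User = { id: string; name: string };

// Slop: an interface plus a factory for a single implementation
interface UserFetcher {
  fetch(id: string): Promise<User>;
}

class HttpUserFetcher implements UserFetcher {
  async fetch(id: string): Promise<User> {
    const res = await fetch(`/api/users/${id}`);
    return res.json();
  }
}

class UserFetcherFactory {
  static create(): UserFetcher {
    return new HttpUserFetcher();
  }
}

// What the task actually needed: one function
async function getUser(id: string): Promise<User> {
  const res = await fetch(`/api/users/${id}`);
  return res.json();
}
```

Both versions do the same thing; the first just costs three extra names to read, test, and maintain.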

Why It Happens

AI assistants optimize for:

  • Completeness: They try to handle every edge case
  • Safety: They add validation and error handling everywhere
  • Generality: They build for hypothetical future requirements
  • Impressiveness: More code looks like more work

None of these align with "simple code that solves the actual problem."

Principles

1. Context Is Everything

The mistake: Asking "write a function to fetch users" with no context.

The fix: Give the AI your actual environment.

What to include:

  • Existing patterns in your codebase
  • Libraries you're already using
  • Your naming conventions
  • The specific use case

How we do it: Our CLAUDE.md file tells Claude Code about our stack, patterns, and preferences before it writes anything.
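
As a sketch of the idea (a hypothetical excerpt, not our actual file), such a file can be as short as:

```markdown
# CLAUDE.md

## Stack
- Next.js App Router, TypeScript (strict), Supabase, Zod

## Conventions
- Follow existing patterns in the nearest sibling file
- Named exports only; no default exports
- No new dependencies without asking first

## Style
- Prefer plain functions over classes
- Comments explain why, never what
```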

2. Constrain the Solution Space

The mistake: Open-ended requests that invite over-engineering.

The fix: Be explicit about what you don't want.

Effective constraints:

  • "No new dependencies"
  • "Follow the pattern in [existing file]"
  • "Keep it under 50 lines"
  • "No abstractions—this is a one-off"
  • "Just the happy path for now"

3. Ask for Less, Not More

The mistake: "Build me a complete authentication system."

The fix: Break it into small, reviewable pieces.

Better approach:

  1. "Add a login form that calls /api/login"
  2. "Store the session token in a cookie"
  3. "Add a useAuth hook that checks login status"

Each piece is reviewable. You catch slop early.
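
For instance, step 3 might come back as something close to this sketch (the /api/session endpoint is an assumption; the prompts above don't specify how login status is checked):

```tsx
import { useEffect, useState } from "react";

export function useAuth() {
  // null while the check is still in flight
  const [isLoggedIn, setIsLoggedIn] = useState<boolean | null>(null);

  useEffect(() => {
    // Assumed endpoint: returns 200 when the session cookie is valid
    fetch("/api/session")
      .then((res) => setIsLoggedIn(res.ok))
      .catch(() => setIsLoggedIn(false));
  }, []);

  return { isLoggedIn };
}
```

A dozen lines you can read in one pass, rather than an auth system you have to audit.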

4. Review Like a Skeptic

The mistake: Accepting AI output because it runs.

The fix: Ask "would I write this?"

Questions to ask:

  • Is every line necessary?
  • Do I understand every abstraction?
  • Does this match our existing patterns?
  • Would a new team member understand this?
  • Is there a simpler way?

5. Refactor Immediately

The mistake: Planning to clean up AI code "later."

The fix: Simplify before committing.

Common simplifications:

  • Inline single-use functions
  • Remove unnecessary type definitions
  • Delete comments that restate the code
  • Flatten unnecessary nesting
  • Remove unused error handling
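
For example, two of these simplifications (inlining a single-use helper and deleting a comment that restates the code) applied to a hypothetical snippet:

```typescript
// Before: single-use helper, plus a comment that restates the code
/** Formats the price for display */
function formatPrice(price: number): string {
  return `$${price.toFixed(2)}`;
}

function renderCartTotal(total: number): string {
  // Format the total and return it
  return formatPrice(total);
}
```

```typescript
// After: helper inlined, restating comments deleted
function renderCartTotal(total: number): string {
  return `$${total.toFixed(2)}`;
}
```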

Decision Framework

When to accept AI output as-is

  • It matches your existing patterns exactly
  • It's simple and you understand every line
  • It solves the specific problem without extras
  • You would have written something similar

When to heavily edit AI output

  • It introduces new patterns or abstractions
  • It adds features you didn't request
  • It's longer than expected
  • The naming feels generic
  • You can't immediately explain what each part does

When to reject and re-prompt

  • It misunderstood the requirement
  • It used wrong libraries or patterns
  • The approach is fundamentally over-engineered
  • It would take longer to fix than to guide a rewrite

Prompting Techniques

Be Specific About Style

Vague: "Write a function to process user data"

Specific: "Write a function called formatUserForDisplay that takes a User object and returns { name: string, joinedDate: string }. Use date-fns format(). No error handling needed—input is always valid."

Reference Existing Code

Vague: "Add a new API endpoint"

Specific: "Add a POST /api/jobs endpoint following the same pattern as /api/users in app/api/users/route.ts. Use Zod for validation. Return 201 on success."

Specify What to Omit

Without constraints:

"Create a form component for user registration"
→ Gets: Loading states, error boundaries, accessibility attributes,
   form validation library, custom hooks, TypeScript generics

With constraints:

"Create a simple registration form with email/password fields.
No loading states. No validation library—just basic HTML required attributes.
No custom hooks. Keep it under 40 lines."
→ Gets: What you actually need
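
Under those constraints the result should be close to this sketch (the /api/register endpoint is a stand-in; the prompt doesn't name one):

```tsx
import type { FormEvent } from "react";

export function RegistrationForm() {
  async function handleSubmit(event: FormEvent<HTMLFormElement>) {
    event.preventDefault();
    const data = Object.fromEntries(new FormData(event.currentTarget));
    await fetch("/api/register", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(data),
    });
  }

  return (
    <form onSubmit={handleSubmit}>
      <label>
        Email
        <input name="email" type="email" required />
      </label>
      <label>
        Password
        <input name="password" type="password" required />
      </label>
      <button type="submit">Register</button>
    </form>
  );
}
```

Under 30 lines, well inside the budget the prompt set.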

Ask for Incremental Changes

Monolithic request: "Add dark mode support to the application"

Incremental approach:

  1. "Add a ThemeContext with 'light' and 'dark' values"
  2. "Add a toggle button in the header that switches theme"
  3. "Update the Button component to use theme-aware colors"

Each step is small enough to review properly.
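
Step 1 by itself might come back as a sketch like this (a minimal context with a toggle; persistence and system-preference detection are deliberately out of scope at this step):

```tsx
import { createContext, useContext, useState, type ReactNode } from "react";

type Theme = "light" | "dark";

const ThemeContext = createContext<{ theme: Theme; toggle: () => void }>({
  theme: "light",
  toggle: () => {},
});

export function ThemeProvider({ children }: { children: ReactNode }) {
  const [theme, setTheme] = useState<Theme>("light");
  const toggle = () => setTheme((t) => (t === "light" ? "dark" : "light"));
  return (
    <ThemeContext.Provider value={{ theme, toggle }}>
      {children}
    </ThemeContext.Provider>
  );
}

export const useTheme = () => useContext(ThemeContext);
```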

Common Mistakes

Trusting "best practices"

Signs: AI adds patterns "because it's best practice."

Reality: Best practices are context-dependent. A startup MVP doesn't need enterprise patterns.

Fix: Ask "best practice for what scale and context?" Challenge any abstraction.

Accepting verbose documentation

Signs: Every function has a JSDoc block restating what the code does.

Reality: Good code is self-documenting. Comments should explain why, not what.

Fix: Delete comments that don't add information. Keep comments that explain non-obvious decisions.
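
An illustrative pair (the code is made up):

```typescript
type Account = { isActive: boolean; emailVerified: boolean };

function activate(account: Account) {
  // "What" comment: restates the code. Delete it:
  // Set isActive to true
  account.isActive = true;
}

function activateVerified(account: Account) {
  // "Why" comment: explains a decision the code can't. Keep it:
  // Accounts stay inactive until the email is verified, because
  // downstream features assume a reachable address.
  account.isActive = account.emailVerified;
}
```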

Letting AI choose the architecture

Signs: AI introduces new patterns, file structures, or abstractions.

Reality: Architecture decisions should be deliberate, not emergent from AI suggestions.

Fix: Make architecture decisions yourself. Use AI for implementation within your architecture.

Not reading generated code

Signs: Copy-paste without reading. "It works" is the only review.

Reality: AI code often has subtle issues—wrong error handling, inefficient patterns, security gaps.

Fix: Read every line. Understand every line. If you can't explain it, simplify it.

Evaluation Checklist

Your AI workflow is healthy if:

  • You edit most AI output before committing
  • Generated code matches your existing patterns
  • You can explain every abstraction in the code
  • PRs with AI-generated code aren't larger than manual ones
  • New team members can understand AI-generated code

Your AI workflow needs work if:

  • You routinely accept AI output without changes
  • The codebase has inconsistent patterns
  • Some files feel "AI-written" compared to others
  • You're not sure what some AI-generated code does
  • Technical debt is accumulating faster than it did before you adopted AI tools

Quick Reference

Prompt Formula

[Specific task] + [Existing pattern reference] + [Constraints] + [What to omit]

Example: "Add a deleteJob function to lib/jobs.ts following the pattern of createJob. Use Supabase client. No soft delete—hard delete only. No confirmation dialog—that's handled by the caller."

Red Flags in AI Output

| See This | Do This |
| --- | --- |
| New abstraction | Ask: "Do I need this?" |
| // TODO comments | Remove or implement now |
| Generic names | Rename to domain-specific |
| Unused parameters | Remove them |
| Complex error handling | Simplify to actual failure modes |
| Multiple files for a simple feature | Consolidate |

The 30-Second Test

After AI generates code, ask:

  1. Can I explain this to a teammate in 30 seconds?
  2. Would I be comfortable debugging this at 2am?
  3. Does this match how we do things?

If any answer is "no," simplify before committing.