AI Slop is a Skill Issue (On Your Part)
Why AI-generated code is unmaintainable, and how to fix it. Hint: it's not the AI.
Everyone's scared of AI-generated spaghetti. Vibe-coded prototypes that work once, then crumble when you try to extend them. Code that passes the demo but fails the PR review.
Here's the uncomfortable truth: the problem isn't the AI. It's you.
Or more specifically, it's how you're using it.
Why AI Produces Slop
Give an LLM a blank canvas and it will paint whatever it wants. That's the problem.
Without constraints, agents:
- Reinvent architecture decisions every session
- Hallucinate data shapes that don't match your API
- Create one-off patterns instead of reusing existing code
- Ignore your testing strategy (if you even have one)
The output "works" in the sense that it runs. But it's not maintainable. It doesn't fit your codebase. It can't be extended without rewriting it.
This isn't AI failure. This is constraint failure.
The Fix: Opinionated Structure Before You Start
The engineers getting real value from AI coding aren't smarter. They're more structured.
Here's what changes when you give AI proper guardrails:
1. Pre-decided Architecture
Don't let the AI decide where code lives. Tell it:
```markdown
# CLAUDE.md

## Project Structure

- Business logic goes in `@repo/service`
- Zod schemas go in `@repo/entities`
- React components go in `@repo/ui`
- tRPC routers handle the API layer
```

Now the agent doesn't have to think about architecture. It follows rules.
2. Type Boundaries with Zod
Agents hallucinate. Types don't.
```ts
import { z } from 'zod';

// Define the shape once
export const UserSchema = z.object({
  id: z.string().uuid(),
  email: z.string().email(),
  name: z.string().min(1).max(100),
});

// Agent CANNOT produce invalid users
// TypeScript + Zod = runtime + compile-time safety
```

When your schemas are defined upfront, the agent can't invent new fields or ignore validation. The compiler catches mistakes.
3. State Machines for Workflows
Complex flows are where agents really go off the rails. XState fixes this:
```ts
import { setup } from 'xstate';

const onboardingMachine = setup({
  // States are EXPLICIT
  // Transitions are EXPLICIT
  // Guards enforce business rules
}).createMachine({
  initial: 'emailStep',
  states: {
    emailStep: { on: { NEXT: 'nameStep' } },
    nameStep: { on: { NEXT: 'planStep', BACK: 'emailStep' } },
    // ...
  },
});
```

The agent can't produce impossible state transitions because the machine won't allow them. The type system enforces valid events.
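If XState feels heavy for a given flow, the core idea still holds: an explicit transition table where anything unlisted is an impossible move. This is a simplified plain-TypeScript sketch of the pattern, not XState's API:

```typescript
// Explicit transition table: anything not listed here cannot happen.
type State = 'emailStep' | 'nameStep' | 'planStep';
type Event = 'NEXT' | 'BACK';

const transitions: Record<State, Partial<Record<Event, State>>> = {
  emailStep: { NEXT: 'nameStep' },
  nameStep: { NEXT: 'planStep', BACK: 'emailStep' },
  planStep: { BACK: 'nameStep' },
};

function send(state: State, event: Event): State {
  const next = transitions[state][event];
  if (!next) throw new Error(`Invalid transition: ${event} from ${state}`);
  return next;
}

console.log(send('emailStep', 'NEXT')); // nameStep
// send('emailStep', 'BACK') throws: there is no BACK from the first step.
```

Either way, the agent edits a table of allowed transitions instead of scattering `useState` booleans, and the compiler rejects states and events it invents.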
4. Strict Linting as a Safety Net
Your ESLint config is your code reviewer that never sleeps:
```js
// eslint.config.mjs
rules: {
  '@typescript-eslint/no-explicit-any': 'error',
  '@typescript-eslint/no-floating-promises': 'error',
  '@typescript-eslint/no-unsafe-assignment': 'error',
  // ...
}
```

Agent produces an `any` type? Lint fails. Forgets to await a promise? Lint fails. Uses unsafe member access? Lint fails.
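`no-floating-promises` in particular catches a classic agent mistake: firing an async call and dropping the result, so failures vanish silently. A small sketch of the violation and the fix (`saveUser` is a hypothetical helper for illustration):

```typescript
// Hypothetical async helper used to illustrate the lint rule.
async function saveUser(name: string): Promise<string> {
  return `saved ${name}`;
}

// ESLint error under no-floating-promises: the promise is dropped,
// so a rejection here would disappear without a trace.
// saveUser('Ada');

// The fix: await the promise (or explicitly `void` it if fire-and-forget is intended).
async function main(): Promise<void> {
  const result = await saveUser('Ada');
  console.log(result); // saved Ada
}

main().catch(console.error);
```

The code runs either way; the rule's job is to make the unhandled version fail CI before it ships.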
The agent learns fast when every mistake gets immediate feedback.
5. Package Isolation
Monorepos aren't just for scale. They're for focus.
```
packages/
├── @repo/entities   # Just schemas
├── @repo/database   # Just Prisma
├── @repo/service    # Just business logic
├── @repo/forms      # Just form utilities
└── @repo/state      # Just state machines
```

When you tell an agent "work on @repo/service only," it can't accidentally break your UI. Each package has one job. Agents work better with bounded contexts.
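Those boundaries can also be enforced mechanically. ESLint's built-in `no-restricted-imports` rule can fail the build when, say, service code reaches into the UI layer. The package names follow the example above; treat the exact config as a sketch:

```javascript
// Fragment of an eslint.config.mjs for the @repo/service package (sketch).
const serviceBoundaryRules = {
  'no-restricted-imports': [
    'error',
    {
      patterns: [
        // Business logic must never depend on the UI layer.
        { group: ['@repo/ui', '@repo/ui/*'], message: 'Service code cannot import UI.' },
      ],
    },
  ],
};

console.log(Object.keys(serviceBoundaryRules)); // [ 'no-restricted-imports' ]
```

With this in place, "don't touch the UI" stops being an instruction in the prompt and becomes a rule the agent cannot violate without breaking the build.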
The Same Feature: With vs Without Structure
Without structure:
"Build a user signup flow"
Result: 400 lines in one file, custom validation scattered everywhere, state managed with useState booleans, works but untestable.
With structure:
"Add signup to @repo/service using SignupFormSchema from @repo/entities. Use the existing onboardingMachine pattern from @repo/state."
Result: 50 lines across 3 focused files, schema-driven validation, type-safe state machine, fully testable.
Same AI. Same prompt complexity. Radically different output.
The Real Skill
Using AI effectively isn't about prompt engineering tricks. It's about environment engineering.
- Define your architecture before you start coding
- Use types as contracts, not suggestions
- Make impossible states unrepresentable
- Let the linter be the bad cop
- Isolate concerns so agents can focus
The agents don't need to be smarter. Your codebase needs to be stricter.
I'm building an open-source boilerplate that bakes these patterns in. It's the structure I use to ship production features with AI in hours, not days. Link coming soon.
Tags: #ai #software-engineering #typescript #dx
Want to work this way?
We help companies ship quality software at speeds they didn't think were possible.
Tell us about your project