
Vibe Coding Is Real and It's Destroying Your Codebase

Vaibhav Verma
7 min read
ai · vibe-coding · code-quality · developer-habits · engineering-culture · best-practices

Andrej Karpathy coined the term "vibe coding" in early 2025. It describes a development style where you prompt an AI, glance at the output, and accept it if it "feels right." No deep review. No testing. Just vibes.

I laughed when I first heard the term. Six months later, I found myself doing exactly this on a Friday afternoon. The code worked. The PR got approved. And three weeks later, that code caused a production incident that took 4 hours to resolve.

Vibe coding is seductive because it works 80% of the time. That other 20% is where your codebase dies.

What Vibe Coding Actually Looks Like

It's not about being lazy. It's about a workflow that subtly encourages shallow engagement with generated code.

The vibe coding loop:

  1. Write a prompt describing what you want
  2. AI generates 50-100 lines of code
  3. You scan it for about 30 seconds
  4. It "looks right" so you accept it
  5. You run the app, it works
  6. You commit and move on

Compare that to the deliberate coding loop:

  1. Write a prompt describing what you want
  2. AI generates 50-100 lines of code
  3. You read each function and understand the logic
  4. You check it against existing patterns in the codebase
  5. You test edge cases manually
  6. You modify what doesn't fit
  7. You commit with confidence

The difference between these loops is about 15 minutes per AI interaction. Over a day of heavy AI usage (20-30 interactions), that's 5-7 extra hours. Most developers choose the fast loop because they're under pressure to ship.

The Three Stages of Vibe Coding Damage

Stage 1: Subtle Inconsistencies (Month 1-3)

In this stage, the code works but doesn't match your team's conventions. Different error handling patterns appear. Import styles vary. Some files use async/await, others use .then() chains. Nobody notices because each file is internally consistent.
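
A hypothetical sketch of what this stage looks like: two functions doing nearly the same job, written weeks apart, each internally consistent, each following a different convention. (The endpoints and names are invented for illustration.)

typescript
// orders.ts -- async/await, status checked, errors thrown to a shared handler
export async function getOrder(id: string) {
  const res = await fetch(`/api/orders/${id}`);
  if (!res.ok) throw new Error(`Order fetch failed: ${res.status}`);
  return res.json();
}

// invoices.ts -- .then() chains, errors swallowed and logged locally
export function getInvoice(id: string) {
  return fetch(`/api/invoices/${id}`)
    .then((res) => res.json())
    .catch((err) => {
      console.error("invoice fetch failed", err);
      return null;
    });
}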

Detection signal: New developers start asking "which pattern should I follow?"

Stage 2: Architectural Drift (Month 3-6)

AI-generated code starts introducing mini-architectures within your architecture. You'll find a file that implements its own caching layer because the AI didn't know your app already has one. Or a utility function that reimplements something from your shared library.
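
Here's a sketch of the kind of mini-architecture that shows up (the endpoint, cache shape, and TTL are invented for illustration):

typescript
// A module with its own ad-hoc in-memory cache, written because the prompt
// never mentioned the app's existing shared cache layer.
const priceCache = new Map<string, { value: number; expiresAt: number }>();

export async function getProductPrice(productId: string): Promise<number> {
  const cached = priceCache.get(productId);
  if (cached && cached.expiresAt > Date.now()) return cached.value;

  const res = await fetch(`/api/products/${productId}/price`);
  const { price } = await res.json();

  // 60-second TTL, invisible to the app's real cache invalidation logic
  priceCache.set(productId, { value: price, expiresAt: Date.now() + 60_000 });
  return price;
}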

Detection signal: Bundle size grows without corresponding feature growth. Duplicated logic appears in code search.

Stage 3: Unmaintainable Complexity (Month 6+)

By this point, the codebase has multiple competing patterns, redundant abstractions, and code that nobody fully understands. Changing one thing breaks unexpected things. Debugging takes 3x longer because you're tracing through unfamiliar patterns.

Detection signal: Sprint velocity drops despite shipping fewer features. Bug count increases.

Real Examples from Real Codebases

Here's a file I found during an audit. It was vibe-coded and nobody caught the issues for 4 months:

typescript
// This looks fine on first glance
export async function syncUserData(userId: string) {
  const user = await fetch(`/api/users/${userId}`);
  const userData = await user.json();

  const preferences = await fetch(`/api/users/${userId}/preferences`);
  const prefData = await preferences.json();

  const notifications = await fetch(`/api/users/${userId}/notifications`);
  const notifData = await notifications.json();

  await Promise.all([
    db.user.update({ where: { id: userId }, data: userData }),
    db.preferences.upsert({
      where: { userId },
      create: { userId, ...prefData },
      update: prefData,
    }),
    db.notifications.updateMany({
      where: { userId },
      data: { read: true },
    }),
  ]);

  return { user: userData, preferences: prefData, notifications: notifData };
}

Problems a deliberate review would have caught:

  1. Three sequential HTTP requests that could be parallelized
  2. No error handling on any fetch call
  3. No response status checking (user.ok?)
  4. The notification update marks ALL as read, not just the fetched ones
  5. Using fetch instead of the team's API client that handles auth tokens
  6. The function is in a file called utils.ts (it's business logic, not a utility)

Every one of these is a "vibe coding" artifact. The code ran. The tests (also vibe-coded) passed. Nobody looked closely because it was clean and readable.
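
For contrast, here's a sketch of what a deliberate pass might have produced. The apiClient import, its path, and the Notification shape are assumptions standing in for whatever this codebase actually provides; the point is the shape of the fixes, not the exact API.

typescript
// A possible deliberate rewrite. apiClient and its get() method are hypothetical
// stand-ins for the team's authenticated HTTP client; db matches the original.
import { apiClient } from "../lib/api-client";
import { db } from "../lib/db";

interface Notification {
  id: string;
}

export async function syncUserData(userId: string) {
  // Independent requests run in parallel; apiClient checks response status and
  // attaches auth tokens, so a failed request rejects instead of parsing garbage.
  const [userData, prefData, notifData] = await Promise.all([
    apiClient.get(`/users/${userId}`),
    apiClient.get(`/users/${userId}/preferences`),
    apiClient.get(`/users/${userId}/notifications`),
  ]);

  await Promise.all([
    db.user.update({ where: { id: userId }, data: userData }),
    db.preferences.upsert({
      where: { userId },
      create: { userId, ...prefData },
      update: prefData,
    }),
    // Only mark the notifications we actually fetched as read, not every row.
    db.notifications.updateMany({
      where: { userId, id: { in: notifData.map((n: Notification) => n.id) } },
      data: { read: true },
    }),
  ]);

  return { user: userData, preferences: prefData, notifications: notifData };
}

And it would live in a file like user-sync.ts, not utils.ts.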

The Vibe Coding Assessment

Score your team on each item (1 = always, 5 = never):

  • Accepting AI code after a quick visual scan: __
  • Running the app as the only form of testing: __
  • Not checking AI code against existing patterns: __
  • Committing AI code without modifying it: __
  • Generating tests with the same AI session as the code: __
  • Not reading AI-generated code line by line: __

Scoring:

  • 6-12: Your team has a vibe coding problem. Immediate intervention needed.
  • 13-20: Some vibe coding habits. Address before they become cultural.
  • 21-30: Your team is deliberate about AI code. Keep it up.

Breaking the Vibe Coding Habit

The 5-Minute Rule

After AI generates code, set a timer for 5 minutes. Spend that time reading every line and asking "why this approach?" If you can't answer for any section, you don't understand it well enough to ship it.

The Modification Requirement

Require at least one meaningful modification to every AI-generated block before accepting it. Not a variable rename. A structural change that proves you understood the code. If you can't find anything to improve, you probably didn't look hard enough.

The Explanation Test

Before committing, explain the AI-generated code to yourself (or a rubber duck) without looking at it. If you can't, you're vibe coding.

The Pattern Check

Before accepting AI code, open one similar file in your codebase and diff the approaches mentally. If they differ, the AI code needs to be modified to match your conventions.

The Contrarian Take

Here's what most people get wrong about vibe coding: the solution isn't to ban AI tools or add more process. It's to change the definition of done.

Right now, "done" means "it works." Change it to "it works and I can explain why every line exists." That single shift eliminates vibe coding without slowing down the team significantly.

Vibe coding feels productive. It feels like you're shipping fast. But you're actually borrowing against your future self's time. And that loan comes due with interest.
