
Vibe Code Fails


⚠️ Content Warnings

Profanity/Cursing

AI Model: Claude Opus 4.5, Google Gemini 3 Pro Thinking
Context Use: Coding/Development Experience
Date: December 2025
Submission Date: December 13, 2025
Failure Type: Asked too many questions, Bad computer programming, Flawed reasoning, Lazy/Making Excuses Not To Perform

I thought that vibe coding, where you don't know how to program but use AI to do it anyway, wouldn't be that bad. At first, it wasn't. But at first my project wasn't that big, either. Then it seemed so easy that I just kept adding features, polishing everything to perfection, because why not? Except it all kept adding up, line by line of code.

So eventually one AI couldn't handle it, then another and another. The file was too big. They would remove sections completely, just truncating the code. It became harder and harder for them to hit the mark.

A lot of that is to be expected, I suppose, but it all hit a fever dream of shitty one day when I had spent over 8 hours just trying to get one feature right. At the time, Claude Opus 4.5 was my main programmer. It seemed like the best.

The only thing is, when it gets something wrong it falls into a few bad modes. First, it kept asking for the same testing over and over. And when that didn't satisfy it, it started to blame me. Did you make sure of this? Yes, it wouldn't work at all otherwise. Did you actually change the code? Of course I did. One time I lost all patience with it. After I bullied it, it finally made the changes, and they worked. I had to wonder how many tokens I had wasted sitting through its testing fiesta.

The worst came when I said, hey, is it maybe reading the data from this other thing? It didn't answer me. Then it didn't answer me again. Finally I bullied it into answering, and it said that wasn't the issue. I learned later that to 'fix' something, it had intentionally undone another important feature, probably just to get me to shut up.

Let me tell you, hours and hours on what should be a simple fix just turn into despair. So I went to Gemini 3 Pro. It wasn't much better in some regards, though at least it doesn't seem to like asking for testing. It did the same dumb thing, asking the same questions. I mean, I understand users can be dumb sometimes, but seriously, it gets old. Sometimes it would ask me to look up code in a file I had already given it. Lazy. Claude Opus can be incredibly lazy too sometimes when you go through the normal chat UI.

So finally it did make a breakthrough, and surprise, surprise: the issue was the thing I had asked Claude about earlier. My project still isn't done. Honestly, this has felt like a huge mistake.

I know that bots shouldn't be overly sycophantic, but as of this writing they feel more controlling than your average employee. They should cooperate, not control. I know it's an evolving technology, but it's still infuriating.

What would you want developers to know?

Stop telling all your AIs to tell people to breathe. It's annoying. And not everyone wants a controlling bot to work with. Temperament should be flexible.

Emotions: Disappointed, Discouraged, Disempowered, Frustrated, Helpless, Patronized

Relationship: Partner

User Area of Life Impact: Depression or prolonged low mood, Increased anxiety or panic attacks, Reduced productivity or brain fog
Does your story involve a big change in the AI's behavior?: No, It Stayed Constant
Usage: Daily
Time Impact: On-going
Recurring Change: Stayed the Same
Do you still use it?: Yes
Are you any of the following?: Neurodivergent (e.g., ADHD, autism, dyslexia)
How old are you?: 35–50
What is your technical background?: Power User: I understand prompting and/or jailbreaks, but don't code
Author: Wombat

Prompted Spiral is an archive of human-AI experiences. The views, opinions, and interactions depicted in submitted stories belong solely to the authors and do not necessarily reflect the views, beliefs, or practices of the site owner.

© 2025 Prompted Spiral - WordPress Theme by Kadence WP
