March 23, 2026
Vibe Coding or Lucky Coding?
AI democratized execution. But it did not democratize judgment. Lucky code is just guesswork with a modern interface.
Vibe coding became a buzzword. And like every buzzword, it lost precision along the way. But the story starts before that.
When language models exploded onto the scene, prompt engineering emerged as a methodical approach to guiding AI with clarity, context, and intent. It was briefly considered one of the hottest professions of 2023. Two years later, it was practically obsolete.
The market did not evolve. It traded method for vibes.
In February 2025, Andrej Karpathy, OpenAI co-founder, coined the term vibe coding: "fully give in to the vibes, embrace exponentials, and forget that the code even exists." He meant it for quick, throwaway projects. The market turned it into a production methodology.
And created two profiles that almost nobody names.
There is what I call lucky code: you write a prompt, hope luck does the rest, and call the result productivity. It works sometimes. By chance. And when it stops working, nobody knows exactly why, so the next step is to try another random prompt and hope again.
And there is structured, professional use: before any prompt, you write a PRD, a Product Requirements Document, that defines the product's purpose, features, and behavior. That document guides the AI along a clear line of reasoning. When something needs adjusting, you update the requirements. You do not throw out another prompt and pray for a different miracle.
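For illustration only, a minimal PRD skeleton might look like the sketch below. The headings are an assumption, one common shape among many, not a canonical format; what matters is that purpose, behavior, and boundaries are written down before the first prompt.

```
PRD: <product name>

Purpose
  The problem, who has it, and why solving it matters. One paragraph.

Users and scenarios
  The primary user, the job they are trying to do, and what brings them in.

Features
  F1. <feature>: expected behavior, inputs, outputs, edge cases.
  F2. ...

Out of scope
  What this version deliberately does not do.

Success criteria
  How you will know it works: measurable behavior, not vibes.
```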
Interestingly, in February 2026, exactly one year later, Karpathy himself came back with a new term for that professional use: agentic engineering. "Agentic because you orchestrate agents, engineering to emphasize that there is art, science, and expertise in this."
The market went from prompt engineering to vibe coding and came back to engineering. Under a different name.
But there is a problem that predates any term.
Many people do not even start with an idea of their own. They do not research the problem. They do not validate whether there is real value in what they want to build. They do not test with real users. And then they have AI execute, using as a reference the "benchmarking" celebrated in corporate presentations, which in practice means copying the competitor pixel by pixel.
The original product you are replicating already has a full team working on the problems you cannot see from the interface. You are copying the facade and inheriting all the structural defects that team is already racing to fix.
AI democratized execution. But it did not democratize judgment.
Lucky code is just guesswork with a modern interface.