Adventures in Extreme Vibecoding
Or: how a Chinese restaurant shutdown, a pile of CSV files, and several hallucinating AIs wrecked my sleep schedule and quietly rewired how I think about building things
This story starts the way most bad ideas do.
With Chinese food.
For nearly a decade, Mainland Chinese Bistro in Coral Springs had been my default. The same handful of dishes I always ordered. Same faces. The kind of place that becomes invisible because it’s reliable. You don’t decide to go there. You end up there, hungry, on autopilot, trusting muscle memory.
Then one morning, the Sun Sentinel dropped a headline that felt like a sucker punch.
"Restaurant shut down for health violations"
The next day, just to really lean into it:
Closed again.
Twice. Consecutive days.
I did not react rationally. I reacted like someone who suddenly realizes that ten years of comfort food might have been quietly trying to kill him. Was this a paperwork issue? A rogue inspector? Or had I been eating in what was essentially a microbial casino?
Most people would shrug, switch restaurants, and move on.
I didn’t.
I did what any good Jewish kid from Queens does when something doesn’t smell right.
I went looking for a second opinion.
Florida’s Department of Business and Professional Regulation website is technically transparent in the same way dumping a filing cabinet down a staircase is “organized.”
Everything is public. Everything is available. Almost nothing is intelligible.
You’re greeted by acronyms that look like they were invented by someone actively hostile to human comprehension. PDFs nested inside forms nested inside other forms. Interfaces that feel less like software and more like an archaeological site from the early web.
It’s a system designed to comply with the law, not to inform people.
And then, buried in a corner like it didn’t want attention, I saw it.
A tiny link.
Plain text.
Download data as CSV.
I stopped scrolling.
“You mean,” I said out loud to an empty room, “the entire inspection history for the whole state is just sitting here?”
And that’s when my brain — already caffeinated beyond reason — produced the thought that derailed the next several weeks of my life.
What if I feed this to an LLM?
Here’s the part where I need to clarify something before this goes any further.
I am not a software engineer.
I’ve spent decades around engineering — enterprise systems, architecture, documentation, developer ecosystems, large-scale platforms. These days, my work sits at the intersection of documentation, systems thinking, and AI-enabled infrastructure at a large technology company in the AI hardware space.
But Python?
Pandas?
Modern ETL pipelines?
My first honest question to the AI was:
“What the hell is a DataFrame?”
If it had emotions, it would have sighed, pulled up a chair, and told me we were going to need to start from first principles.
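For anyone who shared my confusion: a DataFrame is just pandas’ version of a table, rows and columns with names. Something like this toy example, where the columns are made up for illustration and are not the real DBPR export schema:

```python
import pandas as pd

# A DataFrame is a labeled table: rows of records, named columns.
# These column names are illustrative, not the actual state schema.
inspections = pd.DataFrame({
    "restaurant": ["Mainland Chinese Bistro", "Some Other Place"],
    "county": ["Broward", "Broward"],
    "inspection_date": pd.to_datetime(["2024-03-11", "2024-03-12"]),
    "high_priority_violations": [3, 0],
})

# Filtering reads almost like a sentence once you get used to it.
risky = inspections[inspections["high_priority_violations"] > 0]
print(risky)
```

That one idea, a table you can ask questions of, is most of what I actually needed.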
Within days, I wasn’t coding.
I was vibecoding.
Vibecoding is what happens when you don’t actually know how to build something, but you know what the outcome should look like, and you refuse to stop asking questions until the shape emerges.
You ask an AI to write code.
Then you ask it to explain what it just wrote.
Then you realize it hallucinated half the logic.
Then you fix it.
Then it breaks something else.
Then you repeat the cycle until your sense of time dissolves.
It’s not elegant. It’s not clean. It is stubborn, iterative, mildly deranged, and fueled by caffeine and irritation.
It is also shockingly effective.
Not because it’s good engineering — it often isn’t — but because it keeps momentum alive. You are never blocked. You are never staring at a blank editor wondering where to start. You are always reacting, correcting, steering.
It feels less like programming and more like arguing with a very talented, very forgetful collaborator.
I made the mistake early on of assuming one AI would be enough.
It wasn’t.
One of them was fast, bold, and wildly overconfident. It wrote code like it was late for a meeting. It would happily invent entire directory structures, rewrite files without asking, and then act confused when I wasn’t grateful.
Another was steadier. More deliberate. Better at seeing the shape of a system. It could reason about architecture, tradeoffs, and structure. But the longer I worked with it, the more it began to drift. Things it understood perfectly at noon became fuzzy by dinner. By late night, it would ask questions it had already answered hours earlier.
The third one was the one I should have consulted first.
Calmer. Slower. Less interested in showing off. The kind that looks at a mess you’ve already made and says, gently, “Yes, this is fixable — but we need to talk about how you got here.”
Naturally, I ignored it until I was deep in the hole.
Over the next several weeks, what started as a single question metastasized into something far larger than I had any business building.
Ingestion pipelines to pull in inspection data, with normalization logic to account for Florida’s unique geographic relationships. Caching layers to avoid reprocessing millions of records. Risk scoring models to separate noise from actual danger.
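In spirit, the ingestion step looked roughly like the sketch below. This is a minimal illustration, not the actual pipeline; the file paths and column names are assumptions, and the caching is just “don’t reparse the giant CSV if a normalized copy already exists.”

```python
import pandas as pd
from pathlib import Path

RAW_CSV = Path("data/dbpr_inspections.csv")   # hypothetical path to the state CSV export
CACHE = Path("data/inspections.parquet")      # cached, normalized copy

def load_inspections() -> pd.DataFrame:
    """Load the raw export once, normalize it, and cache the result."""
    if CACHE.exists():
        # Skip reprocessing millions of rows on every run.
        return pd.read_parquet(CACHE)

    df = pd.read_csv(RAW_CSV, low_memory=False)

    # Normalize the fields everything else joins on (illustrative column names).
    df["county"] = df["county"].str.strip().str.title()
    df["license_number"] = df["license_number"].astype(str).str.zfill(7)
    df["inspection_date"] = pd.to_datetime(df["inspection_date"], errors="coerce")

    df.to_parquet(CACHE, index=False)
    return df
```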
Then came the hard part: ownership.
Florida doesn’t just make inspection data opaque. It deliberately makes it difficult to trace who owns what. Corporate registries are massive, inconsistent, and hostile to clean joins. Names don’t match. Addresses drift. Entities splinter and recombine.
So I built reconciliation logic. Fuzzy matching. Semantic fallbacks. Probabilistic ownership resolution that worked not because it was perfect, but because it was good enough at scale.
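A toy version of the fuzzy-matching idea, using only the standard library; the name-cleaning rules and the threshold here are illustrative, not the ones the project actually settled on.

```python
import re
from difflib import SequenceMatcher

def normalize_name(name: str) -> str:
    """Strip corporate suffixes and punctuation so 'ACME FOODS, LLC' ~ 'Acme Foods Inc.'"""
    name = name.lower()
    name = re.sub(r"\b(llc|inc|corp|co|ltd)\b\.?", "", name)
    return re.sub(r"[^a-z0-9 ]", " ", name).strip()

def owner_match_score(a: str, b: str) -> float:
    """Similarity between two owner names after normalization, from 0.0 to 1.0."""
    return SequenceMatcher(None, normalize_name(a), normalize_name(b)).ratio()

# Illustrative threshold: high enough to avoid obvious false joins,
# low enough to survive typos and drifting spellings.
if owner_match_score("ACME FOODS, LLC", "Acme Foods Inc.") > 0.85:
    print("probable same owner")
```

The real version layered more fallbacks on top, but the principle was the same: accept a probabilistic answer that is good enough at scale instead of chasing a perfect join that doesn’t exist.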
None of this happened in a straight line.
It happened in late-night bursts, punctuated by moments of clarity, followed by long stretches where the AI would slowly lose the plot.
And this is the part no one really talks about.
LLMs don’t just fail loudly.
They degrade quietly.
They start the session sharp. Confident. Helpful. Then, slowly, almost imperceptibly, they begin to forget what matters. Assumptions loosen. Invariants slip. Context windows slosh around like coffee in a moving car.
By hour eight, you’re no longer collaborating with a genius.
You’re babysitting one.
That’s when you realize something uncomfortable.
This isn’t programming.
It’s caretaking.
You stop trusting any single answer. You start cross-checking models the way doctors get second opinions. You reset conversations the way surgeons scrub in again, because carrying state too long is how things go septic.
None of this shows up in the demos.
All of it matters in real work.
And then, one night, against all odds, it ran.
End to end.
Cleanly.
The pipeline chewed through the entire dataset. Millions of rows. No errors. No explosions. Just quiet, relentless progress.
It was 3:58 AM.
I sat there staring at the terminal like someone who didn’t quite trust what he was seeing.
My wife wandered in, half asleep.
“You’re still doing that restaurant thing?”
I whispered, like someone witnessing a small miracle:
“It worked.”
She paused.
“That’s nice, honey. Did you eat dinner?”
What came out the other side wasn’t just an answer about one restaurant.
It was a statewide, transparent, self-updating view of inspection data that had always been public but never usable. The kind of data news outlets reduce to splashy “Dirty Dining” segments with no nuance. The kind of reporting that scares people without actually informing them.
Now it was readable. Contextual. Fair to restaurants and genuinely useful to diners.
All because someone panicked over Ma Po Tofu.
There’s a lesson buried in all of this, but it’s not the one people usually want.
This isn’t about LLMs replacing developers. They don’t. Not even close.
It’s about acceleration.
It’s about what happens when someone who understands systems — but hasn’t written serious code in years — refuses to stop asking questions. When the barrier to experimentation drops low enough that curiosity becomes the limiting factor, not credentials.
You don’t need to be a great programmer to build something meaningful anymore.
You need persistence. Guardrails. A willingness to reset when things go wrong. And the humility to recognize that the AI is not your brain — it’s a very powerful, very forgetful collaborator.
I didn’t expect this project to unlock anything personal.
But it did.
It reminded me that creativity doesn’t age out. Curiosity doesn’t expire. Capability doesn’t vanish just because you haven’t flexed it in a while.
Sometimes innovation doesn’t start in a lab or a roadmap.
Sometimes it starts with a person staring at a CSV export at two in the morning, whispering to their laptop:
“Buddy… code me a parser.”
Disclosure:
This article is written in a personal capacity and reflects my own experiences and opinions. It does not represent the views of any employer or organization.
Appendix: Notes from the Vibecoding Period
This appendix exists for readers who finished the main essay and felt the urge to lean back in their chair and say, “Yes. That. But tell me more.”
These are not instructions. They’re not recommendations. They’re not best practices. They’re observations made in the middle of the work, written down before hindsight had a chance to sand off the sharp edges.
These notes are here because some things felt worth writing down before I forgot how they actually felt.
A. What Broke First
The first thing that failed was not accuracy.
It was continuity.
Early on, the systems felt almost magical. Ask a question, get a plausible answer. Push a little harder, get something surprisingly sophisticated. It’s easy, in that phase, to assume you’re dealing with something stable.
You’re not.
What breaks first is state. The model slowly loses track of what matters. Not dramatically. Not in a way that throws errors. It just drifts. A variable name that mattered stops mattering. A constraint that was explicit becomes optional. A file structure that was once sacred turns into a suggestion.
That’s what makes it dangerous. Loud failures are easy to spot. Quiet degradation isn’t.
The most treacherous hallucinations weren’t invented facts. They were invented structure. New directories that felt reasonable. Extra layers that looked tidy. “Helpful” refactors that rewrote assumptions you didn’t realize you were relying on.
Nothing crashed. Nothing screamed. Things just got subtly wrong.
That’s when you realize you’re not debugging logic anymore. You’re defending invariants.
B. Why Vibecoding Felt Addictive
There’s a reason people lose entire weekends to this kind of work.
You’re never blocked.
There’s always something to respond to. A suggestion to refine. A function to tweak. A clarification to ask for. Traditional coding has long stretches of friction where nothing moves. Vibecoding replaces that with constant motion.
Progress feels continuous, even when it’s illusory.
Every interaction creates the sense that you’re one prompt away from clarity. One explanation away from understanding. One refactor away from elegance. That’s a powerful psychological loop, especially for people who like building systems but haven’t had the time, energy, or emotional space to do it hands-on in years.
Stopping feels harder than starting.
Not because the work is so important, but because the system always feels almost there.
That “almost” is the hook.
C. The Caretaking Shift
There was a clear point where the work changed character.
Early on, the job felt like creation. Writing. Assembling. Exploring.
Later, it felt like supervision.
At some point, I stopped asking, “What should this do?” and started asking, “What must not change?” The effort moved away from generating code and toward protecting the fragile mental model that had formed around the system.
That’s when it became obvious that the primary labor wasn’t typing or prompting. It was caretaking.
Guarding assumptions. Preserving naming consistency. Preventing the quiet erosion of decisions made hours earlier. The AI wasn’t an author anymore. It was a very capable junior collaborator who needed constant context and firm boundaries.
This is emotionally different from traditional debugging. You’re not fixing a mistake someone made. You’re preventing a mistake someone might make if left alone too long.
That subtle shift is exhausting in a way I didn’t expect.
D. What This Was Not
It’s important to be explicit about what this experience doesn’t represent.
This was not automation.
It was not replacement.
It was not speed without cost.
It was not “anyone can do anything instantly now.”
Nothing about this removed responsibility. If anything, it concentrated it.
The AI didn’t make decisions disappear. It surfaced them faster and more often. It didn’t absolve me of understanding the system. It punished me when I failed to.
If you walked away from this thinking the machine did the work, you missed the point.
The machine accelerated the conversation. The human still owned the outcome.
E. What I Deliberately Left Out
There are parts of this story that don’t belong here.
Not because they’re secret, but because they require distance that doesn’t exist yet. Some observations only make sense once the emotional charge is gone. Others would pull the narrative forward in time in ways that distort what was actually known then.
There are also things that would turn this from a record into an argument, or from an essay into a guide.
That was never the goal.
Closing Note
If you’ve made it this far, you probably recognize pieces of yourself in this story.
That’s not an accident.
Periods like this don’t last long. The tools stabilize. The workflows harden. The stories get cleaned up. What remains later is usually a simplified version that forgets how improvisational and fragile it actually felt.
These notes exist to keep a little of that texture intact.