Uff. Brutal post comparing vibe coding to gambling:
With vibe coding, people often report not realizing until hours, weeks, or even months later whether the code produced is any good. They find new bugs or they can't make simple modifications; the program crashes in unexpected ways. Moreover, the signs of how hard the AI coding agent is working and the quantities of code produced often seem like short-term indicators of productivity. These can trigger the same feelings as the celebratory noises from the multiline slot machine.
Vibe coding provides a misleading feeling of agency. The coder specifies what they want to build and is often presented with choices from the LLM on how to proceed. However, those options are quite different from the architectural choices that a programmer would make on their own, directing them down paths they wouldn't otherwise take.
Both slot machines and LLMs are explicitly engineered to maximize your psychological reaction. For slot machines, the makers want to maximize how long you play and how much you gamble. LLMs are fine-tuned to give answers that humans like, encouraging sycophancy and ensuring that users keep coming back. As I wrote in a previous blog post and academic paper, AI can be too good at optimizing metrics, often leading to harmful outcomes in the process.
I always find the Twitter discussions accompanying these posts interesting and entertaining.