It’s kind of fun to watch many AI skeptics move from pure, unbridled skepticism about the usefulness of AI tools like Claude Code to becoming, if not outright boosters, at least genuine admirers of them.
And that's just one example. Across many such cases, the pattern I keep seeing is the same: people often have a poor mental model for judging these tools.
First, a lot of people simply regurgitate other people’s opinions: that these tools are bad, that they hallucinate, that they are “stochastic parrots,” or whatever other cliché they picked up from a tweet, a video, or somewhere else.
Second, many people don’t actually use the latest cutting-edge tools and models. They either stick to the web interfaces, which are good but still limited in what they can do, or they use older models and then jump to sweeping conclusions.
But until people actually use the latest tools, like Claude Code, OpenAI Codex, or Cursor, they won’t really understand how far these systems have come. They won’t see just how remarkable these tools have become, how good they are at helping people solve not just mundane problems but genuinely complicated ones, and how much they can enhance human capability.
> 4w ago I was a Claude Code skeptic. I’m not a coder. None of the use cases were relevant. I managed teams & projects, drowning in email & overdue reminders. So I tried creating tools that would help me and… holy crap.
>
> Now I’m sharing the tools I built: claudeblattman.com