Bhuvan's law
I was listening to a conversation between Jack Clark, co-founder of Anthropic, and Tyler Cowen. It was an interesting conversation with some moments that made me go, “Ooh, that’s interesting,” and a lot of vague, hand-wavy, speculative predictions that are unknowable, unprovable, and hard to mentally digest. In Jack’s defense, these excitable conversations about the world AI portends are characteristic of most AI-related discussions, because the future version of AI is not here yet and the present version of AI is a stochastic bullshitter.
As I finished listening to the conversation, I remembered this saying: “Any sufficiently advanced technology is indistinguishable from magic.” This is the third law formulated by the famous science fiction writer Arthur C. Clarke, something I learned about only recently.
After listening to Jack Clark, I’ve come up with a law of my own, which I henceforth dub “Bhuvan’s law,” about AI discourse based on Arthur Clarke’s third law:
“Any discourse about artificial intelligence is indistinguishable from talking out of your ass.”
Now, I don’t mean this in a bad way. I’d go so far as to say even some of our most scientific theories, until proven, were indistinguishable from words pulled out of one’s ass in a specific sequence. When Democritus proposed around 400 BCE that everything in the universe is made up of tiny, indivisible, and indestructible units called atomoi (atoms), I’m 100% sure Socrates, Anaxagoras, Plato, and other contemporaries bent over, showed Democritus their asses, and made muffled sounds to say that Democritus was talking out of his ass.
In a Popperian sense, theories and conjectures have to be falsifiable: capable of being proven wrong by evidence. If they are vague or unfalsifiable claims, they are but sounds emitted from the wrong opening in the hind part of the lower abdominal region (ass-talking).
Why am I formulating this law?
Like any sane person, I am both excited and terrified about AI and the world it’s threatening to unleash. That means I’m compulsively consuming information in the hope—futile?—that making sense of this seemingly magical and civilization-defining technology will be easier.
After over a year of consuming different perspectives about AI on a spectrum, I have reached the conclusion that most people have no fucking idea what they are talking about. Most people aren’t talking about AI in English but rather out of their asses.
Now, mind you, I am not saying that AI is bullshit or that it isn’t here or that nothing will change. I have no bloody clue, and neither do most “experts.” What we call AI, for the most part, are large language models. LLMs are a consequential technological paradigm shift, more consequential than most people realize. I’m firmly in the camp that they are far more advanced than the critics claim, and they can automate more entry-level, menial, and basic jobs than the critics care to admit.
Having said all that, when most people say “AI,” they’re referring to large language models. But LLMs are not the AI we see in Terminator, Transcendence, Her, or The Matrix. There’s a massive gap between today’s impressive-but-limited language models and the general artificial intelligence of science fiction. This gap forces everyone—experts included—to extrapolate wildly about AI’s trajectory, and there’s nothing inherently wrong with that kind of speculation. However, the intellectual dishonesty lies in presenting these extrapolations as informed predictions rather than admitting they’re educated guesses at best.
Hence, Bhuvan’s law.