I just finished reading this thoughtful post by Tim Dettmers in which he argues that we won’t reach artificial general intelligence or superintelligence. It was a bit technical for me, so I asked NotebookLM to simplify it for me:
This blog post by Tim Dettmers argues that the popular idea of **Artificial General Intelligence (AGI)**—an AI that can do anything a human can—and **“superintelligence”**—AI that becomes infinitely smarter than humans—are **unrealistic fantasies** [1, 2]. He believes these concepts ignore the harsh realities of the physical world [2, 3].
Here is a breakdown of the key concepts to help you understand his argument:
### 1. Computation is Physical, Not Abstract
Many people in the tech industry treat AI as a set of abstract ideas or math problems [3]. Dettmers argues that **computation is physical**; it requires actual hardware, energy, and space [4, 5].
* **Memory vs. Speed:** To make AI work, information must move across a chip [6]. As chips get smaller, the “math” part becomes cheap, but moving the information (memory) becomes more expensive and slower because it has to travel across physical distance [5, 6] (a rough sketch of this bottleneck follows below this list).
* **Biological Comparison:** He points out that nature has already hit these limits [7]. Human brains can’t get much bigger because they require too much energy (calories) for a body to sustain [7]. He believes digital AI is hitting similar **energy and physical boundaries** [8].
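To make the “Memory vs. Speed” bullet concrete, here is a minimal sketch; the peak-throughput and bandwidth numbers are rough assumptions I picked for a modern accelerator, not figures from the post. It compares how long the arithmetic of a matrix multiply takes against how long it takes just to move the matrices through memory.

```python
# Back-of-the-envelope sketch: time spent on math vs. time spent moving data
# for an n x n matrix multiply. The hardware numbers are rough, illustrative
# assumptions for a modern accelerator, not figures from the post.

PEAK_FLOPS = 1.0e15      # assumed ~1 PFLOP/s of matrix arithmetic
MEM_BANDWIDTH = 3.0e12   # assumed ~3 TB/s of memory bandwidth

def matmul_times(n, bytes_per_element=2):
    """Return (compute_time, memory_time) in seconds for an n x n matmul."""
    flops = 2 * n**3                             # multiply-adds
    bytes_moved = 3 * n**2 * bytes_per_element   # read A and B, write C
    return flops / PEAK_FLOPS, bytes_moved / MEM_BANDWIDTH

for n in (256, 1024, 4096, 16384):
    compute_t, memory_t = matmul_times(n)
    bound = "memory-bound" if memory_t > compute_t else "compute-bound"
    print(f"n={n:6d}  math={compute_t*1e6:9.1f} us  data movement={memory_t*1e6:9.1f} us  -> {bound}")
```

For smaller workloads the data movement dominates, which is the physical bottleneck Dettmers is pointing at: shrinking transistors keeps making the math cheaper, but the bytes still have to travel.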
### 2. The “Resource Trap” (Diminishing Returns)
A central theme of the post is that **linear progress requires exponential resources** [8, 9].
* This means that to get a steady, “straight line” improvement in AI performance, you have to throw massive, ever-increasing amounts of money, electricity, and hardware at the problem [8, 10].
* We have reached a point where the cost of making AI just a little bit better is becoming **physically and economically impossible** to sustain [2, 11] (a toy scaling calculation below illustrates this).
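Here is a toy calculation, my own illustration rather than anything from the post, of what “linear progress requires exponential resources” looks like if quality follows a power law in compute; the exponent is an arbitrary assumption.

```python
# Toy model of "linear progress requires exponential resources".
# Assume loss falls as a power law in compute: loss ~ compute ** (-ALPHA).
# ALPHA = 0.05 is an arbitrary, illustrative choice.

ALPHA = 0.05

def compute_needed(target_loss):
    """Compute (arbitrary units) required to reach target_loss under the power law."""
    return target_loss ** (-1.0 / ALPHA)

for step in range(1, 6):
    loss = 1.0 - 0.05 * step   # equal, "linear" improvements in quality
    print(f"target loss {loss:.2f} -> compute ~ {compute_needed(loss):9.1f} units")

# Each equally sized improvement multiplies the required compute by roughly
# a constant factor, i.e. the resource curve grows exponentially.
```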
### 3. The End of Rapid Hardware Growth
For years, AI improved quickly because the chips powering it (**GPUs**) were getting much better every year [9]. Dettmers claims this era is over:
* **Hardware Plateau:** GPUs “maxed out” around 2018 [9]. Recent improvements have been “one-off” tricks that cannot be repeated [9, 12].
* **No “Magic” Left:** He argues we have essentially seen the last generation of significant GPU improvements, meaning we can no longer rely on better chips to make AI smarter [9, 13] (see the tally sketched below).
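As a rough way to see the “one-off” claim, here is a small sketch of my own; the speedup factors are illustrative guesses, not measurements. It tallies a few non-repeatable hardware tricks and shows that they multiply out to a finite factor rather than a trend you can ride forever.

```python
# Sketch: one-off speedups multiply out to a finite factor and then stop.
# The factors below are illustrative guesses, not measured numbers.

one_off_gains = {
    "fp32 -> fp16 arithmetic": 2.0,
    "fp16 -> fp8 arithmetic": 2.0,
    "dedicated matrix units": 4.0,
    "structured sparsity": 2.0,
}

cumulative = 1.0
for trick, factor in one_off_gains.items():
    cumulative *= factor
    print(f"{trick:26s} x{factor:.0f}  (cumulative x{cumulative:.0f})")

# Once every trick on the list has been used, the lever is gone; unlike a
# shrinking transistor, none of these can be applied a second time.
```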
### 4. Why Superintelligence is a “Fantasy”
The theory of superintelligence suggests that once AI reaches a certain level, it will “self-improve” in a runaway loop, becoming God-like [14]. Dettmers disagrees for two reasons:
* **Physical Limits:** Even a super-smart AI is still trapped by the laws of physics; it cannot “think” its way out of the need for more energy or the limitations of hardware [13, 14].
* **Gap-Filling vs. Frontiers:** He believes superintelligence might be good at filling in small gaps in what we already know, but it won’t be able to make massive leaps forward because it will face the same **diminishing returns** that humans do [14] (the toy simulation below makes this concrete).
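To see why a self-improvement loop can stall rather than run away, here is a toy simulation of my own construction, with all numbers chosen arbitrarily: each round of self-improvement yields a smaller gain and costs more resources, and with a finite budget the capability curve flattens instead of exploding.

```python
# Toy model: self-improvement under diminishing returns and a finite budget.
# All numbers are arbitrary illustrative assumptions.

budget = 1_000.0     # total resources available (arbitrary units)
capability = 1.0
step_cost = 1.0
rounds = 0

while budget >= step_cost:
    budget -= step_cost
    capability += 1.0 / (rounds + 1)   # each gain is smaller than the last
    step_cost *= 2.0                   # each round costs more than the last
    rounds += 1

print(f"rounds completed: {rounds}")
print(f"final capability: {capability:.2f} (the loop stalls when the budget runs out)")
```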
### 5. Practical Use (Economic Diffusion) vs. Hype
Dettmers contrasts two ways of thinking about AI:
* **The “Winner-Takes-All” Approach (US):** The belief that whoever builds the biggest, smartest AI model wins everything [15]. He views this as short-sighted [16].
* **The “Integration” Approach (China):** The focus on making AI “good enough” and integrating it into daily life and industry to increase productivity [17, 18].
* He argues that the future of AI isn’t about a single “super brain,” but rather about **economic diffusion**—finding ways to make current AI useful in the real world [2, 19].
***
**An Analogy to Solidify Understanding:**
Imagine you are a professional runner. When you first start training, you can shave minutes off your mile time with just a little effort. However, once you become an Olympic athlete, shaving just **one single second** off your time requires millions of dollars in coaching, specialized diets, and thousands of hours of grueling work. Eventually, you hit the **physical limit** of what a human body can do. Dettmers is saying that AI is like that Olympic athlete: we have already done the “easy” part, and now the resources required to get even slightly faster are becoming more than the world can provide.
Here’s a counterpoint to Tim’s argument by Boaz Barak of OpenAI:
This blog post is worth reading and has some good points, though ultimately I think it is wrong.
There is no question that computation is physical. But the question is how far we are from the absolute physical limits. There is zero reason to think that evolution designed humans to be at the maximum limits of physical intelligence, any more than we can withstand zero Kelvin or travel at the speed of light.
It is true that some particular fruits have been picked (e.g., we probably can’t go to 0.5-bit precision, though even that’s not clear, e.g., with sparsity), but that doesn’t mean people won’t find other ways to advance. There is a nice article on Moore’s law and how the industry was able to “squeeze” out more gains once certain avenues dried up:
https://newsletter.semianalysis.com/p/a-century-of-moores-law
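On the precision point, a quick footprint calculation (my own, using a hypothetical 70-billion-parameter model as the example) shows why bits-per-weight has been such a large lever for memory, and why the headroom below today’s 4-bit formats is small:

```python
# Weight-memory footprint at different precisions for a hypothetical
# 70-billion-parameter model (the parameter count is an illustrative choice).

params = 70e9

for label, bits in [("fp32", 32), ("fp16", 16), ("int8", 8),
                    ("4-bit", 4), ("0.5-bit (speculative)", 0.5)]:
    gigabytes = params * bits / 8 / 1e9
    print(f"{label:22s} {gigabytes:7.1f} GB of weights")
```

Each halving of precision is a one-time factor of two, which is why both sides of the debate treat it as fruit that, once picked, is gone; Barak’s point is that other axes may still be open.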
There are multiple axes of possible improvement across hardware, architecture, data, and algorithms, and I believe we will see more improvements along all of these dimensions. It’s still early days.