
What It Means to Be Human in the Age of AI

A few thoughts on what AI might do to the human ability to think critically.


Writing in The Free Press, Tyler Cowen argued that AI is not only transforming our economy but also our very understanding of what it means to be human.

We stand at the threshold of perhaps the most profound identity crisis humanity has ever faced. As AI systems increasingly match or exceed our cognitive abilities, we’re witnessing the twilight of human intellectual supremacy—a position we’ve held unchallenged for our entire existence. This transformation won’t arrive in some distant future; it’s unfolding now, reshaping not just our economy but our very understanding of what it means to be human beings — Tyler Cowen.

In 2025, as artificial intelligence slowly snakes its way into our lives, humanity and all the elements that give life meaning are up for renegotiation. And we’re not ready. Forget answering the question; we’re not even ready to grapple with it.

The Colossus (c. 1808–1812) by Francisco de Goya. Public domain. Source: Wikipedia

Even if AI progress stopped today, even if LLMs didn’t improve from this moment, a significant increase in adoption alone could automate away vast swaths of white-collar and so-called “knowledge work.” Anyone who thinks otherwise is deluding themselves. If you’ve used these tools seriously and looked at how companies are starting to implement them, you know this isn’t fantasy—it’s already happening.

Given that this is a fundamentally disruptive technology that could upend all major aspects of our lives and the economy, I’ve been looking for simple mental models and heuristics to make sense of it. Ignoring it isn’t an option. Listening to experts isn’t much better, because their views differ so wildly it’s hard to tell what’s sensible, what’s fiction, and what’s just shiny nonsense. The spectrum runs from common sense to astrology.

What I’ve realised is that, given how historic this technological shift is, if our uncertainty band on AI doesn’t range from 0 to 100, we’re doing it wrong. Having neat, definitive conclusions about AI isn’t just a fool’s errand, it’s dangerous. So my baseline assumption is utter and complete disruption.

By that I mean assuming AI will reshape the entire economy and humanity wholesale. Not in a Terminator or Transcendence way, but in a way that changes the nature of work, our relationship with it, and the structure of the economy.

In looking for mental models and frames to think about AI, I try to listen to thoughtful people. One of my favourites is Derek Thompson’s Plain English podcast. In a recent episode with Cal Newport, Thompson made an analogy I loved, comparing the act of thinking to “time under tension” in fitness:

Derek: Can I offer an answer that I’ve been thinking about?

Is this a general answer to the question of what we should teach our children? What should our children value? It might be even closer to what I’m trying to get at here.

I was talking about this recently at a talk. In exercise, in weightlifting, there’s this concept called time under tension.

So you can do a bench press in three seconds, or you can do a bench press in 10 seconds, or you can do the same bench press in 20 seconds. And you know, slow, slow, slow up, slow, slow, slow down. It’s the same rep, but it’s much harder. It’s time under tension.

I feel like we’re in an age right now where young people are reading less. Book reading rates have really declined significantly, even at elite colleges. And now with AI, as you’ve been explaining, students can write less because ChatGPT will always be game to do your homework.

And I feel like if students aren’t reading as much and they’re not writing as much, where’s the thinking coming from, right? The best ideas that I’ve come up with tend to come from me being able to sit with a group of thoughts from far-flung departments of my brain and having the patience to sit with them for a long period of time until they cohere into something combinatorially new.

And I think of that as the cognitive equivalent of time under tension, right? Without the capacity for long-form reading or writing, I worry that we’re just going to lose that. It’ll just be gone.

And so my answer to this question of what should we teach young people—what should they value academically?—I would want my children to be masters of cognitive time under tension. This kind of academic patience will pay dividends, whether they want to be a theoretical computer scientist or a novelist.

How does the idea of time under tension sit with you as you think about some of the awkward conveniences of AI for students?

For much of human history, there was a deep chasm between curiosity and the answer to that curiosity. The printing press began to bridge it, but even centuries later, finding an answer meant hours or days of searching through books, in libraries, often without success. Intellectual growth happened in that gap. The time spent trying to find an answer, failing to find it, and reformulating a better question—that was the mental heavy lifting.

In grappling with answers, our brains not only worked to solve questions but to refine them. Those mental gymnastics, those deadlifts, are what built our intellectual development.

Now, for the first time, large language models have bridged and sealed that chasm. There are just seconds between question and answer. Which raises the question: if thinking is what makes us human, what happens when we stop?
