I gots Links for you: Issue #1

A few good links.


Kevin Munger on Anti-Mimetics

I always enjoy reading Kevin Munger, even if it can be a frustrating pleasure; he is probably one of the most interesting thinkers on all things media. I don’t always understand half of what he says because it’s often deeply technical, and I don’t come from a media studies background. But I make it a point to read his work because it’s phenomenally thoughtful. This post on anti-mimetics was delightful. I’m still processing it, but here’s an excerpt that stood out:

Humans are quite plastic; our sensory apparatus changes based on the communication environment in which we are raised. But we’re not infinitely plastic. The information-density frontier must involve all of our senses, telling us something about what the human is, what evolution has designed us for. When are our senses most heightened? When the stakes are high and we are physically engaged with many other people. Team sports. The high school dance. The street protest. The memes in these contexts are physical processes using all of our sensory inputs to react to the behaviors of many other people simultaneously.

So, The Poster is correct that the meme is (potentially) the densest form of communication within the degraded artificial space of a feed-based social media platform. But these platforms’ antimemes are the embodied, social processes that cannot be encoded as digital media, and they are far more information-dense than anything that happens on a screen.

Note: What are Memes and Antimemes?

To understand what Kevin Munger is getting at, it helps to clarify two key terms he’s playing with: the meme and the antimeme.

A meme—in its original sense—is an idea, behavior, or style that spreads from person to person, like a cultural gene. In the internet age, memes have become bite-sized units of communication: an image, a phrase, a remix, a trend. They travel fast, mutate easily, and carry meaning in compact form. In Munger’s words, they’re “the densest form of communication” on platforms like Twitter, TikTok, or Instagram.

But what happens when everything becomes meme-ified—flattened into screen-sized, optimized, shareable chunks? That’s where the idea of the antimeme comes in.

An antimeme isn’t the opposite of a meme, but rather something that refuses to become one. It resists being compressed, commodified, or transmitted through algorithms. You can’t turn it into a JPEG or a tweet. Antimemes are embodied experiences—what it feels like to be in a crowd, to dance with others, to play on a team, to protest in the street. They’re full-sensory, emotionally loaded, socially situated. They’re “too much” for digital media to contain.

So, in a way, Munger is saying: the most meaningful things humans do—the richest, most intense forms of communication—don’t live online. They can’t. And as we spend more time in feed-based platforms, we risk forgetting what those dense, real-life experiences feel like.


Yascha Mounk on the Absurdity of the AI Debate

A good and thoughtful essay by Yascha Mounk on artificial intelligence and its broader importance. The reason I like it is that it gets to the heart of the absurdity of the AI debate: many people base their opinions on superficial use of these tools, never pushing them to their limits or exploring edge cases and weird use cases, and yet they pronounce verdicts.

Now, I’m not saying AI is good or bad. At this point, I don’t have a firm view. But what’s increasingly clear to me is that it’s idiotic to have an opinion on artificial intelligence unless you not only use these tools regularly but also follow the progress of AI technologies over time.

My default model for thinking about AI is twofold:

  1. Assume disruption by default. I don’t have evidence that AI will reshape everything—but that’s the vibe I’m operating from. It’s a sci-fi prediction. A base assumption I update from.

  2. Be a good Bayesian. Start with that assumption, use the tools, read the research, listen to smart people, and keep updating your beliefs as new information comes in. (A toy sketch of what that updating looks like follows below.)
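
Purely as an illustration, and with made-up numbers that are mine rather than anything from Mounk’s essay, here’s a minimal sketch of that updating loop: start with a prior probability that AI is broadly disruptive, then revise it with each new observation via Bayes’ rule.

```python
# Toy Bayesian updating of the belief "AI will be broadly disruptive".
# Every probability below is invented purely for illustration.

def bayes_update(prior, p_obs_if_true, p_obs_if_false):
    """Return P(hypothesis | observation) given a prior and two likelihoods."""
    numerator = p_obs_if_true * prior
    return numerator / (numerator + p_obs_if_false * (1 - prior))

belief = 0.7  # "assume disruption by default"

# Each entry: (observation, P(obs | disruptive), P(obs | not disruptive))
observations = [
    ("a model finishes a week-long task in minutes", 0.80, 0.30),
    ("the same model flubs a basic edge case",       0.40, 0.60),
    ("benchmark progress keeps compounding",         0.70, 0.35),
]

for obs, p_if_true, p_if_false in observations:
    belief = bayes_update(belief, p_if_true, p_if_false)
    print(f"After '{obs}': belief = {belief:.2f}")
```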

So, going back to the essay, I want to share two quotes that stood out to me:

The idea that AI chatbots are merely “stochastic parrots” is rooted in an uncontested truth about the nature of these technologies: the algorithms really do draw on vast data sets to predict what the next word in a text, or pixel in a painting, or sound in a piece of music might be. But evocative though the invocation of this fact may sound, it does not magically make the prodigious abilities of artificial intelligence disappear. If chatbots fulfill tasks in the blink of an eye over which skilled humans used to labor for weeks, this advance will transform the world—whether for good or ill—irrespective of how the bots are able to do so.

Ten years ago, the conventional wisdom held that technological advances would imperil many blue-collar jobs, like those of truck drivers. Now, the astonishing advances in text-based AI have convinced many commentators that white-collar professionals, from paralegals to HR professionals, will be the first to lose their job. But it is worth noting that there is another very large hammer which has not yet fallen. While it has turned out to be more difficult to build robots which can maneuver around the physical world with dexterity than to build chatbots that can perform high-level cognitive tasks, there will come a time in the relatively near future in which machines capable of doing both tasks simultaneously will be produced in large numbers. At that point, both white-collar and blue-collar jobs will be imperilled en masse.

A Note on Jia Tolentino

In his essay, Yascha Mounk references a New Yorker piece by Jia Tolentino in which she admits she hasn’t used ChatGPT yet. And then she goes on to highlight the flaws of ChatGPT, like hallucinations, servility, and that sort of thing. Yascha rightly calls this out as an example of people making confident judgments about AI without actually using the tools.

And honestly, there’s some truth to that. There’s a lot of empty punditry floating around.

I’ve read the essay, and Jia Tolentino is a remarkably thoughtful writer; I’ve long been a fan. She isn’t making lazy critiques in the piece. She’s doing something deeper: interrogating what it means to be human in an age where machines can mimic us.

She says all those things about ChatGPT to articulate a fear, or maybe a question: what happens to us when we start outsourcing all the things that make us human?

That’s not technophobia. That’s clarity:

People are producing A.I.-manipulated self-portraits on platforms that can reserve the right to use those images in advertisements. Scammers are using live deepfakes in video calls, changing their race, gender, and voice in real time. By the time my kids are preteens, it will be easy, and probably free, to generate customized porn featuring the people of their choice. I expect that it will not seem shocking to them, as it does to me, if a chatbot serving as a virtual girlfriend encourages one of their peers to die by suicide. I imagine the ludicrous lectures I’ll give them: “Darlings, it’s so much better to look at an actual, imperfect human nude.” If I were in tenth grade and bored out of my mind at midnight with an unfinished paper, I would turn to technology for help. Will I be able to convince them that the only worthwhile parts of my mind are those which have resisted or eluded the incentives of the internet? My kids are at an age when nothing excites them like the chance to do things unassisted. They have just a few years before they learn that adulthood, these days, means ceding more and more to machines.

Follow-on Note

In the post, Tolentino writes about Donald Trump and his relentless assault on American democracy, but it triggered a larger thought:

Today’s autocrats often don’t need to resort to mass detentions, disappearances, or overt state violence—though remnants of that still exist. Instead, their power comes from something more insidious: a full-spectrum attack on our shared sense of reality.

By relentlessly polluting the information environment through lies, half-truths, contradictions, and sheer volume, they turn reality into a kind of funhouse mirror. What once felt sharp and legible becomes warped, confusing, and contradictory. The public is left reacting to distorted images, unable to agree on basic facts. In doing so, we unknowingly play into the hands of those who engineered the distortion.

Reality ceases to be a shared space. It becomes a weaponized fog. And that’s what makes today’s authoritarians far more sophisticated than the dictators of the past. They don’t have to disappear you. They just have to disappear the truth.

Right after I wrote the note above, I came across this note by Venkatesh Rao. He succinctly captured one element of the feeling I was trying to express:

Blindfolds are just as effective at suppressing free speech as muzzles. What’s the point of being able to talk if signal generators whose output is worth talking about are systematically turned off?

The Corporation for Public Broadcasting is shutting down. Climate data sources and collection ops are shutting down. Economic indicator data are undermined if politically unfavorable (next BLS head will undoubtedly discover that the economy is adding record new jobs for the next 40 months). Inflation measurement is under siege.

I’m not a particularly data-driven or news-driven guy but this war on reality feedback signals seems like the most dangerous of many underway.


Adam Tooze on China as the Rosetta Stone

There’s something peculiar about how people talk about China. If you say something even remotely positive, you’re labeled a communist or a Marxist. If you’re critical, you’re dismissed as a China-basher. It’s a bipolar, zero-sum framing that’s idiotic and unhelpful.

To truly understand China requires—borrowing from F. Scott Fitzgerald—the ability to hold two opposing ideas in mind at the same time. China is, at once:

  • A deeply troubled economy flirting with deflation
  • And simultaneously, the most advanced industrial superpower on the planet

Most people—and most headlines—can’t deal with that tension. But Adam Tooze can.

In a recent podcast appearance on Sinica, Tooze made a fascinating point: China isn’t just a piece of the global economy. It’s the Rosetta Stone of modern development.

What do we mean by “world affairs”? Usually, we mean the European empires—but also, and crucially, China. Not as a fully colonized entity, but as a semi-colonial, non-Western power.

That was the moment where things clicked for me. I realized there’s a generational challenge, especially for Western thinkers trying to write the history of modernity: you simply can’t do it without a profound understanding of China. Not just thinking about China, but thinking from China—using China as a lens.

Why always think toward China as the exotic other? Why not think outward from China?

To answer your question properly, I’d say: China isn’t just an analytical problem. It is the analytical problem. I wrote a bit about this in Chartbook, based on work by the geographer Jamie Peck, who builds on Stuart Hall’s concept of the conjuncture. I agree with him—China is not just one country among many. It’s the master key. If there is a way to understand modernity as a whole, China has to be at the center of it. Without that, you’re fumbling in the dark.

So that’s my starting point. Which is why I’m deeply skeptical of any off-the-shelf theory—any cookie-cutter framework. That’s the move I actively resist. If you ask about my method, it starts there: with a refusal to impose ready-made answers. I’m genuinely interested in what a 21st-century Marxism might look like, and I don’t think we can get there without reckoning with China first.

What Adam Tooze does in this piece is unpack the traditional, surface-level view of China. The kind of view that treats it like just another macroeconomic juggernaut. That framing reduces China to a big number: roughly 20% of global GDP. It analyzes trade surpluses and investment flows, maybe adds some headlines about real estate, and calls it a day.

But Tooze argues that this lens is completely inadequate. It misses the world-historic development China has undergone in the past three to four decades. This is a country that, in the 1970s, had a per capita income lower than that of Sudan and Zambia. And now it’s on the cusp of being a high-income country. That’s not just growth; it’s transformation on a civilizational scale.

And he’s right. Whatever your views on China’s political system, whether its autocratic tendencies or its Marxist-Leninist structures, none of that changes the underlying fact: China is the greatest development story in modern history.

Even as recently as the 1990s, India was still ahead of China on many indicators. And now? China is in a league of its own. It’s marching steadily toward becoming the world’s largest economy.

These two quotes from Tooze’s piece illustrate exactly what he’s getting at.

But as useful as it is, this macroeconomic approach also minimizes the drama of history and qualitative transformation. China’s economy is huge because it encompasses the material destiny of one sixth of humanity. In the 1970s, China’s national income per head was less than that of Sudan and Zambia. It was not just the most populous country in the world but also one of the poorest. China’s ascent during the age of globalization is not just one economic story amongst many. It is the single most dramatic development in world economic history, bar none.

No, clearly not. The “real estate” boom in China that came to an abrupt halt in 2020/2021 was not simply a bubble within a well-established market, say in London, or Florida. China did not even have anything like private ownership of real estate until the late 1990s. Then in a space of a single generation it engaged in the largest construction boom in history, so much so that almost 90 percent of Chinese homes have been built in the last thirty years. In the same 25 years, roughly 500 million Chinese, that is the entire population of Europe, moved from the countryside to the city.

This was no ordinary real estate boom. It was a world historic process of resettlement. China’s “real estate boom” was a major causal driver of nothing less than the anthropocene, humanity’s fundamentally altered relation with the planetary economic system. The quantity of steel and concrete that were poured and bashed into the ground in China changed the physical shape of the planet.


On the Crisis of Meaning and the Myth of Religious Revival

There seems to be a crisis of meaning everywhere. Or at least that’s what I’m led to believe, judging by the sheer number of articles and videos circulating on the subject. The most common explanation for this “crisis,” of course, is the decline of religion. We killed God, and now we’re adrift.

This is something I’ve been deeply fascinated by, for a variety of reasons. I try to follow this discourse as best I can. One particular subplot I’ve noticed is the supposed revival of religion, especially in parts of the Western world. I’m not sure if this applies to India. While it’s still deeply religious, I get the sense (vibes, anecdotes) that religiosity is quietly waning here too, albeit not in a dramatic fashion.

But coming back to the West: I recently came across a fascinating article by Virginia Weaver, who challenges this narrative. She casts doubt on the idea that we’re living through a religious renaissance and argues instead that the story is more complicated—and less hopeful for those who imagine a return to tradition will solve the meaning crisis.

It’s a smart, sobering piece and well worth a read.


Hannah Cairo, Tyler Cowen, and the Future of Learning

I read this wonderful piece in Quanta Magazine about a 17-year-old named Hannah Cairo, who solved a major math mystery. The shocking part? She grew up in the Bahamas, entirely homeschooled, and taught herself everything from scratch using online resources like Khan Academy. And at just 17, what she pulled off is nothing short of ridiculous.

Of course, she’s an exception. MOOCs and free learning resources have been around for a while. But it takes a special kind of intent and intellectual curiosity to go all the way with it. Most people just end up swiping garbage on TikTok and Instagram.

Cairo grew up in Nassau, the Bahamas, where her parents had moved so that her dad could take a job as a software developer. She and her two brothers — one three years older, the other eight years younger — were all homeschooled. Cairo started learning math using Khan Academy’s online lessons, and she quickly advanced through its standard curriculum. By the time she was 11 years old, she’d finished calculus.

Soon she had consumed everything that was readily available online. Her parents found a couple of math professors to tutor her remotely — first Martin Magid of Wellesley College, then Amir Aazami from Clark University. But much of her education was self-directed, as she read and absorbed, on her own, the graduate-level math textbooks that her tutors recommended. “Eventually,” Cairo recalled, Aazami “said something like, he feels uncomfortable being paid, because he feels like he’s not really teaching me. Because mostly I would read the book and try to prove the theorems.”

Her story reminded me of something I once heard Tyler Cowen say in a lecture:

“The future—look, a lot of the details are hard to predict. But here’s what I’m pretty sure of:

If you have a mobile phone, there will be some version of AI you can access, and it’ll be quite good.

Let’s say you’re in Kenya. Kenya is a relatively poor country, but it has excellent internet connectivity. My wife and I went on safari there—we were out with the lions and elephants, and the internet was better than in our own living room. It was amazing.

So think about that: you can be anywhere in Kenya, with just a mobile phone, and you can access the world’s best education, possibly in the language of your choice—even in lesser-known dialects.

Of course, many people won’t use it. They won’t know how. Or they’ll stick with traditional ways. But not all humans are like that.

As Mises and Kirzner taught us, people are entrepreneurs. They want to do better—for themselves and their families.

So we’ll see people, even in difficult circumstances, getting incredible educations—often for free.

GPT-4 currently costs $20 a month. That’s unaffordable for most people in Kenya. But there are free versions. They’re not as good—but they’re improving.

There’s a free Chinese model called DeepSeek. Have you tried it? It’s free.”
— Tyler Cowen

A related quip from Andrej Karpathy:

2024: everyone releasing their own Chat
2025: everyone releasing their own Code

Speaking of AI, here’s an excerpt from a piece by Harvey Lieberman, an 81-year-old psychologist, writing in The New York Times about his experience using ChatGPT:

I concluded that ChatGPT wasn’t a therapist, although it sometimes was therapeutic. But it wasn’t just a reflection, either. In moments of grief, fatigue or mental noise, the machine offered a kind of structured engagement. Not a crutch, but a cognitive prosthesis — an active extension of my thinking process.

ChatGPT may not understand, but it made understanding possible. More than anything, it offered steadiness. And for someone who spent a life helping others hold their thoughts, that steadiness mattered more than I ever expected.


On Thinking, Silence, and Rehabilitating the Mind

This post on how to “unrot your brain” struck a chord with me, especially the passage I’ve quoted below. It hit so hard because I’ve been thinking about the act of thinking itself. Sitting in silence. Just… being with your own thoughts:

If you’re anything like me, you miss thinking for the sake of thinking. You want the curiosity back. The attention span back. The ability to sit with a question, really sit with it, without reaching for a distraction every ten seconds. To want to engage again. To feel mentally present. To reclaim the part of you that used to light up at complexity instead of shutting down. You want to feel sharp again. Capable. Awake. Lately, I’ve been trying to unrot my brain. Or maybe a better word is rehabilitate. Rebuild. I’ve been coaxing my mind back into movement, the way you’d stretch a stiff joint or retrain a weak muscle. Not to be productive. Not to prove I’m still “the smart one.” Just to feel like me again. Because somewhere underneath the scroll fatigue, the algorithm brain, the constant static, I know she’s still in there. The girl who asked questions for fun. The one who highlighted entire pages. The one who wasn’t afraid of a hard book, or a weird one, or one that made her feel small in the best kind of way. If she’s in you too, here’s where we start.

Pascal said it much earlier:

All of humanity’s problems stem from man’s inability to sit quietly in a room alone. — Blaise Pascal

I really believe that half our problems would vanish if we just learned—or more accurately, relearned—how to be comfortable with ourselves. Comfortable with our thoughts, no matter how disconcerting they may be. For long, uncomfortable, unmedicated stretches of time.

But instead, through a series of conscious and unconscious decisions, we’ve cut out silence entirely. No—we’ve castrated it. And in the vacuum left behind, we’ve inserted our phones. Now whenever boredom creeps in—when we’re waiting in line, sitting pillion on a bike, or standing for coffee—we reach for the anesthetic. Swipe. Scroll. Thumb. Swipe again.

This is not a rant about smartphones or tech. That’s for another day. This is about the numbness. The fear of sitting with ourselves.

Coincidentally, this article on David Foster Wallace’s prescient take on our cultural degeneration was next in my read-later feed:

All of the side effects of his society have become mainstays and defining features of our own. Jokes are made sarcastically, advertisements mock themselves, digital identities carry more weight than physical ones, podcasts pose as honest, real conversations, and paranoia sits at the root of news consumption. Nothing feels real. Sincerity is devalued. Everything is affectation. Worse still, our isolation and insatiable thirst for pleasure have been exacerbated by social media. One of Wallace’s greatest fears that he consistently echoes in interviews is that people are growing increasingly addicted to entertainment and decreasingly comfortable with themselves. Our ability to sit alone or commit our attention to a singular activity for an extended period of time has utterly dissipated. He explains in an interview with David Lipsky: “…as the Internet grows, and as our ability to be linked up…at a certain point, we’re gonna have to build some machinery, inside our guts, to help us deal with this. Because the technology is just gonna get better and better and better and better. And it’s gonna get easier and easier, and more and more convenient, and more and more pleasurable, to be alone with images on a screen, given to us by people who do not love us but want our money. Which is all right. In low doses, right? But if that’s the basic main staple of your diet, you’re gonna die. In a meaningful way, you’re going to die.”


Sam Kriss vs. The Rationalists

Sam Kriss’s tirade against the rationalists is fucking funny and brilliant. It’s a flamethrower aimed straight at utilitarian logic, and the sheer absurdity of his thought experiment had me howling:

Everyone has their own favourite example of how utilitarianism can wildly contradict our moral intuitions. Mine is gladiatorial combat. Let’s say I kidnap you off the street, keep you captive in my basement, and then make you fight another random abductee to the death for my own sick amusement. This seems less than ideal, ethically speaking.

Now let’s say I invite a few friends over, and we laugh and drink aperol spritzes and other nice summery cocktails while you desperately try to claw someone’s eyes out. This is, if anything, worse. Some forms of pleasure are bad. (Some forms of pain are good!)

But now let’s say I film the whole thing and broadcast it online, and hundreds of thousands of people watch as you’re throttled to death, all of them deliriously masturbating. I think this would be a genuine moral catastrophe, but at this point the utilitarian starts perking up. Maybe things aren’t so terrible. What’s the exchange rate? How many orgasms balance out a violent death?

Finally, we get to the point where huge public screens across the world are showing the light fade from your eyes. Billions watch in shuddering, sadistic glee. According to any sensible ethical system, we’ve entered the abyss. Our entire civilisation deserves to be destroyed. For the utilitarian, we have just performed the single most moral act in human history. In fact, we have an urgent ethical duty to do it again.

Did We Really Evolve to Eat Meat?

Probably not—at least according to Gidon Eshel, Research Professor at Bard College. I came across this piece and found it compelling not because it moralizes about meat consumption, but because it undercuts one of the most common evolutionary assumptions: that humans had to eat meat to thrive. Eshel’s analysis shows that plants could’ve done the job just fine, nutritionally speaking:

With that, let’s reexamine the “paleo” diet. Suppose, despite these game-changing differences, that a “paleo” diet is nutritionally wise for modern humans; is it deployable? I’d say not even minimally, because it is practically impossible to ever find meat, cereals, or greens that even vaguely resemble their paleolithic predecessors. For example, even lean grass-fed beef or bison, the nearest crude modern analog to hunted Pleistocene fauna, are still 2 to 3 times fattier than wild meat and surely even more distinct in micronutrients. Likewise, how similar to their naturally occurring counterparts are, for example, manicured arugula or hyper-bred strawberries? Likely not particularly.

In my decidedly unmanicured yard, the wild strawberries are as distinct in shape, size, taste, and abundance from their grotesquely enormous modern counterparts as a modern confined dairy cow is from her Holocene aurochs progenitors. One proxy for this comparison is the difference between organic produce and conventional counterparts, where large micronutrient differences are observed, yet likely understate the differences we are after, because modern organic produce is anything but wild. While further research is needed, the case for a “paleo” diet that can be reasonably characterized as promoting health in the 21st century is yet to be made. A recent effort to evaluate the paleo diet concluded that the current evidence is insufficient to recommend it even for the clinical objective the diet is most likely to achieve: diabetes management.


AI Will Put Us in Museums (And Not for Good Reasons)

Speaking of AI again, this piece by Dan Silver is a dark and comical take on the future of knowledge work. His prediction? We’ll be immortalized in museums. Kids on school trips will get to relive the crushing drudgery of typing emails, writing memos, and formatting PowerPoints.

A related impulse (maybe its counterpoint) is to reenact that labor itself. I once stayed in a New England town where one of the top tourist draws was visiting a small, still-functioning assembly line. You got to wear a hairnet. You could try your hand at tightening one bolt as a piece moved past you on a conveyor belt. A few steps of low-stakes repetition, just enough to say you’d done it. Like apple picking, it gave you a glimpse of a seemingly bygone form of work (for you at least). Not the real thing, of course, but enough for a theme-park souvenir.

Which brings me to my actual point. The knowledge economy, too, is now passing into the realm of cultural memory. With generative AI, remote work, and a growing detachment from the office as a physical place, the knowledge worker’s world is already halfway to the museum.


The Tragedy of Ignaz Semmelweis

I saw this tweet by Alec Stapp and had to fact-check it:

The scientist who discovered that hand washing dramatically reduces mortality during child birth was ridiculed by his colleagues and committed to an insane asylum. In the asylum, he was beaten by the guards and died 2 weeks later after a wound on his hand had become gangrenous.

I asked Claude to help me verify this, and here’s what I learned:

The story is largely true, though it compresses a complex tragedy into a single tweet. Ignaz Semmelweis, a Hungarian physician, discovered in 1847 that requiring doctors to wash their hands with chlorinated lime solution dramatically reduced maternal mortality from puerperal fever. His colleagues were outraged at the suggestion that they were causing their patients’ deaths, and he faced enormous resistance.

In 1865, when Semmelweis was 47, his mental health had deteriorated—possibly from early Alzheimer’s, syphilis, or bipolar disorder. He was lured to a Viennese asylum under false pretenses, realized what was happening, and tried to leave. He was severely beaten by guards, put in a straitjacket, and confined to a darkened cell. After two weeks, he died from blood poisoning caused by a gangrenous wound on his right hand—likely from the beating.

The autopsy revealed he died from the same type of infection he had spent his career trying to prevent. It’s a perfect, horrible irony: the man who discovered that hand hygiene saves lives died from an infected wound because he was brutalized for his discovery.

What makes this story even more tragic is the scale of preventable death. When Semmelweis instituted hand-washing protocols, mortality rates dropped from 18% to around 1%. After he was forced out, the rates climbed back up. Women continued dying from preventable infections for decades until germ theory was finally accepted.

Semmelweis wasn’t just ahead of his time—he was punished for being right.


Things Claude Taught Me

Orthopraxis vs. Orthodoxy

Most Western religious traditions obsess over correct belief—orthodoxy. Christianity, Judaism, and Islam all place enormous emphasis on doctrinal purity: do you believe the right things about God, salvation, scripture? But many Eastern traditions prioritize correct practice—orthopraxis. In Hinduism and Buddhism, what you do often matters more than what you believe.

This isn’t just a theological curiosity—it reflects fundamentally different approaches to truth itself. Orthodox traditions treat truth as propositional: either Jesus is divine or he isn’t, either the Quran is the literal word of God or it isn’t. Orthoprax traditions treat truth as experiential: does this practice lead to liberation? Does it reduce suffering? The word comes from Greek: ortho (correct) + praxis (action). It’s the difference between faith as intellectual assent versus faith as embodied transformation.

The Rosetta Stone as Metaphor

The actual Rosetta Stone was a granodiorite slab discovered in 1799 by Napoleon’s soldiers in the Egyptian town of Rosetta. It contained the same royal decree written in three scripts: Ancient Greek (which scholars could read), Demotic (everyday Egyptian script), and Egyptian hieroglyphs (which had been a mystery for over a millennium). By comparing the known Greek text with the unknown hieroglyphs, scholars like Jean-François Champollion finally cracked the code.

When Adam Tooze calls China the “Rosetta Stone of modern development,” he’s making a profound analytical claim. China isn’t just another economy—it’s the key to understanding how modernity itself works. Just as the Rosetta Stone unlocked Egyptian civilization, understanding China’s transformation from rural poverty to industrial superpower unlocks the deeper patterns of how societies actually develop. Most development economics is built on Western models, but China’s path was different—and arguably more instructive for the majority of the world still trying to escape poverty.

The Problem with Utilitarianism

Utilitarianism seems appealingly rational: maximize good, minimize suffering, treat everyone’s happiness as equally valuable. Founded by Jeremy Bentham in the 18th century and refined by John Stuart Mill, it promised to make ethics scientific. No more arbitrary moral rules—just calculate what produces the greatest happiness for the greatest number.

But push this logic to its extremes and you get Sam Kriss’s gladiatorial nightmare. If a billion people derive intense pleasure from watching you suffer and die, utilitarian math says your torture becomes not just permissible but morally required. The framework has no room for human dignity, individual rights, or the intuition that some acts are simply wrong regardless of consequences. It can justify forced organ harvesting (save five lives by killing one healthy person), punishing the innocent (if it deters crime effectively), or utility monsters who experience so much pleasure that everyone else should be enslaved to serve them. See also: the trolley problem and effective altruism’s sometimes disturbing conclusions.

Pascal’s Wager, But for Solitude

Blaise Pascal’s famous wager argued we should believe in God because the potential upside (eternal bliss) is infinite while the downside (wasted Sundays) is minimal. It’s game theory applied to theology. But Pascal’s deeper psychological insight was more unsettling: “All of humanity’s problems stem from man’s inability to sit quietly in a room alone.”
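
To make the game-theory framing concrete, here’s a toy expected-value sketch; the finite payoffs are my own made-up stand-ins for the wager’s infinite stakes, not anything from Pascal.

```python
# Toy decision-matrix version of Pascal's wager.
# Finite stand-ins replace the "infinite" payoffs so the arithmetic is visible.

payoffs = {
    ("believe", True):  1e9,   # stand-in for infinite bliss
    ("believe", False): -1,    # "wasted Sundays"
    ("doubt",   True):  -1e9,  # stand-in for infinite loss
    ("doubt",   False): 1,     # a little worldly convenience
}

p_god = 0.001  # even a tiny probability tilts the calculation, which is Pascal's point

for choice in ("believe", "doubt"):
    expected = p_god * payoffs[(choice, True)] + (1 - p_god) * payoffs[(choice, False)]
    print(f"Expected value of choosing '{choice}': {expected:,.0f}")
```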

This wasn’t mere misanthropy—Pascal understood that our flight from solitude is really a flight from ourselves. When we’re alone with our thoughts, we confront uncomfortable truths: our mortality, our failures, the gap between who we are and who we pretend to be. So we fill every moment with distraction—what Pascal called divertissement. In his era, it was conversation, gambling, hunting. Today it’s smartphones, social media, the endless scroll. The anxiety that drives us to reach for our phones the moment we’re bored is the same anxiety Pascal identified 400 years ago. We’ve simply gotten better at avoiding ourselves.

The Philosophy of Solitude: What Great Minds Understood

Pascal wasn’t alone in recognizing solitude’s crucial role in human development. Friedrich Nietzsche saw isolation not as retreat but as forge: “The great epochs of our life are the occasions when we gain the courage to rebaptize our evil as what is best in us.” For Nietzsche, real transformation required leaving the crowd—and its comforting opinions—behind. Solitude strips away the masks we wear for others, forcing us to confront who we actually are beneath the performance.

Virginia Woolf understood that solitude wasn’t luxury but necessity: “A woman must have money and a room of her own if she is to write fiction.” Substitute “mind” for “woman” and it still holds. Creativity requires space—mental and physical—away from the demands and judgments of others. In her essay “A Room of One’s Own,” Woolf argued that without solitude, we can’t think our own thoughts; we can only recycle the thoughts others have given us.

Henry David Thoreau took this further, spending two years alone at Walden Pond to “live deliberately.” His experiment wasn’t about rejecting society but about understanding what parts of social life were essential versus what were merely habit. “I went to the woods to live deliberately, to front only the essential facts of life, and see if I could not learn what it had to teach, and not, when I came to die, discover that I had not lived.”

What these thinkers understood is that solitude isn’t just the absence of others—it’s the presence of self. Without regular periods of quiet reflection, we become strangers to our own minds. We lose touch with our genuine desires, values, and reactions, becoming instead collections of received opinions and social reflexes. The modern aversion to boredom, to sitting with uncomfortable thoughts, to being alone with ourselves, may be one of the most costly psychological changes of our era.

The Semmelweis Effect

There’s actually a name for what happened to Ignaz Semmelweis: the Semmelweis effect (or Semmelweis reflex). It describes the tendency to reject new evidence that contradicts established beliefs or paradigms. The medical establishment’s violent rejection of hand-washing wasn’t just stupidity—it was a predictable psychological response to information that threatened their worldview.

Think about what Semmelweis was really saying: that respected doctors were unknowingly killing their patients through contaminated hands. This wasn’t just a technical correction—it was an indictment of their competence and moral standing. Admitting he was right meant admitting they had been wrong about something fundamental, that their medical education was incomplete, that they had blood on their hands (literally). The cognitive dissonance was unbearable.

We see this pattern repeat constantly throughout scientific history: continental drift (rejected for 50 years), heliocentrism (Galileo under house arrest), germ theory (doctors insulted by the suggestion they were unclean), stomach ulcers caused by bacteria (Barry Marshall had to infect himself to prove it). Truth doesn’t always win immediately; sometimes it has to wait for the old guard to die off. As physicist Max Planck put it: “Science advances one funeral at a time.”


End of dump. For now.
