This week’s digest

Venkatesh Rao is consistently one of the most thoughtful people online. He is one of my favorite writers and one of the sharpest observers of what is happening around us, especially in technology. I have been following his thoughts on AI, specifically on making things with AI, and more specifically on writing with AI.

Right now, there is a lot of angst around this. A lot of it, frankly, is just stupidity: this idea that if writing is touched by AI, it immediately stops being good. “AI-written” or “AI-assisted” has become a kind of weird epithet, as though all human writing is some sanctified thing that exists in a divine, unsullied realm. And somehow, the moment AI interacts with it, it gets tainted and becomes bad.

There is clearly a bit of a moral panic going on. People are incessantly labeling anything AI touches as a flop, or as inherently inferior, and so on. It is not only irritating, but also funny, because people seem to forget something basic: it is not AI that is flawed; most people are. People just need the sense to listen to themselves speak for a while, maybe in front of a mirror, and they will realize that even the median stochastic vomit produced by an LLM is better than 80 percent of the things we humans say.

But anyway, this is a weird moment. A new technology has entered the fray and is reshaping the way people think, create, write, and so on. This is a huge technological shift, and these old-fashioned, puritanical opinions are probably par for the course.

Take readers’ opinions seriously, but not too seriously. I’d say 1/6 have real spiritual-angst resistance. Ignore them. Another 1/6 will never get past attachment to gleegloom over the not-X-but-Y “tell” du jour. Ignore them too. Counterintuitively, at the other end, 1/3 of readers will be delusionally committed to declaring any and all AI writing “good” (similar to how committed feminists might insist that any work done by women is necessarily good). Ignore them too. The middle third is the target audience. People who will simply treat AI writing like human writing — keep reading if they like a particular piece but abandon if they don’t. You want those who discriminate at piece level, not class level (AI classists/racists?).

Finally: Reading good AI generated writing is a new kind of literacy that’s currently evolving from pidgin to creole. Most people not only lack this literacy (beyond having rudimentary hostile radars; a bit like being on alert for human accents that trigger distrust for you), they are unaware there even is a literacy taking shape. Developing this literacy yourself is the main goal for you as a writer. Catalyzing it in the eligible subset of readers is a nice extra. For me, this practically translates to a personal re-read test. Do I go back and read one of my own sloptraptions for the content later? About a third or so pass this test strongly.


Get ready to be a part of the permanent underclass!

During the industrial revolution, laborers displaced by machines were able to move into new and growing service industries, so the economy kept expanding and unemployment stayed low while incomes and living standards rose with the higher productivity that comes from technological progress. But if AI is eventually able to do everything that workers do today, there are major doubts about where the jobs and incomes will come from. These same doubts have been around ever since the Luddites, yet the progress that follows rising productivity and new technologies has always created new industries that provide incomes for labor.

However, one of the consequences of digital technology has been an increasing share of income going to the owners of capital and a declining share to labor (Exhibit 1A). AI is likely to exacerbate this trend. The growing share of capital as opposed to labor income is also implicit in the rising level of wealth relative to income. Since the 1960s, the ratio of household wealth to income has risen from around five times to about eight times (Exhibit 1B).

An important implication of this transition from mostly labor income to mostly capital income is a need to broaden ownership of capital to mitigate the effects of a shrinking labor income share on the broader population. To some extent, this is already happening, at least in the U.S., where ownership of equities is much wider than in most other economies. Without policies to make the ownership of capital and its growing income share more widespread, political polarization and instability are likely to increase, as we have seen in recent years.


This has been my experience as well. I am now doing more things than ever, both personally and professionally. In fact, after LLMs, I’m starting to feel like there isn’t enough time to do all the things I want to do. Work fills available tokens?

It’s remarkable how much of my work is completely automated with AI, and yet I am still necessary. The amount of time I personally have to spend working just isn’t going down. Instead, the leverage of my own time is going up. Every second I spend not working becomes more painful.

I have been working less and less recently. I still work a crazy amount. More than I ever have. Every spare second. It’s just that, as my family grows, the opportunity cost of working goes up just as fast. That’s the only thing that is saving me, I think.

I also relate to the family angle. Given the miserable state of the world, my feeling that we’re headed for an age of discord and increased economic instability only keeps getting stronger, which means LLMs now give me the leverage to work harder.


China is the new pharma superpower:

Until recently, China’s drug industry was best known for making generic drugs, supplying ingredients and running trials for Western firms. Over the past decade it has reinvented itself. Approval processes have been streamlined, priority reviews conducted for drugs tackling critical conditions, and regulations brought closer to international standards. The workforce at China’s drug regulator quadrupled between 2015 and 2018, and a backlog of 20,000 new drug applications was cleared in just two years. The time taken to secure approval for human trials shrank from 501 days to 87. And the output of new medicines soared. In 2015 China approved only 11 treatments, mostly Western imports. By 2024 the figure had risen to 93, with 42% developed domestically.


The illusion of technological omnipotence:

Yet never has so much been seen, so precisely, by so many people who understand so little of what they are seeing. A system can tell you where a man is. It cannot tell you what his death will mean for a nation. Such systems are trained on behavior, not on meaning — they can track what an adversary does but not what he fears, honors, remembers or would die for.

This is the recurring illusion of overequipped leaders: Because they can map the battle space, they think they understand the war. But war is never merely a technical contest. It is shaped by grievance, sacred narrative, the memory of past humiliations and the desire for revenge. Those are not atmospheric complications added to an otherwise technical enterprise. They are what the war is about.