Gemini’s web chat app is unusable and, to be honest, despite all the hype and hoopla, it kind of sucks. But surprisingly, my experience using their APIs has been really, really good, even compared to the likes of Claude and OpenAI in a lot of cases.

For high-volume tasks, I’ve been heavily using Gemini 3 Flash Preview in Project Akshara to digitize books at scale, and Flash has been phenomenally reliable. It has turned out to be a wonderful workhorse model.

For the longest time, I could never really make Gemini 3 and 3.1 Pro work because of their surprisingly low rate limits. But when I tried them on a few translation tasks, the results were, again, surprisingly good. In one particular case, I was modernizing Francis Bacon’s essays, and Pro did much better than Sonnet. And I’m saying this as a huge fan of Claude.

So, if you’re looking for a reliable workhorse model for high-volume tasks like processing information, cleaning it up, organizing it, or extracting text from PDFs and images, my experience with Gemini 3 Flash Preview has been really, really good. Oh, and it helps that Gemini 3 Flash is dirt cheap.