Dmytro Nikolaiev

The Underrated Advantage of AI

There's a common opinion that diversity of intelligence is a good thing. Not necessarily the dominant view in every setting, but it's an intuition most of us share.

In classical ML, this shows up as ensembles: multiple models often beat a single one because their errors don't perfectly overlap, a concept known as the wisdom of the crowd. In the current AI discourse, it fuels the multi-agent narrative: instead of "one huge AGI model", you'd rather have many agents with different roles, priors, specialties, and tools[1]. I don't challenge that idea, but I think something else is underrepresented:

One of the biggest practical advantages of machine intelligence over human intelligence at this point in time is the absence of fatigue.

In real work, the limiting factor isn't always raw intelligence. It's focus. Or patience, whatever you want to call it. Andrej Karpathy recently called it stamina[2].

AI is never frustrated. Never embarrassed. Never emotionally attached to an approach. Never hesitant to start something it might not finish. Doesn't have preferences about what to work on. Doesn't pay the cost of switching between tasks[3].

Personalized AI tutors might not match the best human tutors, but they can answer the same question for the 10th time with the same patience, and I think that's worth a lot; not to mention they do it instantly and cheaply.

It's not that AI "wants" to persist; it was just trained that way. There's no "want" or "don't want" - it simply can't operate any other way than doing its best to solve the task[4].

1000 Copies

Let's come back to the statement behind the multi-agent narrative: "copying an AGI model 1000 times won't be as useful as having 1000 different AGI models."

Again, I actually agree with the direction of that statement. But I think it's often driven by over-reliance on human intuitions about human teams, and those intuitions smuggle in something important: humans don't just differ in skill - they also differ in energy, attention, and perspective over time.

Imagine Alice works full-time on a project. Bob is only part-time. If Bob has better "taste" — stronger intuition for what matters — the value is obvious. He can look at Alice's current direction and say: "This is promising. That is a dead end."

But here's the thing: Bob doesn't need to be smarter than Alice for this to work. Even if they're equally skilled, Bob brings something Alice can't give herself - fresh eyes. He can see the project without being trapped inside it.

And that's where AI feels qualitatively different.

A copied model doesn't have tunnel vision and doesn't get tired. It can cheaply produce multiple independent passes on the same question, without the cost humans pay for context switching and burnout.

So even if "1000 copies of the same model" aren't as diverse as "1000 different models"[5], they can still be surprisingly useful because they provide something humans struggle to scale: fresh attention.
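This is the wisdom-of-the-crowd argument from the opening, applied to copies of a single model. Below is a minimal simulation sketch of the idea, assuming a toy setup where each question has an inherent difficulty shared by all copies (the Beta prior, error rates, and vote count are illustrative assumptions, not measurements):

```python
import numpy as np

rng = np.random.default_rng(0)

n_questions = 20_000
n_passes = 9  # independent samples ("fresh attention") on each question

# Toy assumption: every question has an inherent per-pass error rate.
# Hard questions stay hard for every copy of the model, so errors are
# correlated through the shared weights; only sampling noise is independent.
p_error = rng.beta(2, 5, size=n_questions)  # mean error rate ~0.29

# A single pass per question.
single_wrong = rng.random(n_questions) < p_error

# n_passes independent passes per question, aggregated by majority vote.
wrong_votes = rng.binomial(n_passes, p_error)
majority_wrong = wrong_votes > n_passes / 2

print(f"single-pass accuracy:   {1 - single_wrong.mean():.3f}")
print(f"majority-vote accuracy: {1 - majority_wrong.mean():.3f}")
```

The design choice matters: because difficulty is shared across passes, voting helps on questions the model can plausibly get right, while questions where the shared weights are reliably wrong stay wrong - which echoes footnote 5: copies are less diverse than genuinely different models.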

Intelligence is composite and fuzzy. We mix together a lot of things: knowledge, reasoning, taste, creativity, persistence, attention, and energy, to name a few. Diversity of priors is one axis; attention and fatigue are another. In human teams, those axes are entangled - so we often attribute the team's advantage to "diversity" when part of it is simply "someone wasn't stuck"[6].

AI's underrated advantage isn't just what it knows. It's that it can keep trying without running out of energy. Until, well, it runs out of actual energy :).


  1. I personally like the multi-agent narrative too; it's often an optimization story, though. I like to think about it from several perspectives: performance (computation spread out, as in Chain-of-Thought), customizability (more moving pieces), and the classical "microservice" argument (components are easier to manage, evaluate & think about in isolation). And there's a marketing argument too.

  2. Attention is another option, ironically.

  3. It has all of that up to a point, but not at the scale humans do.

  4. Models do sometimes give up on long-horizon tasks, though this improves with newer models and better long-context management.

  5. One could also argue that different LLMs are not so different, since they're trained on roughly the same data with roughly the same architecture.

  6. This is why intelligence(Alice) + intelligence(Bob) ≠ intelligence(Alice + Bob). It can be lower if they coordinate poorly, or higher if they truly complement each other; same for bigger teams.