Let’s be fair to the skeptics. Bostrom’s arguments aren’t universally accepted, and there are legitimate reasons to question the whole framework. Before we conclude this series, let’s walk through the main objections.

Four Major Objections

Objection 1: Intelligence isn’t what we think it is. Maybe there’s no such thing as general intelligence—only a collection of specialized abilities that don’t transfer between domains. In this view, superintelligence is impossible because there’s no scale to be super on.

Objection 2: Physical limits bite harder than we think. Maybe the brain is already near-optimal for its size and energy budget. Maybe there are thermodynamic or quantum limits on computation that prevent the kind of speedups Bostrom’s scenarios require.

Objection 3: Societal collapse gets us first. Maybe we don’t make it to superintelligence because we destroy ourselves through nuclear war, bioweapons, climate catastrophe, or some other extinction-level threat.

Objection 4: AI development will be gradual and controllable. Maybe the intelligence explosion is a fantasy. Maybe improvements in AI will be incremental, giving us time to adapt, regulate, and integrate. This is arguably the most hopeful objection, but it’s also the one under the most strain from the current pace of AI progress.

What Now?

So where does this leave us? Bostrom’s book is ultimately an exercise in perspective-setting. He’s not trying to convince you that superintelligence is definitely coming in 2040, or that we’re definitely all going to die when it arrives. He’s trying to convince you that the possibility is real enough, the stakes are high enough, and the uncertainty is large enough that we should be thinking seriously about these questions.

The expert surveys Bostrom summarizes suggest that human-level AI has a “fairly sizeable chance of being developed by mid-century, and a non-trivial chance of being developed considerably sooner or much later.” That’s a wide distribution. It covers scenarios where we have decades to prepare and scenarios where we have years.

The key insight isn’t about prediction. It’s about preparation. If there’s even a small probability of a superintelligence transition in the coming decades—and there is—then the expected value of getting it right is astronomical. The cosmic endowment dwarfs any other consideration.
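To see why prediction matters less than preparation, here is a minimal back-of-envelope version of the expected-value argument, with placeholder numbers chosen purely for illustration rather than taken from the book:

    E[value of preparation] = p × V ≈ 0.01 × 10^50 = 10^48

where p is the probability of a superintelligence transition in the coming decades and V is the value at stake (the cosmic endowment). Shrink p by several more orders of magnitude and the product still dwarfs the stakes of any ordinary policy question. That robustness to the exact numbers, not any confident forecast, is what carries the argument.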

The Last Invention

We’re in a peculiar moment in history. We’re building systems that might become our successors, and we’re doing it without a clear understanding of what that means or how to control it. Every day, more compute is dedicated to AI research. Every month, new capabilities emerge that would have seemed impossible a year before. The curve is bending upward, and we’re all just hoping it doesn’t break.

Bostrom’s contribution is to give us the conceptual framework to think about these questions clearly. The paths to superintelligence. The dynamics of takeoff. The nature of the control problem. The scale of the stakes. He doesn’t provide answers—the answers don’t exist yet—but he provides the right questions. And in a domain this important, asking the right questions is half the battle.

The last invention. The final technology. The thing that makes everything else obsolete. We’re building it now, or something like it. The only question is whether we’ll be ready when it arrives.

References & Further Reading

[1] Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.

[2] Yudkowsky, E. (2008). Artificial Intelligence as a Positive and Negative Factor in Global Risk. Global Catastrophic Risks, Oxford University Press.

[3] Grace, K. et al. (2018). When will AI exceed human performance? Evidence from AI experts. Journal of Artificial Intelligence Research, 62, 729-754.

[4] Silver, D. et al. (2016). Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587), 484-489.


SERIES COMPLETE
