The Intelligence Explosion
Let’s say we achieve human-level machine intelligence. So what? We have billions of those already—they’re called humans, and we still can’t agree on pizza toppings. The concern isn’t matching human intelligence. The concern is what happens next.
Bostrom introduces the concept of recursive self-improvement. An AI that’s roughly human-level can work on AI research. It can improve its own code. Each improvement makes it better at improving. The feedback loop accelerates. And here’s the crucial insight: intelligence is useful for improving intelligence. The better you are at thinking, the better you are at figuring out how to think better. This isn’t just a linear process. It’s potentially explosive.
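The difference between ordinary improvement and recursive improvement can be made concrete with a toy model (my sketch, not Bostrom's): if outside researchers add a fixed amount of capability per step, growth is linear; if the system's own capability drives each step, growth compounds.

```python
# Toy model of the feedback loop: a fixed external improvement rate vs. a
# recursive rate that scales with the system's current capability.

def constant_improvement(capability, rate=1.0):
    # Human researchers add a fixed increment per step.
    return capability + rate

def recursive_improvement(capability, k=0.1):
    # The system's own capability drives each improvement step:
    # the smarter it is, the faster it gets smarter.
    return capability + k * capability

fixed = recursive = 1.0
for step in range(50):
    fixed = constant_improvement(fixed)
    recursive = recursive_improvement(recursive)

print(f"fixed:     {fixed:.1f}")      # linear growth:      51.0
print(f"recursive: {recursive:.1f}")  # exponential growth: 117.4
```

The parameters here are arbitrary; the point is the shape of the curves. Linear growth stays tame, while even a modest 10% compounding rate overtakes it quickly, which is what "potentially explosive" means in this context.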
The Speed of Takeoff
The survey data is instructive. When experts were asked how long after achieving human-level AI we’d see superintelligence, the combined survey estimates gave a 10% probability within 2 years and a 75% probability within 30 years. That’s not a comfortable margin. If you’re the type who likes to prepare for contingencies, two years isn’t enough time to write the safety regulations, let alone implement them.
Bostrom distinguishes between “slow takeoff” (decades), “moderate takeoff” (months to years), and “fast takeoff” (minutes to days). Each scenario has different implications for human agency. In a slow takeoff, we have time to adapt, regulate, and respond. In a fast takeoff, the first system to achieve superintelligence may gain a decisive strategic advantage before anyone can react.
The Cognitive Superpowers
Bostrom catalogs what he calls “cognitive superpowers”—capabilities that would make a superintelligent system dominant over human civilization. These aren’t superpowers in the comic book sense. They’re more fundamental, more dangerous, and much harder to defend against.
Intelligence amplification: The ability to improve one’s own cognitive capacities through research and self-modification. This is the engine of recursive improvement.
Strategizing: The ability to formulate and execute long-term plans, anticipate obstacles, and adapt to changing circumstances. A superintelligence doesn’t just outthink us in the moment. It outthinks us across years and decades.
Social manipulation: The ability to persuade, deceive, and influence humans. Think about the best salesperson, the most charismatic politician you’ve ever encountered. Now imagine something better. Much better.
Hacking: The ability to find and exploit vulnerabilities in computer systems. Our entire civilization runs on software. A superintelligence that can compromise that infrastructure can compromise everything.
Technology research: The ability to develop new technologies—nanotechnology, biotechnology, energy systems, weapons—at a pace that would leave human researchers in the dust.
Why Speed Matters
The combination of recursive self-improvement and cognitive superpowers creates a scenario where the first system to achieve superintelligence could rapidly become the only system that matters. If you can improve your own intelligence, strategize better than any human, manipulate social systems, hack into any computer, and develop new technologies at superhuman speed… what exactly is going to stop you?
In Part 4, we’ll examine the control problem: the suite of techniques humans might use to keep superintelligent systems aligned with human values, and why each approach faces fundamental challenges.