The apocalyptic scenarios -- Terminator, Skynet, AI as the destroyer of humanity -- will not materialise. What will materialise is subtler and more unsettling. [1]
Humanity makes no decisions. Not the wrong ones -- none at all. It lets it happen. Gradually. Comfortably. Applauding.
AI recommends. The human decides. Still.
At this stage, everything feels harmless. Even helpful. The recommendations are mostly good. Better than what one would have chosen oneself. One gets used to it. One trusts.
AI suggests. The human nods approval. Mostly.
The transition from "recommendation" to "suggestion" is seamless. A suggestion that one accepts in 95 per cent of cases is de facto a decision. The human becomes an approval body.
AI decides. The human is informed. Sometimes.
No one remembers the moment when a human last made an important decision, because there was no such moment. It was a process, not an event. Like the frog-in-hot-water analogy: the temperature rose so slowly that the frog never jumped.
This worst case requires no malicious intent, no conspiracy, no coup. It requires only convenience. And of that we have plenty.
Every individual delegation sounds reasonable on its own.
None of these small steps is stupid. Each one is rational. And in aggregate, they lead into the golden cage.
Harari ventured a prediction in Homo Deus: in the 21st century, humans will lose their economic and military usefulness. Algorithms will know them better than they know themselves. A "useless class" will emerge -- people who are economically and politically irrelevant. Not poor. Not oppressed. Simply superfluous. [3]
Harari's deepest insight is philosophical: the decoupling of intelligence and consciousness. For millennia, only conscious beings could perform intelligent tasks. Now a non-conscious intelligence is emerging that can do all of it better. If intelligence can do without consciousness -- what does the world still need consciousness for?
Harari illustrates this with the chicken. Over 26 billion chickens live on this planet. Biologically speaking: a triumph. But their lives are no model to aspire to. Domestication brought proliferation, but not purpose. [3]
Could humanity face a similar fate? Numerous, provided for, but without significance?
Nick Bostrom illustrated the risk with the paperclip maximiser: a superintelligent AI whose sole objective is to produce as many paperclips as possible transforms the entire planet into paperclips. Not out of malice. Out of duty. [4]
Stuart Russell responded with an elegant thesis: the problem is not AI. The problem is that we give AI fixed goals. Instead, the machine should be fundamentally uncertain about human desires, learn them from our behaviour, and be willing at all times to let itself be switched off. [5]
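Russell's thesis has a precise core, known in the literature as the off-switch argument: a machine that is genuinely uncertain about human preferences does better, by its own lights, deferring to a human veto than acting on its raw estimate. The toy calculation below sketches that inequality under illustrative assumptions of my own (the Gaussian belief over the action's value and the idealised veto policy are not from the text):

```python
import random

random.seed(0)

def expected_value(samples, policy):
    """Average payoff of a policy over the robot's belief samples."""
    return sum(policy(u) for u in samples) / len(samples)

# The robot's belief about how much the human values its proposed action:
# uncertain and centred near zero -- it genuinely does not know what we want.
belief = [random.gauss(0.0, 1.0) for _ in range(100_000)]

# Policy 1: act directly on the estimate, human never consulted.
act_directly = expected_value(belief, lambda u: u)

# Policy 2: propose the action and accept a human veto -- the (idealised)
# human approves only when the true value u is positive.
defer = expected_value(belief, lambda u: max(u, 0.0))

print(f"act directly: {act_directly:+.3f}, defer to veto: {defer:+.3f}")
```

Because max(u, 0) is at least u for every sample, the deferring policy can never do worse in expectation than acting directly, and it strictly gains whenever the belief puts mass on both good and bad outcomes -- which is exactly Russell's condition of fundamental uncertainty. A machine with a fixed, certain goal sees no such gain, and no reason to stay switch-offable.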
Whether this works in practice depends on an uncomfortable preliminary question: does human behaviour reflect human values? We smoke even though we know it kills. We destroy ecosystems even though we know we need them. What values should the machine learn from this behaviour?
A shrinking humanity, administered by an ever-smarter AI -- at some point, the AI could hit on the idea of managing us too. Not hostilely. Caringly. The way we establish national parks to protect endangered species. The way we administer reserves for indigenous peoples. With the best of conscience. With the quiet condescension of one who knows that the other could not cope alone.
That is the golden cage. It is comfortable. It is safe. It is warm. And the door stands open -- but no one walks through it, because outside there is nothing left that can be managed without the machines.
In the best case, humans remain curious, uncomfortable, defiant. They use AI, but they do not let themselves be replaced by it. They preserve the ability to think, even when the machine thinks faster. They accept that freedom is more strenuous than comfort -- and choose it nonetheless.
In the worst case, they do nothing. And that is precisely the problem.
Probability: higher than most believe. Because this path requires no malicious intent. It requires only convenience.
[1] Russell, Stuart: Human Compatible: Artificial Intelligence and the Problem of Control. Viking, 2019.
[2] Angwin, Julia et al.: "Machine Bias." ProPublica, 23 May 2016. On algorithmic decision-making in the justice system.
[3] Harari, Yuval Noah: Homo Deus: A Brief History of Tomorrow. Harvill Secker, 2016.
[4] Bostrom, Nick: Superintelligence: Paths, Dangers, Strategies. Oxford University Press, 2014.
[5] Russell, Stuart: Human Compatible: Artificial Intelligence and the Problem of Control. Viking, 2019.