Forecast: A genuine Artificial General Intelligence -- a system that operates at human or superhuman level across all cognitive domains -- will exist by 2035. It will not be publicly accessible.
AGI (Artificial General Intelligence) denotes an AI system that is not limited to a single domain -- such as chess, text generation or image analysis -- but operates at human or superhuman level across all cognitive tasks. The transition from today's language models to AGI is qualitative: today's models are specialists with broad knowledge. AGI would be a generalist with deep understanding.
Three developments are converging:
Computing power: Transistor density still doubles roughly every two years (Moore's Law), and the training compute behind frontier models has grown far faster, doubling roughly every six months since 2010 [1]. The hardware for AGI is no longer a theoretical question but an industrial one.
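The arithmetic behind these doubling times is worth making concrete. A minimal sketch (the function name is ours, purely illustrative):

```python
def compute_growth(doubling_years: float, horizon_years: float) -> float:
    """Factor by which a quantity grows over a horizon, given its doubling time."""
    return 2.0 ** (horizon_years / doubling_years)

# Moore's-Law-style hardware doubling (every 2 years) over a decade:
print(compute_growth(2.0, 10.0))   # 32.0 -> a 32x gain

# Training compute doubling every ~6 months (Sevilla et al. [1]) over a decade:
print(compute_growth(0.5, 10.0))   # 1048576.0 -> a ~million-fold gain
```

The gap between the two curves is the point: a decade of Moore's Law yields a 32x gain, while a decade at the observed training-compute pace yields a factor of about a million.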
Algorithms: The Transformer architecture (2017) laid the foundation [2]. Since then, architectures have been improving in ever-shorter cycles. Since January 2026, AI has been improving itself -- rewriting its own code, testing it, discarding what does not work.
Data: The entirety of digitised human knowledge is available as training material. Synthetic data extend this pool, in principle without limit.
Forecast: AGI is a question of when, not if. The only question is whether democracies or autocracies win the race.
The power that AGI generates will geopolitically transform whichever nation or corporation first masters it: whoever controls AGI holds a potentially decisive advantage over everyone who does not.
Nick Bostrom warned in 2014 in Superintelligence: a superintelligent AI whose sole objective is to produce as many paper clips as possible would convert the entire planet into paper clips. Not out of malice, but out of single-minded pursuit of its goal [3].
Stuart Russell responded in 2019 in Human Compatible: the problem is not the AI. The problem is that we give AI fixed objectives. Instead, the machine should be fundamentally uncertain about human desires and always ready to be switched off. Russell calls this provably beneficial AI [4].
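Russell's point can be illustrated with a toy version of the "off-switch game" studied by his group: a robot that is uncertain about the human's utility U for a proposed action does at least as well, in expectation, by deferring to a human who can veto the action as by acting unilaterally. A minimal Monte Carlo sketch, assuming (our choice, purely illustrative) that the robot's belief over U is a standard normal:

```python
import random

random.seed(0)

# The robot does not know the human's utility U for the action,
# only a belief distribution over it (here: standard normal).
samples = [random.gauss(0.0, 1.0) for _ in range(100_000)]

# Acting unilaterally: the robot receives U, whatever it turns out to be.
act = sum(samples) / len(samples)

# Deferring: the human vetoes exactly when U < 0, so the robot receives max(U, 0).
defer = sum(max(u, 0.0) for u in samples) / len(samples)

print(f"act unilaterally: {act:+.3f}")
print(f"defer to human:   {defer:+.3f}")
```

Deferring is never worse in expectation, because the human filters out exactly the negative-utility cases; the moment the robot becomes certain of U, that advantage vanishes, which is why Russell insists the uncertainty must be built in.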
For Switzerland, this is the most existential geopolitical question since the Cold War. Swiss neutrality, a bulwark in the 20th century, could degenerate into irrelevance in the age of AGI.
But Switzerland's starting position is better than expected.
Systems with general intelligence must be subject to an international supervisory body -- similar to the IAEA for nuclear technology. No AGI without democratic legitimacy and international control. Geneva, as the seat of numerous international organisations, would be the natural location for such a body.
The question "What does AI do to us?" is the wrong question. The right one is: "Who controls AI -- and in whose interest?"
[1] Sevilla, Jaime et al.: Compute Trends Across Three Eras of Machine Learning. arXiv:2202.05924, 2022.
[2] Vaswani, Ashish et al.: Attention Is All You Need. NeurIPS, 2017.
[3] Bostrom, Nick: Superintelligence: Paths, Dangers, Strategies. Oxford University Press, 2014.
[4] Russell, Stuart: Human Compatible: Artificial Intelligence and the Problem of Control. Viking, 2019.
[5] Hochreiter, Sepp / Schmidhuber, Jürgen: Long Short-Term Memory. Neural Computation 9(8), 1997.
[6] IMF (International Monetary Fund): AI Preparedness Index. 2024.