A true Artificial General Intelligence -- a system that operates at human or superhuman level across all cognitive domains -- will likely exist by 2035. It will not be publicly accessible. It will be under state or quasi-state control. [1]
The power it confers will transform the geopolitical standing of whichever nation or corporation masters it first. For Switzerland and Europe, this is the most existential geopolitical question since the Cold War.
Computing power doubles roughly every two years. Algorithms are improving in parallel. And since early 2026, AI has been improving itself -- a feedback loop that accelerates development exponentially. The question is no longer whether, but when. [2]
The decisive question is: who wins the race?
| Actor | Strengths | Risks |
|---|---|---|
| USA (OpenAI, Google, Anthropic, Meta) | Largest models, most computing power, venture capital | Corporate power, little regulation |
| China (Baidu, Alibaba, state programmes) | State coordination, vast data volumes, no data protection concerns | Authoritarian control, censorship in training data |
| Europe (Mistral, Aleph Alpha, DeepMind London) | Regulatory expertise (EU AI Act), strong fundamental research | Fragmentation, too little venture capital |
Whoever masters AGI first potentially controls the decisive levers of power: financial markets, critical infrastructure, information flows and military capabilities.
This is not science fiction. The US Department of Defense classified AI in 2024 as the second most important technology for national security. China is investing billions in a state-coordinated manner. [3]
The race for AGI is also a contest between systems.
Advantage of democracies: Open research, peer review, ethical guardrails, capacity for correction through public debate. The world's best AI researchers work disproportionately in democratic countries.
Advantage of autocracies: Faster decision-making, no data protection concerns, state-coordinated pooling of resources. China can implement in months what takes Europe years.
The historical comparison to the Cold War is not an exaggeration. The atomic bomb defined the world order of the 20th century. AGI will define the next one -- with the difference that its effects will not be limited to the military sphere. [4]
Swiss neutrality was a protective wall in the 20th century. In a world of physical conflicts and territorial borders, a small country could navigate between power blocs by staying out of them.
In the age of AGI, neutrality could degenerate into irrelevance. Why?
AGI knows no borders. A system developed in the USA or China has a global impact -- on financial markets, on critical infrastructure, on information flows.
Neutrality does not protect against dependence. If Switzerland obtains its entire AI infrastructure from US corporations, it is not neutral -- it is dependent.
The AGI question demands taking sides. Should AGI be under democratic control? Or does one accept that authoritarian regimes set the rules? Neutrality on this question is not a virtue but an abdication of responsibility.
Despite the risks, Switzerland's starting position is better than many believe:

World-class research: ETH Zurich and EPFL rank among the world's leading universities. [5]

AI research heritage: foundational work such as the LSTM architecture, a cornerstone of modern deep learning, is tied to IDSIA in Lugano. [6]

High readiness: Switzerland ranks third worldwide in the IMF's AI Preparedness Index. [7]
International AGI control: Companies developing systems with general intelligence must be subject to an international supervisory authority -- similar to the IAEA for nuclear technology. No AGI without democratic legitimacy and international oversight. Geneva would be the natural location for such an authority. [8]
Ban on autonomous weapons: Switzerland must use its role as a neutral mediating country and actively advocate for an international moratorium on autonomous lethal AI weapon systems.
National AI sovereignty: A national AI competence centre linking ETH, EPFL, IDSIA and the universities of applied sciences with industry. At least 500 million francs in public funding. Because whoever imports AI exports value creation. Whoever exports AI imports prosperity.
[1] Bostrom, Nick: Superintelligence: Paths, Dangers, Strategies. Oxford University Press, 2014.
[2] Amodei, Dario: Machines of Loving Grace. Anthropic Blog, October 2024.
[3] US Department of Defense: National Defense Strategy. 2024. AI listed as a strategic technology.
[5] QS World University Rankings 2025. ETH Zurich rank 7, EPFL rank 12.
[6] Hochreiter, S. / Schmidhuber, J.: Long Short-Term Memory. In: Neural Computation 9(8), 1997.
[7] International Monetary Fund (IMF): AI Preparedness Index. 2024. Switzerland rank 3.
[8] UN Secretary-General's High-Level Advisory Body on Artificial Intelligence: Governing AI for Humanity. Interim Report, 2024.