Whoever controls AGI potentially controls the global economy and critical cybersecurity infrastructure, and gains military superiority.
Companies developing artificial general intelligence (AGI) must be subject to an international supervisory authority -- similar to the IAEA for nuclear technology. No AGI without democratic legitimacy and international oversight.
Artificial General Intelligence (AGI) denotes an AI system that operates at human or superhuman level across all cognitive domains -- not just in a specialised task (such as chess or text generation), but in every intellectual activity [1].
The leading AI labs -- OpenAI, Google DeepMind, Anthropic -- publicly declare that AGI is their goal. Sam Altman (OpenAI) spoke in 2024 of "a few years" [2]. Dario Amodei (Anthropic) described the development as "a matter of scaling" [3]. Computing power doubles every two years. Algorithms are improving even faster. And since January 2026, AI has increasingly been improving itself.
The International Atomic Energy Agency (IAEA) was founded in 1957 to promote the civilian use of nuclear energy while simultaneously preventing the proliferation of nuclear weapons. The model rests on three pillars [4]:

- Safeguards and verification: inspections that ensure nuclear material is not diverted to weapons programmes.
- Safety and security: binding standards for the operation of nuclear facilities.
- Science and technology: support for the peaceful use of nuclear energy.
An international AI supervisory authority could function along the same lines:
| IAEA | AI Oversight |
|---|---|
| Nuclear material | Computing capacity above threshold |
| Inspections of facilities | Audits of training runs |
| Safeguards against weapons development | Safety tests before deployment |
| Reporting obligations | Transparency about capabilities and risks |
OpenAI was founded in 2015 as a non-profit organisation -- with the stated goal of developing AGI "for the benefit of all humanity". Ten years later, OpenAI is a for-profit company with a valuation of over 150 billion dollars [5]. History shows: self-imposed commitments do not hold when billions are at stake.
- Swiss initiative: Switzerland tables a proposal for an international AI supervisory authority at the UN General Assembly.
- Headquarters in Geneva: The authority should be based in Geneva -- alongside the ICRC, WHO and WTO.
- Threshold: Any AI system trained with more than a defined computing capacity (e.g. 10^26 FLOP) is subject to oversight (see the sketch after this list for how such a threshold can be estimated).
- Safety tests: Before the release of a frontier model, independent red-team tests must be conducted and the results published.
- Democratic legitimacy: The supervisory authority reports to a body of elected representatives, not solely to government delegates.
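How would such a compute threshold be applied in practice? Total training compute is commonly approximated with the rule of thumb of roughly 6 FLOP per model parameter per training token. The following minimal sketch (in Python; the parameter and token counts are hypothetical examples, not figures from any actual lab) shows how a reviewer could estimate whether a planned training run crosses the 10^26 FLOP line.

```python
# Minimal sketch: check whether a planned training run crosses a compute threshold.
# Uses the common approximation: training FLOP ≈ 6 × parameters × training tokens.
# The model sizes below are hypothetical examples, not figures from any actual lab.

OVERSIGHT_THRESHOLD_FLOP = 1e26  # threshold proposed in the text


def estimated_training_flop(parameters: float, training_tokens: float) -> float:
    """Approximate total training compute in FLOP (6·N·D rule of thumb)."""
    return 6 * parameters * training_tokens


def requires_oversight(parameters: float, training_tokens: float) -> bool:
    """True if the estimated compute meets or exceeds the oversight threshold."""
    return estimated_training_flop(parameters, training_tokens) >= OVERSIGHT_THRESHOLD_FLOP


if __name__ == "__main__":
    # Hypothetical training runs: (parameter count, training tokens)
    runs = {
        "70B params, 15T tokens": (70e9, 15e12),   # ≈ 6.3e24 FLOP -> below threshold
        "1T params, 20T tokens": (1e12, 20e12),    # ≈ 1.2e26 FLOP -> above threshold
    }
    for name, (n, d) in runs.items():
        flop = estimated_training_flop(n, d)
        print(f"{name}: ~{flop:.1e} FLOP, oversight required: {requires_oversight(n, d)}")
```

A real supervisory authority would of course rely on reported hardware and energy usage rather than on such a back-of-the-envelope estimate, but the example illustrates why a compute threshold is a comparatively auditable trigger: it depends on a few large, hard-to-hide quantities.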
The greatest challenge is geopolitical: the USA and China -- the two leading AI nations -- have little incentive to submit to international control. But exactly the same was true of nuclear technology in the 1950s. The IAEA was established nonetheless -- not because the great powers were cooperative, but because the alternative (uncontrolled proliferation) was worse for everyone [6].
Swiss neutrality is not a disadvantage in this context but a trump card. A neutral country that does not itself develop AGI but conducts excellent research (ETH, EPFL, IDSIA) is the ideal host for an international supervisory authority.
[1] Bostrom, Nick: Superintelligence: Paths, Dangers, Strategies. Oxford University Press, 2014.
[2] Altman, Sam: Public statements on AGI timeline, 2024. Various interviews.
[3] Amodei, Dario: Machines of Loving Grace. Anthropic Blog, October 2024.
[4] IAEA: Statute of the International Atomic Energy Agency, 1957.
[5] Financial Times, Wall Street Journal: Reports on OpenAI valuation, 2024/2025.