The Case for a Global AI Safety Treaty

By Annie Gerber · May 11, 2026 · 4 min read
Diplomatic meetings on artificial intelligence have a peculiar quality. The coffee is always too strong, the rooms are always too cold, and somewhere in the back, a researcher is discreetly telling a minister that borders don't really matter in math. Over the past two years, I've watched a number of these meetings unfold, and the gap between what is said in public and what is whispered in private has only widened.

The September signing in Vilnius was meant to be a turning point. The Council of Europe's Framework Convention, the first legally binding international agreement to anchor AI development to human rights and democratic values, was signed by representatives from the EU, the US, the UK, and a handful of smaller states. The table was packed with photographers. Marija Pejčinović Burić called it strong and balanced. Yet as you left the building, you could feel how incomplete it all was. The labs building the most potent systems are not actually bound by the treaty; it binds governments to ideals, which is a different animal entirely.

Topic: Global AI Safety Treaty
First legally binding AI treaty: Framework Convention on Artificial Intelligence (Council of Europe, 2024)
Opened for signature: September 5, 2024, in Vilnius, Lithuania
Initial signatories: EU, US, UK, Andorra, Georgia, Iceland, Norway, Moldova, San Marino, Israel
Core focus: Human rights, democracy, rule of law across the AI lifecycle
Compute threshold debate: Proposals range from 10²¹ to 10²⁶ FLOP for regulatory oversight
Risk categories addressed: Misuse, malfunction, loss of human control, CBRN weapon facilitation
Oversight model proposed: Network of national AI Safety Institutes coordinating audits and pauses
Cited by: UK, US, EU, Japan, Singapore, France, Canada AI Safety Institutes
Key contention point: American AI leadership vs. binding international constraints
Status as of May 2026: Awaiting fifth ratification to enter into force

Perhaps more than any clause in the text, that distinction is what matters. The current generation of frontier AI models is being released into the world faster than any oversight regime can track, trained on compute budgets on the order of 10²⁵ floating-point operations. Researchers at the Existential Risk Observatory and organizations such as PauseAI have advocated for regulatory compute thresholds significantly lower than those currently in force, with some proposals as low as 10²¹ FLOP. The argument is simple: if a model's capabilities scale with the compute used to train it, and if those capabilities include things like cyberweapon design or biological synthesis assistance, then waiting until something goes wrong is not a strategy.
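To make the threshold debate concrete, here is a minimal sketch of the arithmetic regulators and advocates are arguing over. It uses the common rule of thumb that dense transformer training costs roughly 6 × parameters × tokens FLOP; the model size and token count below are illustrative assumptions, not figures for any real lab's training run, and the threshold values are the endpoints of the range of proposals cited above.

```python
def training_flop(params: float, tokens: float) -> float:
    """Approximate total training compute for a dense transformer,
    using the ~6 * N * D rule of thumb (N = parameters, D = tokens)."""
    return 6.0 * params * tokens

# Endpoints of the proposed oversight range discussed in the text.
THRESHOLDS = {
    "low-end proposal": 1e21,   # strictest proposals (e.g. PauseAI-style)
    "high-end proposal": 1e26,  # loosest end of the debated range
}

# Hypothetical frontier-scale run: 70B parameters, 15T training tokens.
flop = training_flop(70e9, 15e12)
print(f"Estimated training compute: {flop:.2e} FLOP")
for name, limit in THRESHOLDS.items():
    status = "over" if flop > limit else "under"
    print(f"  {status} the {name} threshold of {limit:.0e} FLOP")
```

The point of the exercise: even a mid-sized hypothetical run lands around 10²⁴–10²⁵ FLOP, orders of magnitude above the strictest proposed trigger, which is why where the line is drawn determines whether a threshold catches only a handful of frontier runs or nearly all serious training.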

Skeptics make a strong case, and they are not entirely wrong. Writing in Lawfare, Adam Thierer and Keegan McBride argued that what actually keeps the field safe is not international treaties but American leadership in AI innovation. The point has force. Treaties can ossify. Countries with no intention of complying may still sign them. And they can slow down the very labs doing the most important safety research. A badly drafted treaty might be worse than none at all, especially if it pushes development underground or into countries with laxer rules.


Yet the argument for trying anyway keeps growing, and for reasons that have nothing to do with idealism. The supply chain for AI hardware is global: any meaningful restriction would have to be verified jointly by the United States, the Netherlands, Taiwan, South Korea, and a handful of other nations whose names rarely appear in treaty drafts. Damage from a misaligned or misused system, whether a deepfake that destabilizes an election or something far worse, will not stop politely at a border crossing. And frontier labs themselves have begun admitting in their system cards that state-to-state competition is already eroding safety procedures.

As this develops, it is hard to ignore how the discourse has shifted in just two years. The 2023 London summit was dismissed by some as doom-obsessed; by 2025, the language emphasized growth, opportunity, and transformative upside. There is truth in both framings. Both also conveniently sidestep the harder question of who, exactly, is responsible when a trillion-parameter system does something unexpected. A treaty alone cannot answer that question. But without one, the answer is usually no one.

Annie Gerber

Please email Annie@abudhabi-news.com
