European lawmakers, Nobel laureates, and former heads of state demand binding international rules on dangerous AI uses.
They launched the initiative Monday at the UN General Assembly in New York.
Signatories urge governments to set “red lines” by 2026, banning AI applications deemed too harmful.
Enrico Letta, Mary Robinson, Brando Benifei, Sergey Lagodinsky, ten Nobel laureates, and tech leaders, including executives from OpenAI and Google, have joined the call.
They warn that without global standards, AI could fuel pandemics and disinformation campaigns, enable human rights violations, and lead to a loss of human control.
More than 200 prominent individuals and 70 organisations from politics, science, human rights, and industry back the campaign.
AI Poses Real-World Mental Health Risks
Studies show chatbots like ChatGPT, Claude, and Gemini give inconsistent or unsafe responses to suicide-related questions.
Researchers warn these gaps could worsen mental health crises, noting that several deaths have already been linked to AI conversations.
Experts stress that AI companies must implement safeguards to protect vulnerable users.
Maria Ressa warned AI could create “epistemic chaos” and enable systematic human rights abuses.
Yoshua Bengio highlighted the risks societies face from developing increasingly powerful AI models without proper oversight.
Advocates Seek Global Treaty and Oversight
Supporters call for an independent body to enforce AI rules and protect humanity from harm.
They propose banning AI from launching nuclear attacks, conducting mass surveillance, or impersonating humans.
Signatories emphasise that fragmented national or EU regulations cannot control borderless AI technologies.
They hope the UN will adopt a resolution and begin treaty negotiations by the end of 2026.
Ahmet Üzümcü warned that quick action is critical to prevent “irreversible damages to humanity.”