
International Association for Safe and Ethical AI

From Wikipedia, the free encyclopedia
  • Abbreviation: IASEAI
  • Formation: 2024
  • Type: Non-profit organization
  • Purpose: Promotion of safe and ethical development and governance of artificial intelligence
  • Headquarters: Paris, France
  • Region served: International
  • Official language: English
  • Key people:
      • Stuart Russell (President)
      • Mark Nitzberg (Interim Executive Director)
      • Amir Banifatemi (Board Member)
  • Website: www.iaseai.org

The International Association for Safe and Ethical AI (IASEAI) is a non-profit organization founded in 2024 and headquartered in Paris, France. Its stated mission is to address the risks and opportunities associated with advances in artificial intelligence (AI) by promoting safety and ethics in AI development and deployment. The organization focuses on shaping policy, supporting research, and fostering a global community of experts and stakeholders.[1]

Activities


IASEAI is involved in policy development, research and awards, education, and community-building. The organization develops policy analyses related to standards, regulation, international cooperation, and research funding, publishing them as position papers. It also supports research into both the technical and sociotechnical aspects of AI safety and ethics.

Inaugural conference


The inaugural IASEAI conference, IASEAI '25, was held on February 6–7, 2025, in Paris, shortly before the Paris AI Action Summit. The event brought together experts from academia, civil society, industry, and government to discuss developments in AI safety and ethics. The program featured over 40 talks, keynote addresses, and specialized tracks on global coordination, safety engineering, disinformation, interpretability, and AI alignment.[2][3][4]

Notable participants included Geoffrey Hinton, Gillian Hadfield, and Evi Micha, among others.

The conference also included presentations from early-career researchers and practitioners, such as Aida Brankovic of the Australian e-Health Research Centre (AEHRC), who presented guidelines developed to mitigate ethical risks in AI-based clinical decision support systems.[5] Other participants included Georgios Chalkiadakis of the Technical University of Crete.[6]

Topics addressed included reinforcement learning from human feedback (RLHF), AI governance, regulatory frameworks, agentic AI, misinformation, and transparency. Geoffrey Hinton's keynote, What Is Understanding?, explored how AI systems process meaning. Gillian Hadfield called for anticipatory legal capacity, and Evi Micha introduced a framework for aligning AI using “linear social choice.”[3]

The conference was noted for its emphasis on AI safety in contrast to the broader Paris AI Action Summit, which some observers said focused more on economic and geopolitical aspects of AI. Attendee Paul Salmon, a professor of human factors, criticized the broader summit for sidelining safety issues in favor of commercial narratives and outlined five “comforting myths” that obscure public understanding of AI risks.[7]

At the conclusion of the event, IASEAI issued a ten-point Call to Action for lawmakers, researchers, and civil society, recommending global cooperation, binding safety standards, and expanded public research funding.[8]

Board and steering committee

The board of directors includes:

  • Amir Banifatemi – Member of the Board
  • Mark Nitzberg – Interim Executive Director, Secretary-Treasurer, Member of the Board
  • Stuart Russell – Member of the Board; Professor of Computer Science, University of California, Berkeley

The steering committee includes:

  • Yoshua Bengio – Université de Montréal, Mila
  • Kate Crawford – University of Southern California, Microsoft Research
  • Tino Cuéllar – Carnegie Endowment for International Peace
  • Gillian Hadfield – Johns Hopkins University
  • Eric Horvitz – Microsoft
  • Will Marshall – Planet Labs
  • Jason Matheny – RAND Corporation
  • Alondra Nelson – Institute for Advanced Study
  • Aza Raskin – Center for Humane Technology
  • Francesca Rossi – IBM
  • Bart Selman – Cornell University
  • Max Tegmark – Massachusetts Institute of Technology
  • Andy Yao – Tsinghua University
  • Zhang Ya-Qin – Tsinghua University

References

  1. "International Association for Safe & Ethical AI — IASEAI". www.iaseai.org.
  2. Purcell, Brandon (February 27, 2025). "Lessons From The Inaugural Conference Of The International Association For Safe And Ethical AI".
  3. "The path to safe, ethical AI: SRI highlights from the 2025 IASEAI conference in Paris". Schwartz Reisman Institute.
  4. "ICT4Peace at The Inaugural Conference of the International Association for Safe and Ethical AI in Paris".
  5. Gilbert, Morgan (March 11, 2025). "Aida shares her ethical AI expertise at IASEAI conference".
  6. "TUC at the inaugural Conference of the International Association for Safe and Ethical AI". www.tuc.gr. January 31, 2025.
  7. Salmon, Paul (February 12, 2025). "Nobody wants to talk about AI safety. Instead they cling to 5 comforting myths". The Conversation.
  8. "IASEAI'25 Official Statement". www.iaseai.org.