Bringing together senior scientists from around the world to mitigate extreme risks from AI
Past Dialogues
IDAIS-Venice
Western and Chinese scientists: AI safety a “global public good”, global cooperation urgently needed
IDAIS-Beijing
International scientists meet in Beijing to discuss extreme AI risks, recommend red lines for AI development and international cooperation.
IDAIS-Oxford
In the Inaugural IDAIS, International Scientists Call for Global Action on AI Safety.
IDAIS-Venice, 2024
Western and Chinese scientists: AI safety a “global public good”, global cooperation urgently needed.
VENICE, ITALY - Leading global artificial intelligence (AI) scientists gathered in Venice in September 2024, where they issued a call urging governments and researchers to collaborate to address AI risks. Computer scientists including Turing Award winners Yoshua Bengio and Andrew Yao, UC Berkeley professor Stuart Russell OBE, and Zhang Ya-Qin, Chair Professor at Tsinghua University, convened for the third in a series of International Dialogues on AI Safety (IDAIS), hosted by the Safe AI Forum (SAIF) in collaboration with the Berggruen Institute.
The event took place over three days at the Casa dei Tre Oci in Venice and focused on safety efforts around so-called artificial general intelligence. The first day involved a series of discussions centered around the nature of AI risks and the variety of strategies required to counter them. Session topics included early warning thresholds, AI Safety Institutes, verification and international governance mechanisms.
These discussions became the basis of a consensus statement signed by the scientists, centered on the idea that AI safety is a “global public good” and suggesting that states carve out AI safety as a cooperative area of academic and technical activity. The statement calls for three areas of policy and research. First, it advocates for “Emergency Preparedness Agreements and Institutions”: a set of global authorities and agreements that could coordinate on AI risk. Second, it proposes “Safety Assurance Frameworks”: a more comprehensive set of safety guarantees for advanced AI systems. Finally, it calls for more AI safety funding and research into verification systems to ensure that safety claims made by developers or states are trustworthy. The full statement can be read below.
On the second day, the scientists were joined by a group of policymakers and other experts, including former President of Ireland Mary Robinson. The scientists emphasized the urgency of implementing these proposals given the rapid pace of AI development. The statement was presented directly to the policymakers, and the group strategized about how the international community might work together to accomplish these goals.
Statement
Western and Chinese scientists: AI safety a ‘global public good’, global cooperation urgently needed
International Dialogues on AI Safety: Venice Consensus (AI安全国际对话威尼斯共识)
Signatories
Yoshua Bengio
Professor at the Université de Montréal
Founder and Scientific Director of Mila - Quebec AI Institute
Chair of the International Scientific Report on the Safety of Advanced AI
Turing Award Winner
Andrew Yao
Dean of the Institute for Interdisciplinary Information Sciences and Dean of the College of Artificial Intelligence at Tsinghua University
Turing Award Winner
Geoffrey Hinton
Chief Scientific Advisor, Vector Institute
University of Toronto
Turing Award Winner
Zhang Ya-Qin 张亚勤
Director of the Tsinghua Institute for AI Industry Research (AIR)
Former President of Baidu
Stuart Russell
Professor and Smith-Zadeh Chair
in Engineering at the University of California, Berkeley
Founder of Center for Human-Compatible Artificial Intelligence (CHAI)
at the University of California, Berkeley
Gillian Hadfield
Incoming Professor at the School of Government and Policy and the School of Engineering, Johns Hopkins University
Professor of Law and Strategic Management at the University of Toronto
Mary Robinson
Former President of Ireland, Chair of the Elders
Mariano-Florentino (Tino) Cuéllar
Former California Supreme Court Justice and Member, National Academy of Sciences Committee on the Ethics and Governance of Computing Research and Its Applications
Fu Ying 傅莹
Zeng Yi 曾毅
Director of the International Research Center for AI Ethics and Governance
Deputy Director of the Research Center for Brain-inspired Intelligence
Institute of Automation, Chinese Academy of Sciences (CAS)
Member of United Nations High-level Advisory Body on AI
Member of UNESCO High-level Expert Group on Implementation of AI Ethics
He Tianxing 贺天行
Incoming Assistant Professor at Tsinghua University
Lu Chaochao 陆超超
Kwok Yan Lam
Associate Vice President (Strategy and Partnerships) at Nanyang Technological University (NTU), Singapore
Executive Director of the Digital Trust Centre (DTC), designated as Singapore’s AI Safety Institute
Professor, School of Computer Science and Engineering, NTU Singapore
Tang Jie 唐杰
Chief Scientist of Zhipu
Professor of Computer Science at Tsinghua University
Dawn Nakagawa
President of the Berggruen Institute
Benjamin Prud'homme
Vice-President of Policy, Safety and Global Affairs at Mila - Québec AI Institute
Robert Trager
Co-Director of the Oxford Martin AI Governance Initiative
International Governance Lead at the Centre for the Governance of AI
Yang Yaodong 杨耀东
Assistant Professor at the Institute for AI, Peking University
Director of the Center for Large Model Safety, Beijing Academy of AI
Head of the PKU Alignment and Interaction Research Lab (PAIR)
Yang Chao 杨超
Wang Zhongyuan
Director, Beijing Academy of Artificial Intelligence (BAAI)
Zhang HongJiang 张宏江
Founding Chairman of the Beijing Academy of Artificial Intelligence (BAAI)
Sam Bowman
Member of Technical Staff and Co-Director for Alignment Science, Anthropic
Associate Professor of Data Science, Linguistics, and Computer Science, New York University
Dan Baer
Sebastian Hallensleben
Chair of CEN-CENELEC JTC 21, where European AI standards to underpin EU regulation are being developed
Head of Digitalisation and Artificial Intelligence at the VDE Association for Electrical, Electronic and Information Technologies
Member of the Expert Advisory Board of the EU
Ong Chen Hui
Assistant Chief Executive of the Business and Technology Group at the Infocomm Media Development Authority (IMDA), Singapore
Fynn Heide
Executive Director, Safe AI Forum
Conor McGurk
Managing Director, Safe AI Forum
Saad Siddiqui
Safe AI Forum
Isabella Duan
Safe AI Forum
Adam Gleave
Founder and CEO, FAR AI
Xin Chen
PhD Student, ETH Zurich
IDAIS-Beijing, 2024
International scientists meet in Beijing to discuss extreme AI risks, recommend red lines for AI development and international cooperation.
Leading global AI scientists convened in Beijing for the second International Dialogue on AI Safety (IDAIS-Beijing), hosted by the Safe AI Forum in collaboration with the Beijing Academy of AI (BAAI). During the event, computer scientists including Turing Award winners Yoshua Bengio, Andrew Yao, and Geoffrey Hinton, along with founding and current BAAI chairmen HongJiang Zhang and Huang Tiejun, worked with governance experts such as Tsinghua professor Xue Lan and University of Toronto professor Gillian Hadfield to chart a path forward on international AI safety.
The event took place over two days at the Aman Summer Palace in Beijing and focused on safely navigating the development of artificial general intelligence (AGI) systems. The first day involved technical and governance discussions of AI risk, in which scientists shared AI safety research agendas as well as potential regulatory regimes. The discussion culminated in a consensus statement recommending a set of red lines for AI development to prevent catastrophic and existential risks from AI. In the statement, the scientists advocate for prohibiting the development of AI systems that can autonomously replicate, improve themselves, seek power, or deceive their creators, or that enable the building of weapons of mass destruction or the conduct of cyberattacks. Additionally, the statement laid out a series of measures to be taken to ensure those lines are never crossed. The full statement can be read below.
On the second day, the scientists met with senior Chinese officials and CEOs. The scientists presented the red lines proposal and discussed existential risks from artificial intelligence, and officials expressed enthusiasm about the consensus statement. Discussions focused on the necessity of international cooperation on this issue.
Statement
In the depths of the Cold War, international scientific and governmental coordination helped avert thermonuclear catastrophe. Humanity again needs to coordinate to avert a catastrophe that could arise from unprecedented technology.
Signatories
Geoffrey Hinton
A.M. Turing Award recipient
Chief Scientific Advisor, Vector Institute
University of Toronto
Andrew Yao
A.M. Turing Award recipient
Dean of Institute for Interdisciplinary Information Sciences
Tsinghua University
Distinguished Professor-At-Large
The Chinese University of Hong Kong
Professor of Center for Advanced Study
Tsinghua University
Yoshua Bengio
A.M. Turing Award recipient
Scientific Director and Founder
Montreal Institute for Learning Algorithms
Professor, Department of CS and Operations Research
Université de Montréal
Ya-Qin Zhang
Chair Professor of AI Science
Tsinghua University
Dean of the Institute for AI Industry Research (AIR)
Tsinghua University
Former President of Baidu
Fu Ying
Beijing, China
Stuart Russell
Professor of EECS
UC Berkeley
Founder and Head
Center for Human-Compatible Artificial Intelligence
Director
Kavli Center for Ethics, Science, and the Public
Xue Lan
Dean
Schwarzman College at Tsinghua University
Director
Institute for AI International Governance
Gillian Hadfield
Schwartz Reisman Chair in Technology and Society
University of Toronto
AI2050 Senior Fellow
HongJiang Zhang
Founding Chairman
Beijing Academy of AI
Tiejun Huang
Chairman
Beijing Academy of AI
Zeng Yi
Professor, Director
Brain-inspired Cognitive Intelligence Lab, Chinese Academy of Sciences
Founding Director
Center for Long-term AI
Robert Trager
Director
Oxford Martin AI Governance Initiative
Senior Research Fellow
Blavatnik School of Government
International Governance Lead
Centre for the Governance of AI
Kwok-Yan Lam
Professor
School of Computer Science and Engineering, Nanyang Technological University, Singapore
Executive Director
Digital Trust Centre, Singapore
Dawn Song
Professor of EECS
UC Berkeley
Founder
Oasis Labs
Zhongyuan Wang
Director
Beijing Academy of AI
Dylan Hadfield-Menell
Bonnie and Marty (1964) Tenenbaum Career Development Assistant Professor of EECS, MIT
Lead, Algorithmic Alignment Group, Computer Science and Artificial Intelligence Laboratory (CSAIL), MIT
AI2050 Early Career Fellow
Yaodong Yang
Assistant Professor
Institute for AI, Peking University
Head
PKU Alignment and Interaction Research Lab (PAIR)
Zhang Peng
CEO
Zhipu AI
Li Hang
Beijing, China
Tian Tian
CEO
RealAI
Edward Suning Tian
Founder and Chairman
China Broadband Capital Partners LP (CBC)
Chairman
AsiaInfo Group
Toby Ord
Senior Research Fellow
University of Oxford
Fynn Heide
Research Scholar
Centre for the Governance of AI
Adam Gleave
Founder and CEO
FAR AI
IDAIS-Oxford, 2023
In the Inaugural IDAIS, International Scientists Call for Global Action on AI Safety.
Ahead of the highly anticipated AI Safety Summit, leading AI scientists from the US, the PRC, the UK and other countries agreed on the importance of global cooperation and jointly called for research and policies to prevent unacceptable risks from advanced AI.
Prominent scientists from the USA, the PRC, the UK, Europe, and Canada gathered for the first International Dialogue on AI Safety. The meeting was convened by Turing Award winners Yoshua Bengio and Andrew Yao, UC Berkeley professor Stuart Russell OBE, and founding Dean of the Tsinghua Institute for AI Industry Research Ya-Qin Zhang. The event took place at Ditchley Park near Oxford. Attendees worked to build a shared understanding of risks from advanced AI systems, inform intergovernmental processes, and lay the foundations for further cooperation to prevent worst-case outcomes from AI development.
Statement
Coordinated global action on AI safety research and governance is critical to prevent uncontrolled frontier AI development from posing unacceptable risks to humanity.
Signatories
Andrew Yao
Dean of Institute for Interdisciplinary Information Sciences
Tsinghua University
Distinguished Professor-At-Large
The Chinese University of Hong Kong
Professor of Center for Advanced Study
Tsinghua University
Turing Award Recipient
Yoshua Bengio
Scientific Director and Founder
Montreal Institute for Learning Algorithms
Professor, Department of CS and Operations Research
Université de Montréal
Turing Award Recipient
Stuart Russell
Professor of EECS
UC Berkeley
Founder and Head
Center for Human-Compatible Artificial Intelligence
Director
Kavli Center for Ethics, Science, and the Public
Ya-Qin Zhang
Chair Professor of AI Science
Tsinghua University
Dean of the Institute for AI Industry Research (AIR)
Tsinghua University
Former President of Baidu
Ed Felten
Robert E. Kahn Professor of Computer Science and Public Affairs
Princeton University
Founding Director, Center for Information Technology Policy
Princeton University
Roger Grosse
Associate Professor of Computer Science
University of Toronto
Founding Member
Vector Institute
Gillian Hadfield
Schwartz Reisman Chair in Technology and Society
University of Toronto Faculty of Law
Director
Schwartz Reisman Institute for Technology and Society
AI2050 Senior Fellow
Sana Khareghani
Professor of Practice in AI
King’s College London
AI Policy Lead
Responsible AI UK
Former Head of UK Government Office for Artificial Intelligence
Dylan Hadfield-Menell
Bonnie and Marty (1964) Tenenbaum Career Development Assistant Professor of EECS, MIT
Lead, Algorithmic Alignment Group, Computer Science and Artificial Intelligence Laboratory (CSAIL), MIT
Karine Perset
Dawn Song
Professor of EECS
UC Berkeley
Founder
Oasis Labs
Xin Chen
PhD student
ETH Zurich
Max Tegmark
Professor
MIT Center for Brains, Minds & Machines
President and Co-founder
Future of Life Institute
Elizabeth Seger
Research Scholar
Centre for the Governance of AI
Yi Zeng
Professor and Director of Brain-inspired Cognitive Intelligence Lab
Institute of Automation, Chinese Academy of Sciences
Founding Director
Center for Long-term AI
HongJiang Zhang
Chairman
Beijing Academy of AI
Yang-Hui He
Fellow
London Institute
Adam Gleave
Founder and CEO
FAR AI
Fynn Heide
Research Scholar
Centre for the Governance of AI