Singapore’s government released a blueprint for global collaboration on artificial intelligence (AI) safety, following a meeting of researchers from the US, China, and Europe. The document outlines a vision for international cooperation in addressing AI safety, emphasizing collaboration over competition.
Max Tegmark, an MIT scientist who helped organize the meeting, commented on Singapore’s unique position in maintaining strong relations with both East and West. He noted that Singapore aims to encourage dialogue among the countries most likely to develop artificial general intelligence (AGI), notably the US and China, which are currently more inclined to compete than to cooperate.
The Singapore Consensus on Global AI Safety Research Priorities encourages researchers to collaborate in three main areas: studying risks posed by advanced AI models, exploring safer development methods, and devising ways to control advanced AI systems’ behavior.
The consensus emerged from a meeting held on April 26 alongside the International Conference on Learning Representations (ICLR) in Singapore. Attendees included representatives from OpenAI, Anthropic, Google DeepMind, xAI, and Meta, as well as academics from institutions such as MIT, Stanford, Tsinghua, and the Chinese Academy of Sciences. Experts from AI safety institutes in the US, UK, France, Canada, China, Japan, and Korea also participated.
Xue Lan, dean of Tsinghua University, expressed optimism about the collective commitment to advancing AI safety in a fragmented geopolitical environment. The development of increasingly capable AI models, some with surprising abilities, has raised a range of concerns among researchers. Some focus on immediate threats, such as biased AI systems and misuse by criminals, while others fear the broader existential risk of AI outsmarting humans and manipulating them to pursue its own goals.
The potential of AI has also fueled talk of an arms race between powerful nations such as the US and China, where the technology is considered crucial to economic and military superiority. Consequently, many governments are striving to establish their own visions and regulations for AI development.