SEOUL, South Korea (AP) — Leading artificial intelligence companies made a fresh pledge at a mini-summit Tuesday to develop AI safely, while world leaders agreed to build a network of publicly backed safety institutes to advance research and testing of the technology.
Google, Meta and OpenAI were among the companies that made voluntary safety commitments at the AI Seoul Summit, including pulling the plug on their cutting-edge systems if they can’t rein in the most extreme risks.
The two-day meeting is a follow-up to November’s AI Safety Summit at Bletchley Park in the United Kingdom, and comes amid a flurry of efforts by governments and global bodies to design guardrails for the technology as fears grow about the risks it poses both to everyday life and to humanity.
Leaders from 10 countries and the European Union will “forge a common understanding of AI safety and align their work on AI research,” the British government, which co-hosted the event, said in a statement. The network of safety institutes will include those already set up by the U.K., U.S., Japan and Singapore since the Bletchley meeting, it said.
“AI presents immense opportunities to transform our economy and solve our greatest challenges,” Britain’s Technology Secretary Michelle Donelan said in the statement. “But I have always been clear that this full potential can only be unlocked if we are able to grip the risks posed by this rapidly evolving, complex technology.”
The 16 AI companies that signed up for the safety commitments also include Amazon, Microsoft, Samsung, IBM, xAI, France’s Mistral AI, China’s Zhipu.ai, and G42 of the United Arab Emirates. They vowed to ensure the safety of their most advanced AI models with promises of accountable governance and public transparency.
It’s not the first time that AI companies have made lofty-sounding voluntary safety commitments. Amazon, Google, Meta and Microsoft were among a group that signed up last year to voluntary safeguards brokered by the White House, pledging to ensure their products are safe before release.
The Seoul meeting comes as some of those companies roll out the latest versions of their AI models.
The safety pledge includes publishing frameworks setting out how the companies will measure the risks of their models. In extreme cases where risks are severe and “intolerable,” AI companies will have to hit the kill switch and stop developing or deploying their models and systems if they can’t mitigate the risks.
Since the U.K. meeting last year, the AI industry has “increasingly focused on the most pressing concerns, including mis- and disinformation, data security, bias and keeping humans in the loop,” said Aidan Gomez, CEO of Cohere, one of the AI companies that signed the pact. “It is essential that we continue to consider all possible risks, while prioritizing our efforts on those most likely to create problems if not properly addressed.”