The United States and United Kingdom have agreed to work together to monitor advanced AI models for safety risks. The two countries will collaborate on safety research and conduct at least one joint safety test.

Both countries say safety is a top concern in the use of AI models. US President Joe Biden’s executive order on AI requires companies developing AI systems to report safety test results, while UK Prime Minister Rishi Sunak announced the creation of the UK AI Safety Institute and said companies like Google, Meta, and OpenAI must allow the vetting of their tools.

The agreement between the two countries’ AI Safety Institutes takes effect immediately.

US Commerce Secretary Gina Raimondo said the government is “committed to developing similar partnerships with other countries to promote AI safety across the globe.” 

“This partnership is going to accelerate both of our institutes’ work across the full spectrum of risks, whether to our national security or to our broader society,” Raimondo said in a statement. 

Through the agreement, the two countries promise to collaborate on technical research, explore personnel exchanges, and share information. 

One potential partner for both the US and the UK is the European Union, which has passed its own sweeping regulations for the use of AI systems. The EU’s AI law, which will not come into effect for several years, requires that companies running powerful AI models follow safety standards.

The UK’s AI Safety Institute was established right before a global AI summit in November, where several world leaders, including US Vice President Kamala Harris, discussed how to harness and possibly regulate the technology across borders. 

The UK has begun safety testing some models, although it is unclear whether it has access to the most recently released versions. Several AI companies have pressed the UK’s AI Safety Institute for more clarity on timelines and on what happens next if a model is found to be risky.
