EU and US set to announce joint working on AI safety, standards & R&D
The European Union and the US are expected to announce an agreement to cooperate on AI on Friday, at a meeting of the EU-US Trade and Technology Council (TTC), according to a senior Commission official who was briefing journalists on background ahead of the confab.
The mood music points to growing cooperation between lawmakers on both sides of the Atlantic when it comes to devising strategies to respond to the challenges and opportunities posed by powerful AI technologies, in spite of what remains a very skewed commercial picture in which US giants like OpenAI continue to dominate cutting-edge AI development.
The TTC was set up a few years ago, post-Trump, to provide a forum where EU and US lawmakers could meet to discuss transatlantic cooperation on trade and tech policy issues. Friday’s meeting, the sixth since the forum started operating in 2021, will be the last before elections in both regions. The prospect of a second Trump presidency derailing future EU-US cooperation may well be concentrating lawmakers’ minds on maximizing opportunities for joint working now.
“There will be certainly an announcement at the TTC around the AI Office and the [US] AI Safety Institute,” the senior Commission official said, referencing an EU oversight body that’s in the process of being set up as part of the incoming EU AI Act, a comprehensive risk-based framework for regulating AI apps that will start to apply across the bloc later this year.
This element of the incoming accord — seemingly set to be focused on AI safety or oversight — is being envisaged as a “collaboration or dialogue” between the respective EU and US AI oversight bodies aimed at bolstering implementation of regulatory powers on AI, per the official.
A second area of focus for the expected EU-US AI agreement will be standardization, they said. This will take the form of joint working aimed at developing standards to underpin AI development, starting by establishing an “AI roadmap”.
The EU-US partnership will also have a third element, which is being badged “AI for public good”. This concerns joint work on fostering research activities but with a focus on implementing AI technologies in developing countries and the global south, per the Commission.
The official suggested there’s a shared perspective that AI technologies will be able to bring “very quantifiable” benefits to developing regions — in areas like healthcare, agriculture and energy. So this is also set to be an area of focus for transatlantic collaboration on fostering uptake of AI in the near term.
‘AI’ stands for aligned interests?
AI is no longer being seen as a trade issue by the US, as the EU tells it. “Through the TTC we have been able to explain our policies, and also to show to the Americans that, in fact, we have the same goals,” the Commission official suggested. “Through the AI Act and through the [AI safety and security focused] Executive Order — which is to mitigate the risks of AI technologies while supporting their uptake in our economies.”
Earlier this week the US and the UK signed a partnership agreement on AI safety. The EU-US collaboration appears to be more wide-ranging, though, as it’s slated to cover not just shared safety and standardization goals but also to align efforts on fostering uptake of AI across a swathe of third countries via joint support for “public good” research.
The Commission official teased additional areas of collaboration on emerging technologies — including standardization work in the area of electronic identity (where the EU has been developing an e-ID proposal for several years) that they suggested will also be announced Friday. “Electronic identity is a very strong area of cooperation with a lot of potential,” they said, claiming the US is interested in “vast new business opportunities” the EU’s electronic identity wallet will open up.
The official also suggested there is growing accord between the EU and US on how to handle platform power — another area where the EU has targeted lawmaking in recent years. “We see a lot of commonalities [between EU laws like the DMA, aka Digital Markets Act] with the recent antitrust cases that are being launched also in the United States,” said the official, adding: “I think in many of these areas there is no doubt that there is a win-win opportunity.”
The US-UK AI memorandum of understanding, meanwhile, signed Monday in Washington by US commerce secretary Gina Raimondo and the UK’s secretary of state for technology, Michelle Donelan, states the pair will aim to accelerate joint working on a range of AI safety issues, including national security as well as broader societal AI safety concerns.
The US-UK agreement makes provision for at least one joint testing exercise on a publicly accessible AI model, the UK’s Department for Science, Innovation and Technology (DSIT) said in a press release. It also suggested there could be personnel exchanges between the two countries’ respective AI safety institutes to collaborate on expertise-sharing.
Wider information-sharing is envisaged under the US-UK agreement — about “capabilities and risks” associated with AI models and systems, and on “fundamental technical research on AI safety and security”. “This will work to underpin a common approach to AI safety testing, allowing researchers on both sides of the Atlantic — and around the world — to coalesce around a common scientific foundation,” DSIT’s PR continued.
Last summer, ahead of hosting a global AI summit, the UK government said it had obtained a commitment from US AI giants Anthropic, DeepMind and OpenAI to provide “early or priority access” to their AI models to support research into evaluation and safety. It also announced a plan to spend £100M on an AI safety taskforce which it said would be focused on so-called foundational or frontier AI models.
At the UK AI Summit last November, meanwhile — on the heels of the US Executive Order on AI — Raimondo announced the creation of a US AI safety institute to be housed within her department, under the National Institute of Standards and Technology, which she said would aim to work closely with other AI safety groups set up by other governments.
Neither the US nor the UK has proposed comprehensive legislation on AI safety as yet, with the EU remaining ahead of the pack when it comes to legislating on AI. But more cross-border joint working looks like a given.