Big tech’s responses to various voluntary commitments on AI risk reporting are resulting in more complexity for AI developers and patchy results, said speakers at the Paris AI summit, including Cohere, one of the respondents. Legislation could be needed to get a clear understanding of the risks and reduce complexity, say experts, but the G7 Hiroshima process is at least standardizing the questions in the meantime.

The EU’s AI Office is currently seeking input from big tech on its voluntary commitments under the AI Act, and the Group of Seven has also just launched a broad set of reporting tasks covering advanced AI models’ impacts on copyright, data and privacy. But responses so far to previous international commitments are proving patchy.
In May last year at the AI Seoul Summit, AI developers including Google, Amazon and OpenAI signed up to a set of Frontier AI Safety Commitments, pledging to define the severe risks they anticipate and the circumstances in which they would choose not to deploy a model (see here). The expectation was that these companies would share their risk assessments ahead of the Paris summit.
They have now started to publish these assessments, but the disclosures are not proving particularly useful, Lisa Soder, senior policy researcher at digital public policy organization Interface, told the Paris summit today.
“There's a lot of this language in there, around ‘we may decide,’ or ‘we aim to decide, to share information with governments where we think might be appropriate if there's risk to public safety,’” said Soder, adding that this is not enough from a citizen or government perspective.
Soder said organizations need to go deeper than high-level questions. “We need to build our governance capacity and ultimately, I do think it's also an important step to think about, how can we put these commitments also on the legislative footing.”
Cohere, a developer of AI for enterprise use, was one of the Seoul signatories and has submitted its reporting. Sara Hooker, its vice president of research, told the Paris audience that Cohere’s response was centered on “privacy and security, and how do we think about data and multilingual impact? So there's others that are much more focused on, maybe more sci-fi, long-term risks.”
Hooker questioned how actionable voluntary commitments will be in the long term. “I wish luck for the policy makers who have to reconcile [developers’ responses]… what is interesting about the OECD Hiroshima commitments is standardizing some of the questions to ask.”
Last week, just ahead of the Paris summit, the OECD launched an online platform where developers can upload their responses on AI risk and transparency reporting for the G7’s Hiroshima AI Process’s code of conduct (see here). OpenAI, Microsoft, Google and Anthropic agreed ahead of launch to upload their reports by April 15.
Again, participation is voluntary, but respondents are guided through a series of standardized questions. The uploads will be not only public but also directly comparable.
Peter Sarlin, co-founder of Silo AI, now AMD Silo AI after its acquisition by the chipmaker, said the voluntary layer is adding more complexity for developers.
He said he sees voluntary commitments as a “way to maybe bridge the gap between practical implementation and legislation and help us move forward, but at the same time, I think it’s adding yet another complexity that practitioners eventually need to spend time with.”
These views paint a mixed picture for the EU’s own voluntary code of practice under its AI Act, overseen by the European AI Office.
The head of the office, Lucilla Sioli, told an event* yesterday that more than a thousand organizations, including civil society groups, are "participating" in developing the act's codes of practice, in a "co-regulatory and participatory exercise." The exercise, which seeks ways to implement frontier AI safely, runs until May.
"After that, we will see how many companies will be willing to sign up," Sioli said, adding that she is expecting to see "constructive participation" from big companies in the exercise.
*‘Governing in the Age of AI,’ Tony Blair Institute for Global Change, Paris, Feb. 9, 2025