
AI giants are shying away from safety pledges and pushing voluntary model testing

By Frank Hersey (September 25, 2025, 13:17 GMT) -- AI developers aren't bound by any rulebooks to share their models for evaluation, but last year they did make very public voluntary commitments on model safety testing and transparency. Civil society worries that Google DeepMind is falling short of those commitments through a novel way of releasing new models, while OpenAI and Anthropic are focused on building cosier relationships with testing bodies. The big picture is one of attempts to keep strict US and UK rules at bay.

There may be no regulatory requirement to do so, but major AI developers OpenAI, Anthropic and Google DeepMind continue to share their models with national-level safety testing bodies in the US and UK....
