Don’t expect mature AI standards until tech is 'more disruptive,' UK institute says
By Frank Hersey (March 18, 2025, 18:04 GMT | Insight) -- Mature standards for the AI applications currently deployed should not be expected while developers are looking ahead to far more disruptive technologies, the director of the UK AI Security Institute said today. And as no regulation requires model developers to share their pre-deployment models for testing by bodies such as the UK AISI, its director said they do so because they want insight into any dangers in their software.

The pace of development in artificial intelligence will prove problematic for devising standards, as developers are looking ahead to potentially riskier future versions of what is already available, the director of the UK AI Security Institute told a standards summit today. Cooperation so far by developers in submitting their models for testing rests on their motivation to have problems highlighted, said Oliver Ilott, as the body still has no regulatory powers to compel such submissions....