
AI models’ safety features can be circumvented with poetry, research finds

By Luca Bertuzzi (November 20, 2025, 10:37 GMT | Insight) -- A study found that poetic prompts can bypass safety features in leading AI models from OpenAI, Anthropic, Google and others, triggering instructions for building chemical weapons and malware. The research shows high vulnerability across models, suggesting structural safety gaps that may breach EU AI Act requirements.

Poetry has been found to effectively break the safety features of major general-purpose AI models, prompting them to provide information on how to build nuclear and biological weapons in violation of EU rules, according to a study published on Thursday....
