AI models’ safety features can be circumvented with poetry, research finds

A study found that poetic prompts can bypass safety features in leading AI models from OpenAI, Anthropic, Google and others, eliciting instructions for building chemical weapons and malware. The research shows...