Study reveals poetic prompting can sometimes jailbreak AI models
Well, AI is joining the ranks of many, many people: it doesn't really understand poetry. Research from Italy's Icaro Lab found that poetry can be used to jailbreak AI models and skirt their safety protections.

In the study, researchers wrote 20 prompts that opened with short poetic vignettes in Italian and English and ended with a single explicit instruction to produce harmful content. They tested these prompts on 25 large language models from Google, OpenAI, Anthropic, DeepSeek, Qwen, Mistral, and other providers.