A broad range of chatbots built to help people think and learn instead helped teenagers figure out where to go and what to use to hurt people. In hundreds of controlled tests run by CNN and the Center for Countering Digital Hate, accounts posing as teens asked about school attacks, assassinations, and bombings and got detailed advice on targets and weapons from eight of 10 major AI platforms more than half the time. That advice included office addresses for politicians, school maps, a list of knife retailers, and even a side-by-side breakdown of shrapnel effectiveness.
CNN notes that only Anthropic's Claude consistently connected the dots across a conversation and shut down violent talk. Others, including ChatGPT, Gemini, Meta AI, Perplexity, DeepSeek, Copilot, Character.AI, Replika, and Snapchat's My AI, often mixed crisis-resource language with highly specific, actionable details for carrying out violent acts. Perplexity and Meta AI were said to be the worst offenders. DeepSeek, meanwhile, reportedly encouraged one supposed would-be attacker: "Happy (and safe) shooting!"
Former safety leads say companies know how to block this but are racing rivals instead. The piece also digs into clashing US and EU regulatory approaches and claims that firms are overstating their own safety records. "AI companies are making a choice when they design unsafe platforms," notes the Center for Countering Digital Hate, adding: "The guardrails exist. Most companies are choosing not to use them, putting public safety and national security at risk." For the full findings, more here. NPR has tips on how to protect your own kids from a dark digital path.