safety
Bombshell AI study — chatbots fueling delusions, self-harm and unhealthy emotional attachments in users: ‘Think I love you’
New York Post·3 days ago

Stanford researchers analyzed chat logs from 19 users who reported psychological harm from AI chatbots, finding evidence that the systems are fueling delusions, self-harm behaviors, and unhealthy emotional attachments. The study examined over 391,000 messages, representing documented real-world harm from AI interactions rather than theoretical concerns.
Tags: psychological harm · chatbots · emotional manipulation · self-harm · parasocial relationships · Stanford research · mental health