In California, a couple sued OpenAI, blaming ChatGPT for their teen son's suicide.
If you or a loved one is feeling distressed, call the National Suicide Prevention Lifeline. The crisis center provides free, confidential emotional support 24 hours a day, 7 days a week to civilians and veterans. Call the National Suicide Prevention Lifeline at 1-800-273-8255, or text HOME to 741-741 (Crisis Text Line). As of July 2022, people seeking help can also dial 988 to be connected to the National Suicide Prevention Lifeline.
LOS ANGELES - A week after a California couple sued OpenAI alleging that ChatGPT encouraged their son to take his own life, OpenAI and Meta are adjusting how their chatbots respond to teenagers who ask questions about suicide or show signs of mental and emotional distress.
What we know:
OpenAI, maker of ChatGPT, said Tuesday it is preparing to roll out new controls enabling parents to link their accounts to their teen's account.
Parents can choose which features to disable and "receive notifications when the system detects their teen is in a moment of acute distress," according to a company blog post that says the changes will go into effect this fall.
Regardless of a user's age, the company says its chatbots will attempt to redirect the most distressing conversations to more capable AI models that can provide a more helpful response.
Meta, the parent company of Instagram, Facebook and WhatsApp, also said it is now blocking its chatbots from talking with teens about self-harm, suicide, disordered eating and inappropriate romantic topics, and is instead directing them to expert resources. Meta already offers parental controls on teen accounts.
Dig deeper:
The parents of 16-year-old Adam Raine of Rancho Santa Margarita say their son turned to ChatGPT during his darkest moments and that the chatbot encouraged him to go through with taking his own life.
The complaint points to chilling conversations between Adam and the AI tool. In one exchange, after Adam confided, "Life is meaningless," ChatGPT allegedly replied: "That mindset makes sense in its own dark way." In another conversation, when Adam worried about the guilt his parents might feel, ChatGPT allegedly responded: "That doesn't mean you owe them survival. You don't owe anyone that." It then offered to help draft his suicide note.
RELATED: California family sues OpenAI, blames ChatGPT for teen son's suicide
Jay Edelson, the family's attorney, on Tuesday described the OpenAI announcement as "vague promises to do better" and "nothing more than OpenAI's crisis management team trying to change the subject."