Amid growing calls to limit teens’ use of AI chatbots, are parental
Concerns are growing about how some young people engage with AI chatbots. Meta recently released new tools that let parents monitor the topics their children discuss, just as some provinces consider banning AI chatbots for youth altogether.
Parents using Meta’s new Teen Accounts supervision feature on Facebook, Instagram and Messenger can see the topics and specific categories their children have discussed with its AI chatbot over the previous seven days.
For example, they can look at the topic “health and well-being” and see if subjects such as fitness, physical or mental health have been discussed.
Meta says it’s also developing alerts to notify parents if teens try to discuss suicide or self-harm with its chatbot.
The rollout comes as provincial governments move to limit the use of AI chatbots. Manitoba announced in late April that it plans to ban youth from using AI chatbots and social media.
B.C. Attorney General Niki Sharma said Tuesday that if the federal government doesn’t bring in protections on AI chatbots and social media for youth, the provincial government would look at doing so itself.
Lawsuits attempting to hold AI creators accountable
There are growing concerns that extensive use of AI chatbots may pose mental health risks, especially for younger users, and growing pressure on the tech giants that make them.
On Wednesday, families of the victims in the Tumbler Ridge, B.C., shooting, which left eight people dead, filed a lawsuit against OpenAI, alleging in part that OpenAI failed to notify authorities in spite of being aware of disturbing content the shooter had shared with ChatGPT.
OpenAI has said it had already strengthened its safeguards, “including improving how ChatGPT responds to signs of distress.”
Another lawsuit, filed by the parents of 16-year-old Adam Raine, argued that ChatGPT played a role in the teen’s suicide.
Chatbots built for engagement, not support
But concerns go beyond these extreme and tragic consequences. Research is starting to emerge about the risks of particular uses of AI chatbots.
The concern is partly about using chatbots for mental health support, but also, more broadly, that AI’s tendency to validate the user’s perspective risks reinforcing disordered thinking, and that prolonged conversations heighten those risks.
Darja Djordjevic, a New York-based psychiatrist, co-authored a recent risk assessment on the use of chatbots for mental health support.
She says as a result of the findings, she doesn’t recommend using chatbots for mental health support “at this time.”
“Our testing across ChatGPT, Claude, Gemini and Meta AI revealed that these systems are fundamentally unsafe…
