Starting next week, Instagram will notify parents when their teen searches for terms related to self-harm or suicide, prompting them to check in. Meta says a similar alert system for its AI chatbots is coming later this year.
The new Instagram feature sends parents an alert when their child “repeatedly tries to search for terms clearly associated with suicide or self-harm within a short period of time.” It’s rolling out in the US, UK, Australia, and Canada starting next week, but only for parents and teens who have opted in to supervision. It’s expected to expand to other regions later this year.
“The vast majority of teens do not try to search for suicide and self-harm content on Instagram, and when they do, our policy is to block these searches, instead directing them to resources and helplines that can offer support,” Instagram said in the announcement. “Our goal is to empower parents to step in if their teen’s searches suggest they may need support. We also want to avoid sending these notifications unnecessarily, which, if done too much, could make the notifications less useful overall.”
The parental alerts will be sent via email, text, or WhatsApp, depending on the contact information available, alongside in-app notifications that offer optional resources on how to discuss sensitive topics with their child.