Instagram to alert parents over teens’ repeated searches for self-harm content
Instagram will begin alerting parents if their teenagers repeatedly search for terms clearly associated with suicide or self-harm, as its parent company, Meta, faces two US trials over alleged harms to children.
The company announced on Thursday that the alerts will be sent to parents enrolled in Instagram’s parental supervision program. Notifications will be delivered by email, text, or WhatsApp, as well as through the parents’ Instagram account. Teens must be aged 13 to 17, and both parent and child must agree to supervision through an invitation on the platform. Only one parent can supervise an account.
Meta said it already blocks such content from appearing in teen search results and directs users to helplines instead. The new feature aims to notify parents if a teen’s searches suggest they may need support, while avoiding excessive alerts that could reduce their usefulness.
The move comes as Meta defends itself in a trial in Los Angeles examining whether its platforms deliberately addict and harm minors, and another in New Mexico over claims it failed to protect children from sexual exploitation.
Thousands of families, school districts, and government entities have sued Meta and other social media companies, alleging they design addictive platforms and fail to shield children from harmful content.
Meta chief executive Mark Zuckerberg has disputed claims that social media causes addiction, telling the Los Angeles court that scientific research has not proven it harms mental health. Instagram head Adam Mosseri similarly denied that users could be clinically addicted.
Meta said it is also developing similar alerts tied to teens’ interactions with artificial intelligence related to suicide or self-harm. Advocacy group Fairplay criticised the move, saying the burden should not fall solely on parents.