Social media still pushing suicide-related content to teens despite new UK safety laws


Social media platforms are still pushing depression, suicide and self-harm-related content to teenagers, despite new online safety laws intended to protect children.

The Molly Rose Foundation opened dummy accounts posing as a 15-year-old girl, and then engaged with suicide, self-harm and depression posts. This prompted algorithms to bombard the account with “a tsunami of harmful content on Instagram Reels and TikTok’s For You page”, the charity’s analysis found.

Almost all of the recommended videos watched on Instagram Reels (97%) and TikTok (96%) were found to be harmful, while over half (55%) of recommended harmful posts on TikTok’s For You page contained references to suicide and self-harm ideation and 16% referenced suicide methods, including some which the researchers had never encountered before.

These posts also reached huge audiences: one in 10 harmful videos on TikTok’s For You page had been liked at least 1m times, and on Instagram Reels one in five harmful recommended videos had been liked more than 250,000 times.

Andy Burrows, chief executive of the Molly Rose Foundation, said: “Harmful algorithms continue to bombard teenagers with shocking levels of harmful content, and on the most popular platforms for young people this can happen at an industrial scale.

“It is shocking that in the two years since we last conducted this research the scale of harm has still not been properly addressed, and on TikTok the risks have actively got worse.

“The measures set out by Ofcom to tackle algorithmic harm are at best a sticking plaster and will not be enough to address preventable harm. It is crucial that the government and regulator act decisively to bring in much stronger measures that platforms cannot game or ignore.”

The researchers, who analysed content on the platforms between November 2024 and March 2025, found that although both platforms had enabled teenagers to offer negative feedback on content being recommended to them, as required by Ofcom under the Online Safety Act, this function also allowed them to give positive feedback on the same content, resulting in them being shown more of it.

The Foundation’s report, produced in partnership with Bright Data, established that while the platforms had taken steps to make it harder to search for dangerous content using hashtags, personalised AI recommender systems amplified harmful content once it had been watched. The report further noted that platforms tend to use overly narrow definitions of harm.

The research cited growing evidence of the relationship between exposure to harmful online content and resulting suicide and self-harm risks.

It also found that social media platforms profit from advertising adjacent to some harmful posts, including for fashion and fast food brands popular with teenagers, and UK universities.

Ofcom has begun to implement the Online Safety Act’s children’s safety codes, which are intended to “tame toxic algorithms”. The Molly Rose Foundation, which receives funding from Meta, is concerned that the regulator has recommended platforms spend just £80,000 to correct these algorithms.

An Ofcom spokesperson said: “Change is happening. Since this research was carried out, our new measures to protect children online have come into force. These will make a meaningful difference to children – helping to prevent exposure to the most harmful content, including suicide and self-harm material. And for the first time, services will be required by law to tame toxic algorithms.”

The technology secretary, Peter Kyle, said that since the Online Safety Act came into effect, 45 sites have come under investigation. “Ofcom is also considering how to strengthen existing measures, including by proposing that companies use proactive technology to protect children from self-harm content and that sites go further in making algorithms safe,” he added.

A spokesperson for TikTok said: “Teen accounts on TikTok have 50+ features and settings designed to help them safely express themselves, discover and learn, and parents can further customise 20+ content and privacy settings through Family Pairing. With over 99% of violative content proactively removed by TikTok, the findings don’t reflect the real experience of people on our platform, which the report admits.”

A spokesperson for Meta said: “We disagree with the assertions of this report and the limited methodology behind it.

“Tens of millions of teens are now in Instagram Teen Accounts, which offer built-in protections that limit who can contact them, the content they see, and the time they spend on Instagram. We continue to use automated technology to remove content encouraging suicide and self-injury, with 99% proactively actioned before being reported to us. We developed Teen Accounts to help protect teens online and continue to work tirelessly to do just that.”


