Instagram's tools designed to protect teenagers from harmful content are failing to stop them from seeing suicide and self-harm posts, a study has claimed.
Researchers also said the social media platform, owned by Meta, encouraged children to post content that received highly sexualised comments from adults.
The testing, by child safety groups and cyber researchers, found that 30 of 47 safety tools for teens on Instagram were substantially ineffective or no longer existed.
Meta has disputed the research and its findings, saying its protections have led to teens seeing less harmful content on Instagram.
"This report repeatedly misrepresents our efforts to empower parents and protect teens, misstating how our safety tools work and how millions of parents and teens are using them today," a Meta spokesperson told the BBC.
"Teen Accounts lead the industry because they provide automatic safety protections and straightforward parental controls," they added.
The company introduced teen accounts to Instagram in 2024, saying they would give young people better protections and allow more parental oversight.
The scheme was expanded to Facebook and Messenger in 2025.
The study into the effectiveness of its teen safety measures was carried out by the US research centre Cybersecurity for Democracy and experts, including whistleblower Arturo Béjar, on behalf of child safety groups such as the Molly Rose Foundation.
The researchers found significant issues after setting up fake teen accounts.
In addition to discovering that 30 of the tools were ineffective or simply non-existent, they said nine tools reduced harm but came with limitations.
Only eight of the 47 safety tools analysed were found to be working effectively. As a result, the researchers said, teens were exposed to content that violated Instagram's own policies about what should be visible to young people.
This included posts depicting demeaning sexual acts and suggestions for search terms related to suicide, self-harm or eating disorders.
According to Andy Burrows, chief executive of the Molly Rose Foundation, these failings highlight a corporate culture at Meta that prioritises engagement and profit over safety.
The foundation was established after the death of Molly Russell, who took her own life at 14 in 2017; an inquest in 2022 concluded she died due to the negative effects of online content.
Researchers shared screen recordings that included videos of very young children, some seemingly under 13 years old, posting content. In one instance, a young girl asked viewers to rate her attractiveness.
The researchers argued that Instagram's algorithm incentivises children under 13 to perform risky, sexualised behaviours for likes and views. They also found that teen users could send offensive messages to one another and were directed towards adult accounts.
Mr Burrows criticised Meta's teen accounts as more of a public relations exercise than a serious attempt to address longstanding safety concerns about Instagram.
Meta has faced mounting criticism over its approach to child safety. In January 2024, chief executive Mark Zuckerberg was questioned by US senators about the company's safety policies, amid allegations of harm caused to children on social media.
Although Meta has introduced measures aimed at improving child safety, experts contend that more must be done to make these tools effective.
Meta maintains that the report misrepresents how its tools work, and says teens using its protection settings see less sensitive content, experience less unwanted contact, and spend less time on Instagram at night.
The spokesperson said: "We'll continue improving our tools, and we welcome constructive feedback - but this report is not that." They added that a notable feature, previously known as Take A Break notifications, had been integrated into other tools.