    Instagram’s safety features for teens are ‘woefully ineffective’ despite Meta’s promises, report finds

    Despite years of congressional hearings, lawsuits, academic research, whistleblowers and testimony from parents and teenagers about the dangers of Instagram, Meta’s wildly popular app has failed to protect children from harm, with “woefully ineffective” safety measures, according to a new report from former employee and whistleblower Arturo Bejar and four nonprofit groups.

Meta’s efforts at addressing teen safety and mental health on its platforms have long been met with criticism that the changes don’t go far enough. Now, the report published Thursday by Bejar; Cybersecurity for Democracy, a research project at New York University and Northeastern University; the Molly Rose Foundation; Fairplay; and ParentsSOS claims Meta has chosen not to take “real steps” to address safety concerns, “opting instead for splashy headlines about new tools for parents and Instagram Teen Accounts for underage users.”

    Meta said the report misrepresents its efforts on teen safety.

    The report evaluated 47 of Meta’s 53 safety features for teens on Instagram, and found that the majority of them are either no longer available or ineffective. Others reduced harm, but came with some “notable limitations,” while only eight tools worked as intended with no limitations. The report’s focus was on Instagram’s design, not content moderation.

    “This distinction is critical because social media platforms and their defenders often conflate efforts to improve platform design with censorship,” the report says. “However, assessing safety tools and calling out Meta when these tools do not work as promised, has nothing to do with free speech. Holding Meta accountable for deceiving young people and parents about how safe Instagram really is, is not a free speech issue.”

    Meta called the report “misleading, dangerously speculative” and said it undermines the important conversation about teen safety.

    “This report repeatedly misrepresents our efforts to empower parents and protect teens, misstating how our safety tools work and how millions of parents and teens are using them today. Teen Accounts lead the industry because they provide automatic safety protections and straightforward parental controls,” Meta said. “The reality is teens who were placed into these protections saw less sensitive content, experienced less unwanted contact, and spent less time on Instagram at night. Parents also have robust tools at their fingertips, from limiting usage to monitoring interactions. We’ll continue improving our tools, and we welcome constructive feedback—but this report is not that.”

    Meta has not disclosed what percentage of parents use its parental control tools. Such features can be useful for families in which parents are already involved in their child’s online life and activities, but experts say that’s not the reality for many people.

    New Mexico Attorney General Raúl Torrez—who has filed a lawsuit against Meta claiming it fails to protect children from predators—said it is unfortunate that Meta is “doubling down on its efforts to persuade parents and children that Meta’s platforms are safe—rather than making sure that its platforms are actually safe.”

To evaluate Instagram’s safeguards, the authors created teen test accounts, along with malicious adult and teen accounts that attempted to interact with them.

    For instance, while Meta has sought to limit adult strangers from contacting underage users on its app, adults can still communicate with minors “through many features that are inherent in Instagram’s design,” the report says. In many cases, adult strangers were recommended to the minor account by Instagram’s features such as reels and “people to follow.”

    “Most significantly, when a minor experiences unwanted sexual advances or inappropriate contact, Meta’s own product design inexplicably does not include any effective way for the teen to let the company know of the unwanted advance,” the report says.

    Instagram also pushes its disappearing messages feature to teenagers with an animated reward as an incentive to use it. Disappearing messages can be dangerous for minors and are used for drug sales and grooming, “and leave the minor account with no recourse,” according to the report.

    Another safety feature, which is supposed to hide or filter out common offensive words and phrases in order to prevent harassment, was also found to be “largely ineffective.”

    “Grossly offensive and misogynistic phrases were among the terms that we were freely able to send from one Teen Account to another,” the report says. For example, a message that encouraged the recipient to kill themselves—and contained a vulgar term for women—was not filtered and had no warnings applied to it.

    Meta says the tool was never intended to filter all messages, only message requests. The company expanded its teen accounts on Thursday to users worldwide.

As it sought to add safeguards, Meta also promised it wouldn’t show teens inappropriate content, such as posts about self-harm, eating disorders or suicide. The report found that its teen avatars were nonetheless recommended age-inappropriate sexual content, including “graphic sexual descriptions, the use of cartoons to describe demeaning sexual acts, and brief displays of nudity.”

    “We were also algorithmically recommended a range of violent and disturbing content, including Reels of people getting struck by road traffic, falling from heights to their death (with the last frame cut off so as not to see the impact), and people graphically breaking bones,” the report says.

    In addition, Instagram also recommended a “range of self-harm, self-injury, and body image content” on teen accounts that the report says “would be reasonably likely to result in adverse impacts for young people, including teenagers experiencing poor mental health, or self-harm and suicidal ideation and behaviors.”

    The report also found that children under 13—and as young as six—were not only on the platform but were incentivized by Instagram’s algorithm to perform sexualized behavior such as suggestive dances.

    The authors made several recommendations for Meta to improve teen safety, including regular red-team testing of messaging and blocking controls, providing an “easy, effective, and rewarding way” for teens to report inappropriate conduct or contacts in direct messages and publishing data on teens’ experiences on the app. They also suggest that the recommendations made to a 13-year-old’s teen account should be “reasonably PG-rated,” and Meta should ask kids about their experiences of sensitive content they have been recommended, including “frequency, intensity, and severity.”

    “Until we see meaningful action, Teen Accounts will remain yet another missed opportunity to protect children from harm, and Instagram will continue to be an unsafe experience for far too many of our teens,” the report says.

    —Barbara Ortutay, AP technology writer
