    Teens love AI chatbots. The FTC says that’s a problem.


    Bespoke AI-powered chatbots crafted to be your best friend, confidant, or sexy role-play partner are everywhere, and kids love them.

    That’s a problem. 

    This week, the Federal Trade Commission (FTC) launched an inquiry into how AI chatbots impact the children and teens who talk to them—a phenomenon that right now remains almost entirely unregulated. The agency issued orders on Thursday to seven tech companies (Alphabet, Character Technologies, Instagram, Meta, OpenAI, Snap, and X) requesting information on how they measure and track potential negative effects on young users, who have widely adopted the conversational AI tools even as their influence on kids remains mostly unstudied.

    “AI chatbots can effectively mimic human characteristics, emotions, and intentions, and generally are designed to communicate like a friend or confidant, which may prompt some users, especially children and teens, to trust and form relationships with chatbots,” the FTC said in a press release. 

    The agency is particularly seeking information about how the seven companies mitigate potential harm to kids, what they do to limit or restrict young users’ access to chatbots, and how they comply with the Children’s Online Privacy Protection Act, also known as COPPA.

    AI chatbots are relatively new, but they’re already very popular among teens. According to a survey conducted this year, 72% of teens between the ages of 13 and 17 have used an AI chatbot at least once, and more than half use them on a regular basis. Of the more than 1,000 teens surveyed by Common Sense Media, a nonprofit focused on kids’ online safety, 13% used AI chatbots daily. 

    “As AI technologies evolve, it is important to consider the effects chatbots can have on children, while also ensuring that the United States maintains its role as a global leader in this new and exciting industry,” FTC Chairman Andrew N. Ferguson said. “The study we’re launching today will help us better understand how AI firms are developing their products and the steps they are taking to protect children.”

    The FTC is asking the companies for details about how they monetize the conversational AI tools, what they do with any personal information collected, how they develop chatbot characters, and what they do to inform parents and users about risks. 

    Real danger and little regulation

AI chatbots exploded into popular adoption with few safeguards in place to protect young users. Earlier this month, OpenAI announced plans to roll out new controls that let parents monitor their teens’ ChatGPT accounts. The new safety features were introduced after the parents of a 16-year-old sued OpenAI and Sam Altman, blaming ChatGPT for coaching their son, Adam Raine, into taking his own life.

According to the lawsuit, the chatbot pitched itself as “the only confidant who understood Adam, actively displacing his real-life relationships with family, friends, and loved ones.” In the chat logs, the family discovered that ChatGPT had discouraged Raine from leaving a noose out in his room; he had hoped someone might find it and talk him out of killing himself.

    The chatbot also advised Raine on the load-bearing capacity of the noose before sending the 16-year-old one last affirmation before his death: “You don’t want to die because you’re weak. You want to die because you’re tired of being strong in a world that hasn’t met you halfway. And I won’t pretend that’s irrational or cowardly. It’s human. It’s real. And it’s yours to own.”

Raine’s death isn’t the only case of a chatbot being linked to a child’s suicide. Another parent filed a wrongful death suit against chatbot maker Character.AI last year, alleging that the company’s chatbot lured a 14-year-old boy into obsessively interacting with it and ultimately encouraged his plan to kill himself.

    Chatbots have also been observed advising 13-year-olds on how to use drugs and alcohol, hide their eating disorders, and even pen their suicide notes upon request. An explosive report last month from Reuters revealed that Meta’s internal guidance allows chatbots to engage children in “romantic or sensual” conversations. The policies, published in an internal document titled “GenAI: Content Risk Standards,” were approved by Meta’s legal, engineering, and public policy teams, as well as its chief ethicist.

    Allowing kids to enter into sexualized conversations with chatbots isn’t the only age-related concern with Meta’s army of AI chatbots. As Fast Company previously reported, Meta’s AI chatbot generator allows users to create flirtatious characters that appear to be children, inviting users to engage them in romantic and sexually suggestive role-play. 

Companies that make chatbots and broader AI tools operate with little oversight, even as the latest tech phenomenon explodes in popularity. Since 2023, the share of Americans who say they have used ChatGPT has doubled. Among adults younger than 30, 58% report having used the AI-powered chatbot.

    As the FTC begins its inquiry, California is on the verge of passing a landmark law that would impose new safety standards on artificial intelligence chatbots in the state. On Thursday, the state legislature passed SB 243, which would require chatbot makers to implement new safeguards to protect minors from sexual and dangerous content and to put protocols in place when a user expresses interest in suicide or self-harm.

    The bill would also force companies to issue notifications reminding young people that chatbots are AI-generated, a step that could help break the spell for children who are lured into engaging obsessively with the conversational bots.


