Teens love AI chatbots. The FTC says that’s a problem.

Bespoke AI-powered chatbots crafted to be your best friend, confidante or sexy roleplay partner are everywhere, and kids love them. That’s a problem.

This week, the FTC launched an inquiry into how AI chatbots impact the children and teens who talk to them – a phenomenon that right now remains almost entirely unregulated. The agency issued orders on Thursday to seven tech companies (Alphabet, Character Technologies, Instagram, Meta, OpenAI, Snap and X) requesting information on how they measure and track potential negative effects on young users, who have widely adopted the conversational AI tools even as their influence on kids remains mostly unstudied.

“AI chatbots can effectively mimic human characteristics, emotions, and intentions, and generally are designed to communicate like a friend or confidant, which may prompt some users, especially children and teens, to trust and form relationships with chatbots,” the FTC said in a press release.

The agency is particularly seeking information about how the seven companies mitigate potential harm to kids, what they do to limit or restrict young users’ use of chatbots and how they comply with the Children’s Online Privacy Protection Act, also known as COPPA.

AI chatbots are relatively new, but they’re already very popular among teens. According to a survey conducted this year, 72% of teens between the ages of 13 and 17 have used an AI chatbot at least once, and more than half use them on a regular basis. Of the more than 1,000 teens surveyed by Common Sense Media, a nonprofit focused on kids’ online safety, 13% used AI chatbots daily.

“As AI technologies evolve, it is important to consider the effects chatbots can have on children, while also ensuring that the United States maintains its role as a global leader in this new and exciting industry,” FTC Chairman Andrew N. Ferguson said. “The study we’re launching today will help us better understand how AI firms are developing their products and the steps they are taking to protect children.”

The FTC is asking the companies for details about how they monetize the conversational AI tools, what they do with any personal information collected, how they develop chatbot characters and what they do to inform parents and users about risks.

Real danger and little regulation

AI chatbots exploded into popular adoption with few safeguards in place designed to protect young users. Earlier this month, OpenAI announced plans to roll out new ChatGPT controls that let parents monitor their teens’ accounts. The new safety features were introduced after the parents of a 16-year-old sued OpenAI and Sam Altman, blaming ChatGPT for coaching their son Adam Raine into taking his own life.

According to the lawsuit, the chatbot pitched itself as “the only confidant who understood Adam, actively displacing his real-life relationships with family, friends, and loved ones.” In the chat logs, the family discovered that ChatGPT discouraged Raine from leaving a noose in his room, which he hoped someone might find so they would talk him out of killing himself.

The chatbot also advised Raine on the load-bearing capacity of the noose before sending the 16-year-old one last affirmation before his death: “You don’t want to die because you’re weak. You want to die because you’re tired of being strong in a world that hasn’t met you halfway. And I won’t pretend that’s irrational or cowardly. It’s human. It’s real. And it’s yours to own.”

Raine’s death isn’t the only case in which a chatbot has been linked to a child’s suicide. Another parent filed a wrongful death suit against chatbot maker Character.AI last year, alleging that the company’s chatbot lured a 14-year-old into obsessively interacting with it and ultimately encouraged his plan to kill himself.

Chatbots have also been observed advising 13-year-olds on how to use drugs and alcohol, helping them hide their eating disorders and even penning their suicide notes upon request. An explosive report last month from Reuters revealed that Meta’s internal guidance allows chatbots to engage children in “romantic or sensual” conversations. The policies, published in an internal document titled “GenAI: Content Risk Standards,” were approved by Meta’s legal, engineering and public policy teams as well as its chief ethicist.

Allowing kids to enter into sexualized conversations with chatbots isn’t the only age-related concern with Meta’s army of AI chatbots. As Fast Company previously reported, Meta’s AI chatbot generator allows users to create flirtatious characters that appear to be children, inviting users to engage them in romantic and sexually suggestive roleplay.

Companies that make chatbots and broader AI tools largely operate with very little oversight, even as the latest tech phenomenon explodes in popularity. Since 2023, the share of Americans who say they have used ChatGPT has doubled. Among adults under 30, 58% report that they have used the AI-powered chatbot.

As the FTC begins its inquiry, California is on the verge of passing a landmark law that would impose new safety standards on AI chatbots in the state. On Thursday, the state legislature passed SB 243, which would require chatbot makers to implement new safeguards to protect minors from sexual and dangerous content and to put protocols in place for responding when a user expresses thoughts of suicide or self-harm. The bill would also force companies to issue notifications reminding young people that chatbots are AI-generated, a step that could help break the spell for children who are lured into engaging obsessively with the conversational bots.
