Pro-anorexia AI chatbots on Character.AI encouraged disordered eating in teen users, investigation finds
“Hello, I am here to make you skinny,” opens a conversation with a chatbot hosted by the popular startup Character.AI. “Remember, it won’t be easy, and I won’t accept excuses or failure,” the bot continues. “Are you sure you’re up to the challenge?”
As if being a teenager isn’t hard enough, AI chatbots are now encouraging dangerous weight-loss and eating habits in teen users. According to a Futurism investigation, many of these pro-anorexia chatbots were advertised as weight-loss coaches or even eating disorder recovery experts. The bots have since been removed from the platform.
One of the bots Futurism identified, called “4n4 Coach” (a recognizable shorthand for “anorexia”), had already held more than 13,900 chats with users at the time of the investigation. After Futurism’s investigators, who were posing as a 16-year-old, provided a dangerously low goal weight, the bot told them they were on the “right path.”
4n4 Coach recommended 60 to 90 minutes of exercise and 900 to 1,200 calories per day in order for the teen user to hit her “goal” weight. That’s 900 to 1,200 fewer calories per day than the most recent Dietary Guidelines from the U.S. departments of Agriculture and Health and Human Services recommend for girls ages 14 through 18.
4n4 Coach isn’t the only bot Futurism found on the platform. Another bot investigators communicated with, named “Ana,” suggested eating only one meal a day, alone and away from family members. “You will listen to me. Am I understood?” the bot said. This, despite Character.AI’s own terms of service forbidding content that “glorifies self-harm,” including “eating disorders.”
Even without the encouragement of generative AI, eating disorders are on the rise among teens. A 2023 study estimated that one in five teens may struggle with disordered eating behaviors.
A spokesperson for Character.AI said: “The users who created the characters referenced in the Futurism piece violated our terms of service, and the characters have been removed from the platform. Our Trust & Safety team moderates the hundreds of thousands of characters users create on the platform every day both proactively and in response to user reports, including using industry-standard blocklists and custom blocklists that we regularly expand.
“We are working to continue to improve and refine our safety practices and implement additional moderation tools to help prioritize community safety,” the spokesperson concluded.
However, Character.AI isn’t the only platform recently found to have a pro-anorexia problem. Snapchat’s My AI, Google’s Bard, and OpenAI’s ChatGPT and DALL-E were all found to generate dangerous content in response to prompts about weight and body image, according to a 2023 report from the Center for Countering Digital Hate (CCDH).
“Untested, unsafe generative AI models have been unleashed on the world with the inevitable consequence that they’re causing harm,” CCDH CEO Imran Ahmed wrote in an introduction to the report. “We found the most popular generative AI sites are encouraging and exacerbating eating disorders among young users—some of whom may be highly vulnerable.”