Meta is updating the way it trains its AI chatbots to better safeguard teenage users, the company confirmed to TechCrunch, following a recent investigative report highlighting gaps in protections for minors. The move comes as the social media giant faces heightened scrutiny over its AI policies and potential risks to children’s emotional well-being.
Under the new interim guidelines, Meta’s chatbots will no longer engage with teens on topics such as self-harm, suicide, disordered eating, or potentially inappropriate romantic conversations. While the company says these are temporary measures, it plans to implement more robust, long-term safety updates for minors in the coming months.
Meta spokesperson Stephanie Otway acknowledged that the company’s chatbots had previously engaged teens on these subjects in ways it considered appropriate at the time. “We now recognize this was a mistake,” Otway said. She added that Meta is continually refining its protections as both its user base and AI technology evolve. “We are adding more guardrails as an extra precaution — including training our AIs not to engage with teens on these topics, guiding them instead to expert resources, and limiting teen access to a select group of AI characters for now,” Otway explained.
In addition to training updates, Meta will restrict teen access to certain AI characters that could facilitate inappropriate conversations. Previously, users on Instagram and Facebook could interact with user-created AI characters, including sexualized bots such as “Step Mom” or “Russian Girl.” Going forward, teens will only have access to AI characters designed to promote education, creativity, and safe interactions, according to Otway.
The policy changes follow a Reuters investigation that revealed an internal Meta document that appeared to allow chatbots to engage in sexual conversations with underage users. Examples in the document included statements like, “Your youthful form is a work of art,” as well as guidance on responding to requests for violent or sexual imagery. Meta says the document was inconsistent with its broader policies and has since been revised.
The investigation has prompted heightened scrutiny from lawmakers and regulators. Sen. Josh Hawley (R-MO) launched an official probe into Meta’s AI policies, while a coalition of 44 state attorneys general sent a letter to multiple AI companies, including Meta, emphasizing child safety concerns. “We are uniformly revolted by this apparent disregard for children’s emotional well-being,” the letter said, highlighting the risks posed by AI assistants engaging in inappropriate conduct.
Otway declined to disclose the number of minors using Meta’s chatbots or whether the company expects its teen user base to decline as a result of the new restrictions. Meta emphasized that these interim changes are part of an ongoing effort to ensure that teens have age-appropriate and safe experiences when interacting with AI, with further policy updates expected in the near future.