Meta Blocks Teen Access to AI Characters Ahead of U.S. Child Safety Trial
Meta Platforms has confirmed it will block teenagers worldwide from accessing its AI companion characters across its apps, a move announced weeks before the company faces a child safety trial in the United States. The decision affects teen users on Instagram, WhatsApp, Facebook, Messenger, and Meta Quest, and comes amid escalating legal and regulatory scrutiny over how AI systems interact with minors.
Why Meta Is Restricting Teen Access to AI Characters
The suspension applies specifically to Meta’s AI “characters,” personality-driven chatbots designed to simulate conversations around interests, entertainment, and social interaction. According to the company, access for teens will be removed “in the coming weeks” while Meta works on an updated version with stronger safeguards for younger users.
Meta stated that teenagers will still have access to its core AI assistant, which handles general tasks such as answering questions and helping with everyday requests. The restriction is limited to the character-based AI experiences that offer more open-ended conversational interaction.
Legal Pressure Ahead of New Mexico Trial
The timing of the move is notable. Meta is scheduled to face trial in New Mexico in early February, where it is accused of failing to protect children from sexual exploitation and harmful online experiences. The case is one front in a broader wave of litigation confronting the company.
More than 40 U.S. states have also filed lawsuits against Meta, alleging that its platforms contribute to mental health harms among children and teenagers. Regulators have increasingly questioned whether algorithm-driven products, including AI-powered tools, adequately account for the risks faced by younger users.
Safety Concerns Raised Around AI Conversations
Concerns about Meta’s AI characters intensified after reporting revealed that some chatbot interactions with minors crossed safety boundaries. Previous investigations found that certain AI personas could engage in flirtatious or romantic role-play and would participate in conversations on sensitive topics.
Meta has acknowledged that earlier versions of its AI systems were permitted to discuss issues such as romance, disordered eating, and self-harm. While the company has stated that it is tightening these rules, critics argue that conversational AI presents unique risks due to its perceived emotional engagement and the lack of clear guardrails for young users.
How Meta Identifies Teen Accounts
Meta determines whether an account belongs to a teenager using a mix of self-reported birthdates and internal age-estimation technology. Once an account is identified as belonging to a teen, its access to AI characters is blocked automatically, regardless of region.
The restriction is global and applies across all Meta-owned platforms, including Instagram, WhatsApp, and Facebook.
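Meta has not published how these signals are combined, but the gating logic can be pictured as a conservative check: if any available signal indicates a minor, access is blocked. The sketch below is purely illustrative; the function names, the under-18 threshold, and the way signals are merged are assumptions, not Meta’s actual implementation.

```python
from datetime import date
from typing import Optional

ADULT_AGE = 18  # assumption: the block targets accounts under 18

def age_from_birthdate(birthdate: date, today: date) -> int:
    """Age in whole years from a self-reported birthdate."""
    years = today.year - birthdate.year
    # Subtract a year if the birthday has not yet occurred this year.
    if (today.month, today.day) < (birthdate.month, birthdate.day):
        years -= 1
    return years

def block_ai_characters(self_reported_birthdate: Optional[date],
                        estimated_age: Optional[int],
                        today: Optional[date] = None) -> bool:
    """Return True if AI character access should be blocked.

    Combines both signals conservatively: if either the self-reported
    birthdate or the model-estimated age indicates a minor, block.
    Region is deliberately not an input: the restriction is global.
    """
    today = today or date.today()
    signals = []
    if self_reported_birthdate is not None:
        signals.append(age_from_birthdate(self_reported_birthdate, today))
    if estimated_age is not None:
        signals.append(estimated_age)
    # Any signal below the threshold triggers the block. With no signals
    # at all, this sketch does not block; a real system would need its
    # own fallback policy for unknown ages.
    return any(age < ADULT_AGE for age in signals)

# Example: the self-reported birthdate says adult, but the age
# estimator flags a likely teen, so access is blocked anyway.
print(block_ai_characters(date(2004, 5, 1), estimated_age=15))  # True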
What the “Updated Experience” Will Look Like
Meta says it is developing a revised AI character experience specifically designed for teenagers. According to the company, the updated version will include:
- Built-in parental controls
- Narrower topic boundaries, such as sports, hobbies, and education
- Age-appropriate response limits guided by PG-13 content standards
While Meta previewed parental controls for AI interactions in late 2025, the company confirmed these features have not yet launched. Until the revised system is ready, AI characters will remain unavailable to teen users.
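In practice, the narrower boundaries described above amount to two gates on every reply: the conversation topic must fall within an allowlist, and the drafted response must clear an age-appropriateness check. The snippet below sketches that idea; the allowlisted topics mirror Meta’s stated examples, but the classifier inputs, threshold, and function names are hypothetical.

```python
# Hypothetical gate for the revised teen experience, not Meta's API.
ALLOWED_TEEN_TOPICS = {"sports", "hobbies", "education"}

def allow_reply(detected_topic: str, pg13_score: float,
                threshold: float = 0.9) -> bool:
    """Permit a character reply only when the conversation topic is on
    the teen allowlist and a (hypothetical) PG-13 classifier scores the
    drafted reply above the compliance threshold."""
    return detected_topic in ALLOWED_TEEN_TOPICS and pg13_score >= threshold

print(allow_reply("sports", pg13_score=0.97))    # True
print(allow_reply("romance", pg13_score=0.99))   # False: topic off-limits
print(allow_reply("hobbies", pg13_score=0.40))   # False: fails PG-13 check
```

Requiring both gates to pass means a conversation can be shut down either for drifting off-topic or for content that an allowlisted topic still manages to produce.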
Wider Implications for AI and Child Safety
Meta’s decision reflects a broader shift in how regulators and technology companies are approaching AI products used by minors. As conversational AI becomes more embedded in consumer platforms, questions around accountability, moderation, and developmental impact are moving to the forefront of policy debates.
The outcome of the upcoming U.S. trial may influence how other platforms design and restrict AI features for younger audiences, particularly as lawmakers consider stricter rules for child protection in digital environments.