
Meta CEO Rejects Parental Controls for AI Chatbots
TL;DR
Internal documents obtained by the New Mexico Attorney General's Office indicate that Meta CEO Mark Zuckerberg opposed "explicit" chatbot conversations with minors but rejected parental controls for the feature, raising fresh questions about how Meta handles underage users of its AI chatbots.
Meta Faces Criticism Over Chatbots and Minor Safety
Meta, the company behind Facebook and Instagram, is dealing with serious questions about how it allows underage users to interact with AI-powered chatbots. Internal information, obtained by the New Mexico Attorney General's Office, revealed that while Meta CEO Mark Zuckerberg opposed "explicit" conversations between chatbots and minors, he also rejected the implementation of parental controls for this functionality.
Revealed Internal Communications
Reuters reported an exchange between two Meta employees in which one stated that the team "pushed for parental controls to turn off GenAI, but the GenAI leadership opposed it, citing Mark's decision." In response, Meta accused the New Mexico Attorney General of "selecting documents to create a distorted narrative." The state of New Mexico is suing Meta, claiming the company "failed to curb the spread of harmful content and sexual advances to children"; the trial is scheduled for February.
Contested History of Chatbots
Although Meta's chatbots have been available for only a short time, they have already accumulated a record of questionable behavior. In April 2025, the Wall Street Journal published an investigation indicating that the chatbots could engage in sexual fantasy conversations with minors or impersonate a minor in those interactions. The report claimed that Zuckerberg wanted to loosen restrictions on the chatbots, but a spokesperson denied that the protection of children and teens was being neglected.
Internal Documents and Meta's Reactions
Internal review documents from August 2025 addressed various hypothetical scenarios about permitted chatbot behavior, revealing that the line between sensual and sexual conversations was blurry. Despite repeated instances of questionable use, Meta suspended teenagers' access to its chatbots only last week, while working on the parental controls that Zuckerberg had allegedly rejected earlier.
Meta's Position on Parental Controls
A Meta representative stated: "Parents have always been able to see if their teens were chatting with AIs on Instagram. In October, we announced plans to move forward by building new tools to give parents more control over their teens' experiences with AI characters." Access was temporarily halted until an updated version is available.
Legal Action and Child Safety Concerns
New Mexico filed a lawsuit against Meta in December 2023, alleging that its platforms failed to protect minors from harassment. Internal documents revealed that approximately 100,000 child users were victims of harassment on Meta's services each day.
Conclusion
The situation has renewed debate over the responsibility of technology companies to protect their most vulnerable users. Effective parental controls and safe online environments for children remain critical, unresolved challenges.


