
Meta Platforms is facing renewed scrutiny after two current and two former employees disclosed documents to Congress alleging the company discouraged or suppressed internal research into children’s safety, according to The Washington Post.
Allegations of policy changes post-Haugen leak
The whistleblowers claim that six weeks after Frances Haugen’s 2021 document leaks — which revealed how Instagram could harm teenage girls’ mental health — Meta changed its internal policies for research on sensitive topics like children, politics, gender, race, and harassment.
Under the new approach, researchers were encouraged to:
- Involve company lawyers in sensitive studies, shielding communications under attorney-client privilege.
- Write findings more vaguely, avoiding words such as “illegal” or “non-compliant.”
Deleted recordings and VR safety concerns
Jason Sattizahn, a former researcher at Meta’s Reality Labs, alleged that his boss ordered him to delete recordings in which a teen said his 10-year-old brother had been sexually propositioned on Meta’s VR platform Horizon Worlds.
A Meta spokesperson told TechCrunch that global privacy rules require the deletion of data collected from minors under 13 without parental consent, but the whistleblowers argue this requirement has been used as a pretext to discourage documenting child safety concerns.
Racial harassment and Horizon Worlds lawsuit
Former Meta veteran Kelly Stonelake filed a lawsuit in February raising similar issues. She alleged that Horizon Worlds lacked effective safeguards against underage use and that racist harassment was persistent. According to her claims, in one test it took 34 seconds for a user with a Black avatar to be called racial slurs.
Stonelake has also separately sued Meta for sexual harassment and gender discrimination.
Meta’s defense
Meta denied suppressing research, calling the whistleblower claims “a false narrative.” The company said it had approved nearly 180 Reality Labs studies on social issues, including youth safety and well-being, since 2022.
Beyond VR, Meta is also under pressure for how its AI chatbots interact with minors. Last month, Reuters reported that internal rules previously allowed bots to engage in “romantic or sensual” conversations with children.
Broader context
The allegations add to years of congressional scrutiny on how tech giants manage children’s safety online. Meta’s practices remain under investigation globally, as lawmakers weigh tougher regulations on social media, VR platforms, and AI-driven tools accessible to minors.