Meta is finally being held accountable. With AI, we might not have as much time.
It took 20 years and a generation of irreversible harm to hold social media giants accountable. AI could be exponentially worse, with its suicide companions, deepfake child sexual abuse material, and algorithmic manipulation at scale. And this time, the hourglass is already running out.
But let’s take a moment to acknowledge what just happened. This past week felt like a reckoning. Meta was found liable in not one but two landmark cases involving harm to children. A New Mexico jury ordered Meta to pay $375 million in damages after finding the company liable for enabling child sexual exploitation and misleading users about the mental health effects of its platforms, in violation of state consumer protection law. The following day, a California jury ordered Meta and YouTube to pay $6 million in a case brought by a young woman known as K.G.M., who began using social media at age 6 and alleged that algorithmic design choices caused her body dysmorphia, thoughts of self-harm, and acute anxiety and depression.
These were the first cases of their kind. They will not be the last. More than 2,000 plaintiffs, including families, school districts, and state attorneys general, have filed coordinated lawsuits against Meta, YouTube, TikTok, and Snap, the Guardian reports.
Both cases turned on the companies’ own internal documents: emails in which employees joked about being pushers, and internal research showing that executives knew exactly what their products were doing to children. Meta’s and YouTube’s lawyers struggled to refute evidence their own clients had produced.
None of this would have been possible without whistleblowers. Meta employees had been sounding the alarm for years, warning internally that Instagram worsened body image issues in teen girls and that its algorithms repeatedly surfaced eating disorder and self-harm content to minors. In 2021, Frances Haugen went public with internal documents showing that Meta deliberately made its products addictive. In 2025, two whistleblowers from Meta’s VR division testified before the Senate Judiciary Committee that children were being exposed to sexual adult content in VR environments while the company hid the evidence. Whistleblowers didn’t just expose wrongdoing. They made legal accountability possible.
But what happens when whistleblower channels narrow as cuts mount across the tech industry? Rumors of major Meta layoffs have circulated for weeks. On Wednesday, the company cut 700 employees from Reality Labs, with more cuts expected across the organization. This isn’t incidental. The more job insecurity becomes a feature of the industry, the less likely workers are to come forward. Fewer whistleblowers means less accountability, for social media and for AI.
So was this a victory or just a ripple? The verdicts are serious, but let’s be clear-eyed: we’ve known about these harms for years. What Meta was held accountable for is not abstract. It created a digital marketplace that enabled child sex trafficking and engineered algorithms that gave young girls eating disorders and suicidal ideation. And it did so knowingly. The juries spoke for all of us.
And yet: the same week those verdicts came down, Meta announced stock awards worth up to $921 million each for six top executives over five years. For a company generating billions in quarterly revenue, $375 million in damages is a cost of doing business, not a consequence. Executives at the top will keep collecting millions while employees in the lowest tiers absorb the cuts. This is the accountability gap. And it’s about to get much wider.
The comparison to Big Tobacco is instructive, but incomplete. What actually changed tobacco culture wasn’t just regulation — it was regulation that made the harm impossible to ignore. Graphic warning labels on cigarette packages worked not because they informed, but because they triggered visceral discomfort. What would the equivalent look like for social media? Imagine opening Instagram to a mandatory disclaimer: “Our products have been found liable for enabling child sexual exploitation and causing serious mental health harm.” Would you scroll the same way? Would you hand your child the phone?
Regulation works best not as a sledgehammer, but as a tool to reshape how we collectively perceive what we’re using… and what we’re tolerating.
The stakes couldn’t be higher. We are living under an administration that appointed Mark Zuckerberg as an AI advisor the same day a jury found his company liable for knowingly exposing children to sexual exploitation. Big tech and government are consolidating power in ways that feel inevitable.
But they are not inevitable. The public has leverage. We need stronger whistleblower protections and regulators willing to match punishment to harm. Civil society has held power accountable before. The question is whether we’ll move fast enough this time—before AI makes the damage irreversible.
—
If you work inside a tech or AI company and you’ve seen something that concerns you, you don’t have to navigate it alone. Psst.org is a nonprofit that helps tech workers disclose public-interest information safely. We offer pro bono legal advice to help you figure out what you have, protect what you know, and decide when and how to safely disclose. You can deposit information securely and confidentially via our Safe, and we’ll help you understand your options. All conversations are confidential.