Can we just let someone whisper “Uh, This Might Be Bad”?
AI chatbots can be great. Can we ensure workers get a word in edgewise, too?
Artificial intelligence is moving so fast it’s disorienting. Some of the world’s most powerful companies are racing to build “frontier AI” systems with enormous capabilities and, by their own admission, enormous risks. They want us to just trust them, and they have made sweeping promises about being responsible, transparent, and safe.
But hey, here’s the thing about trust: it’s not earned through press releases. It’s earned through actions. And based on recent reporting, “Trust Us” just doesn’t feel like enough here. So, what is the answer? Regulation? Trial and error? A ChatGPT therapist? Head in the sand? A sandbox?
Well, actually, we think the answer is to put our heads together about what we, the people, can do to ensure the AI products being marketed for our use (or our government’s use) are actually being built responsibly. In our experience, one of the few ways to figure that out is by hearing from people on the inside. That’s why Psst.org is joining the AI Whistleblower Initiative and a growing coalition calling on leading AI companies to do something very simple: publish their whistleblowing policies.
You can read about the campaign here. But here’s the big picture: if a company is serious about AI safety, it should start by empowering its own employees to speak up about internal risks. And the first, easiest, most public-facing step in that direction is sharing what whistleblower protections they currently have.
We’re not asking for Claude to spill the tea. We’re not asking for source code. We’re not asking for Sam Altman’s ChatGPT therapist notes. We’re simply asking for a document or link that explains how the company will respond to a worker who has concerns.
It’s a minimalist ask. One that could be fulfilled with a PDF and a tweet. But its significance goes far beyond that.
Why? Because when a company makes its whistleblowing policy public, it signals four crucial things, both to its employees and to the broader public. And let’s be honest: we, the people, are the ones who will be most affected by the technology they’re developing.
Publishing their policies would:
Empower employees to come forward early, before risks become disasters.
Demonstrate sincerity in their commitments to transparency and safety.
Build public trust in how they handle internal dissent.
Send a message to current and former employees: “We don’t fear scrutiny—we welcome it.”
Of course, publishing whistleblowing policies won’t solve the deeper structural issues in AI development. But it is a meaningful first step, especially because so much about this industry remains hidden from view. And if companies can’t meet this most basic ask, we have to ask: what else are they hiding?
We’re entering a world of technologies that could affect democracy, global security, labor, and human rights. The very companies building them have already recognised the risks AI poses and the need for guardrails to mitigate the danger. In fact, they’ve gone so far as to set up the Frontier Model Forum, a place where they can collaborate, privately, on doing just that.
Why the closed doors? They’ve said they’re committed to transparency and responsible development. They’ve told the public, regulators, and their peers that they can police themselves better and faster than governments can. But a company’s best and most effective guardrails are its own employees, who are on the front line and will be the first to spot a problem. We want workers to know they can speak up when they do, and we don’t want them to lose their careers for doing so.
The “Publish Your Policies” campaign isn’t the end of the road. It’s the beginning. It’s a litmus test—one that separates companies that are truly open to scrutiny from those that just say they are. So if frontier AI labs mean it when they say they want responsible development, let’s see it in action. Let’s see the policies.
We’ve supported whistleblowers from Big Tech, government, and industry. We know that even the most committed employees hesitate to speak up when the stakes are high and the risks are unclear. Insiders shouldn’t have to risk their careers to raise the alarm. And the public shouldn’t have to wonder whether the people building the most powerful technology on Earth are allowed to tell the truth.
In a world of black-box algorithms and billion-dollar models, the most powerful safety system we have might just be a person empowered enough to say: “Uh, This Might Be Bad.”
Let’s make sure they’re protected when they do.
Your friends,
Psst.org