OpenAI Alignment Expert Quits, Claims Company Putting Profits Over Safety

In a move that sent ripples through the AI community yesterday, Jan Leike, co-lead of OpenAI's "superalignment" team, quit his position with a pointed accusation: the company has let safety take a backseat to shipping products.

The resignation—announced via a blunt tweet—claimed the organization "is not currently aligned with pursuing the mission of safe AGI that benefits humanity." Pretty damning words coming from the guy specifically hired to ensure artificial general intelligence doesn't, you know, destroy us all.

There's something almost poetically ironic about an alignment researcher declaring his own company misaligned. It's like a marriage counselor filing for divorce, citing "irreconcilable differences."

I've been tracking OpenAI's evolution since its founding days, and what we're witnessing here isn't just corporate drama. It's the inevitable collision between scientific caution and commercial ambition playing out in real time.

Remember when OpenAI was a non-profit? Those were the days. Founded with lofty ideals about ensuring artificial general intelligence would benefit all humanity, the organization has since morphed into a capped-profit company valued at over $80 billion, with Microsoft as a major investor. Money changes things. Always has.

The tension here isn't surprising to anyone who's spent time around Silicon Valley boardrooms (and I've sat through my share of them). Companies apply what I think of as a "future risk discount rate" to potential dangers—they'll gladly accept theoretical tomorrow problems for concrete today profits. It's human nature scaled to corporate proportions.

Look, Leike wasn't some random engineer. His entire job was figuring out how humans might control AI systems smarter than we are. When your superintelligence control expert walks out because he thinks you're being reckless... well, that's not exactly reassuring.

OpenAI CEO Sam Altman responded predictably, offering the corporate equivalent of "thoughts and prayers" while his company continues shipping products at breakneck speed. The market rewards velocity, not caution.

And who can blame him? With competitors like Anthropic and various Chinese firms breathing down their necks, slowing down for safety reasons might feel existentially threatening to the business itself. Oh, the irony.

I spoke with several AI researchers yesterday (off the record, naturally) who expressed concern but not surprise. "The commercial pressures were always going to win," one told me while nursing a cold brew at a café in Palo Alto. "The question was never if, but when."

What makes all of this particularly frustrating is that nobody—not even the researchers designing these systems—fully understands how the most advanced AI models actually work. We're building increasingly complex black boxes and hoping for the best.

Will this resignation change anything? Probably not. The incentives remain aligned with growth and capability, not caution.

But maybe—just maybe—it'll cause a few investors or board members to ask tougher questions. Though I wouldn't bet my increasingly worthless journalist's salary on it.

In the meantime, OpenAI will continue to "take these concerns very seriously" right up until... well, whenever. The show must go on, existential risk be damned.