OpenAI deletes dedicated super-safety team

The Future. OpenAI is reportedly shuttering its “superalignment” team — the safety team in charge of figuring out safeguards for future AI that’s more intelligent than humans. Considering OpenAI’s rapid expansion and quest to build artificial general intelligence, the news may put extra pressure on Congress to pass safety-focused AI regulations as soon as possible.

Out of alignment
OpenAI’s decision to shut down its superalignment team is raising a lot of eyebrows.

  • Superalignment team heads Ilya Sutskever (OpenAI’s chief scientist, who previously clashed with CEO Sam Altman) and Jan Leike both departed last week, and many other team members were either fired or resigned in recent months.
  • Insiders report that the team, which was created last July to provide safety checks on both OpenAI’s systems and those of competitors, has been on life support for a while… especially as it fought for promised resources that were never delivered.
  • That led many on the team to believe that OpenAI’s support for superalignment was little more than PR, prompting Leike to declare in his resignation that the company’s “safety culture and processes have taken a backseat to shiny products.”

Moving forward, OpenAI’s safety-focused researchers will be spread across different departments. Only time will tell if that’s the safest choice.

David Vendrell

Born and raised a stone’s throw from the Everglades, David left the Florida swamp for the California desert. Over-caffeinated, he stares at his computer too long, either writing the TFP newsletter or screenplays. He is repped by Anonymous Content.


No design skills needed! 🪄✨

Canva Pro is the design software that makes design simple, convenient, and reliable. Create what you need in no time! It’s jam-packed with time-saving tools that make anyone look like a professional designer.

Create amazing content quickly with Canva