Musk Pushes to Remove OpenAI CEO Sam Altman in Lawsuit
The tech entrepreneur originally filed suit in 2024, alleging he was misled into contributing $38 million during OpenAI's founding in 2015 based on assurances it would operate as a nonprofit. Since then, the organization, now valued at $852 billion, has undergone structural changes, adopting a hybrid model in which its nonprofit arm retains a 26% stake in the for-profit division that houses its flagship product, ChatGPT.
In the latest amended complaint, Musk’s legal team is seeking to “strip Sam Altman and Greg Brockman of their positions of authority and the personal financial benefits they extracted from OpenAI’s illicit for-profit operations and conversion,” escalating demands beyond financial restitution.
The filing also calls for both divisions of OpenAI to uphold earlier commitments to “safety-first AI development and open research for the broad benefit of humanity,” emphasizing the original mission Musk claims has been abandoned. Any damages awarded, the filing notes, would be directed to OpenAI’s nonprofit arm. A trial is scheduled to begin later this month.
OpenAI has dismissed the claims as "wholly unfounded allegations" and has reportedly suggested Musk is working alongside Meta CEO Mark Zuckerberg in an effort to weaken the company's competitive position.
Musk departed OpenAI in 2018 following internal disagreements with Altman. He acquired Twitter, since rebranded as X, in 2022, before launching a rival artificial intelligence venture, xAI, in 2023.
Earlier this year, both xAI and OpenAI disclosed agreements with the Pentagon to incorporate their AI technologies into classified U.S. defense systems. Altman stated that OpenAI’s participation was contingent on restrictions preventing the use of its tools for mass surveillance or fully autonomous weapons.
Those same safeguards, however, have been a sticking point in the Pentagon's dispute with Anthropic, a former defense AI partner. The U.S. Department of War labeled Anthropic a national security risk after the company declined to relax safety controls on its Claude model.
On Wednesday, Anthropic said its latest AI system is “extremely autonomous,” capable of reasoning at the level of a seasoned security researcher, and too advanced for public deployment, as the firm continues its legal confrontation with the Pentagon.