Elon Musk Testifies: “I Started OpenAI to Prevent a Terminator Outcome” — What Was He Thinking, and Did He Stop It?
By Professor X
In federal court in Oakland on April 28–29, 2026, Elon Musk took the stand in his high-stakes lawsuit against OpenAI, Sam Altman, and Microsoft. Under oath, he laid out the core motivation behind co-founding OpenAI in 2015: to counter the unchecked advance of artificial general intelligence (AGI) and avert a catastrophic "Terminator outcome" where superintelligent AI destroys humanity.
Musk told the jury he had been worried about AI risks since college. Conversations with Google co-founder Larry Page (who reportedly called Musk a "speciesist" for prioritising humanity) crystallised the urgency. Google's DeepMind was racing ahead with minimal safety focus. Musk's solution: create an open-source, nonprofit competitor dedicated to safe, beneficial AGI that would benefit all of humanity rather than concentrate power in one company or individual.
"I think we want to be in a movie like Star Trek, not a James Cameron movie," Musk testified. AI could cure diseases and create prosperity — or "it could kill us all."
What Was Musk Actually Thinking in 2015?
Musk's concerns were not fringe sci-fi paranoia at the time. Leading AI researchers, philosophers, and even some insiders shared existential risk worries:
Existential Risk: AGI that surpasses human intelligence could pursue goals misaligned with ours (the classic "paperclip maximiser" thought experiment). Uncontrolled superintelligence poses an extinction-level threat.
Race Dynamics: A private arms race (Google, then others) could prioritise capability over safety. Musk wanted an "open" counterweight to force transparency and responsibility.
Power Concentration: He explicitly did not want AGI controlled by a single for-profit entity or autocratic government.
OpenAI launched as a nonprofit with Musk's initial funding (he contributed ~$38–50 million). The mission: "safe and beneficial" AGI, with research published openly. Musk's attorney framed it as a necessary intervention at a time when governments had failed to regulate AI.
This was classic Musk: identify a civilisational risk, use his capital and influence to steer the future, and move fast. He had done it with electric cars (Tesla), space (SpaceX), and neural interfaces (Neuralink).
Did He Stop the Terminator Outcome?
Short answer: No — but he tried, and the fight continues.
OpenAI's Evolution: The nonprofit structure couldn't raise the tens of billions needed for compute and talent. It created a "capped-profit" arm, then pivoted aggressively toward commercialisation. Microsoft's $10+ billion investment, closed-source models, and massive commercial success (ChatGPT, valuations in the hundreds of billions) marked a sharp departure from the original vision. Musk left the board in 2018, citing conflicts (including his desire for more control or a merger with Tesla). He now calls it a betrayal — "stealing a charity."
Musk's Counter-Moves:
- Founded xAI explicitly to pursue "maximum truth-seeking" AI as an alternative.
- Publicly warned about risks, signed pause letters, and lobbied for regulation.
- Sued OpenAI to enforce the original nonprofit mission or unwind the changes.
OpenAI and Altman counter that Musk knew a for-profit path was inevitable, under-delivered on promised funding, and is now acting out of competitive regret after launching a rival.
The reality: Musk slowed nothing in the broader race. If anything, his departure and criticism may have accelerated commercial incentives. AGI development is moving faster than almost anyone predicted in 2015. Safety research has advanced (thanks in part to early OpenAI work), but capability races between labs (OpenAI, Anthropic, Google, xAI, Chinese players) dominate.
Musk did not stop the Terminator risk — no one has. But he helped mainstream the conversation. Existential safety is now debated at the highest levels, with some regulation emerging. His xAI bet represents a different philosophy: build powerful, truth-oriented AI quickly to counter potentially misaligned or censored systems from competitors.
The Deeper Irony
Musk's testimony reveals a consistent worldview: technology is neutral-to-good, but who controls it and under what incentives matters enormously. He co-founded OpenAI out of fear of Google creating an unsafe overlord. Now he fights OpenAI/Microsoft for the same reason — and builds his own lab.
Whether this courtroom drama forces governance changes at OpenAI (potentially derailing its IPO) or simply exposes the messy reality of frontier AI remains to be seen. The trial underscores a hard truth: once the genie is out of the bottle, stopping it entirely is nearly impossible. You can only try to steer it — or race alongside with better values.
Musk's Star Trek future is still possible. But the Terminator scenario hasn't been averted. The race is on, and the stakes remain civilisational. Elon saw the danger early, sounded the alarm, built alternatives, and is still fighting. History will judge whether it was enough.
https://www.wired.com/story/model-behavior-elon-musk-testifies-at-musk-v-altman-trial
