How OpenAI went from an altruistic nonprofit to a typical Big Tech startup
Internal tensions over the company prioritizing money over safety might be symptoms of a bigger issue


The story of OpenAI's meteoric rise in the artificial intelligence space took an unexpected turn over a tumultuous weekend, ending with co-founder Sam Altman's sudden ouster. His departure, made possible by the company's unique governance structure, illuminated an internal struggle between the company's nonprofit roots and the push for more commercialization.
The board has been relatively vague about the decision to fire him, saying in Friday's announcement only that Altman was "not consistently candid in his communications with the board." Though employees and investors rallied to bring Altman back, the board instead hired former Twitch CEO Emmett Shear as interim CEO. Greg Brockman left his position as OpenAI's president in solidarity, and hundreds of employees threatened to quit unless the board reinstated Altman and Brockman and then resigned. Hours after the board confirmed Altman would not return, Microsoft, a major investor in OpenAI, announced that it would hire Altman and Brockman to head its new advanced AI research lab.
While it's unclear exactly why Altman was fired, some say the chaotic turn of events is a microcosm of a larger debate over whether to prioritize safety over commercialization in artificial intelligence.
Proof that OpenAI is no different from other Big Tech companies
At first, the founders set OpenAI up as a "true not-for-profit with the goal of advancing the introduction of a safe AGI," James Ball explained in his newsletter TechTris. While it originally had "no intention of focusing on the profit motive or on hefty returns from venture capitalists," that model did not last long. Still, at its core, OpenAI was an "attempt to build a big tech startup in which the founder/CEO didn’t wield unassailable power," Ball noted. The company is attempting to change the world by "building safe and revolutionary artificial intelligence models and in showing big tech companies can work differently to how they have so far," he added. "The jury is still out on the former, but the latter experiment is now looking very much like a failure."
Altman's departure showed an "organization that was meant to align superintelligent AI with humanity failing to align the values of even its own board members and leadership," Steven Levy wrote for Wired. Under Altman's leadership, the "profit-seeking component to the nonprofit project turned it into an AI powerhouse." The idea was that launching more products would "provide not only profits but also opportunities to learn how to better control and develop beneficial AI." With the board moving to fire the driving force behind that commercialization, "it's unclear whether the current leadership thinks that can be done without breaching the project's original promise to create AGI safely."
OpenAI's altruistic roots are 'unaligned' with its corporate interests
Altman's exit over the weekend was the "culmination of a power struggle between the company’s two ideological extremes," Karen Hao and Charlie Warzel wrote in The Atlantic. One side was "born from Silicon Valley techno-optimism, energized by rapid commercialization," while the other was "steeped in fears that AI represents an existential risk to humanity and must be controlled with extreme caution." While the two sides were able to coexist for years, that "tenuous equilibrium" broke with the release of ChatGPT and increased pressure for commercialization. This sent the company in opposite directions, "widening and worsening the already present ideological rifts," the pair added.
In the end, the tumultuous events of the weekend "showed just how few people have a say in the progression of what might be the most consequential technology of our age," Hao and Warzel noted. "AI’s future is being determined by an ideological fight between wealthy techno-optimists, zealous doomers and multibillion-dollar companies."
Despite setting out to resist handing the power of AI to big corporations, OpenAI's board members may have played right into that outcome. With Altman getting scooped up by Microsoft and many employees threatening to join him, "you can make the case that Microsoft just acquired OpenAI for $0 and zero risk of an antitrust lawsuit," Ben Thompson wrote for Stratechery. Microsoft already owns a perpetual license to all OpenAI intellectual property "short of artificial general intelligence," Thompson explained. OpenAI, an "entity committed by charter to the safe development of AI," essentially "handed off all of its work" to "one of the largest for-profit entities on earth," Thompson mused. "Or in an AI-relevant framing, the structure of OpenAI was ultimately misaligned with fulfilling its stated mission."
Theara Coleman has worked as a staff writer at The Week since September 2022. She frequently writes about technology, education, literature and general news. She was previously a contributing writer and assistant editor at Honeysuckle Magazine, where she covered racial politics and cannabis industry news.