How OpenAI went from an altruistic nonprofit to a typical Big Tech startup
Internal tensions over the company prioritizing money ahead of safety may be symptoms of a bigger issue
The story of OpenAI's meteoric rise in the artificial intelligence space took an unexpected turn over a tumultuous weekend, ending with co-founder Sam Altman's sudden ouster. His departure, made possible by the company's unique governance structure, illuminated an internal struggle between the company's nonprofit roots and the push for more commercialization.
The board was relatively vague about its decision to fire him, stating in Friday's announcement only that Altman was "not consistently candid in his communications with the board." Though employees and investors rallied to bring Altman back, the board instead hired former Twitch CEO Emmett Shear as interim CEO. Greg Brockman stepped down as OpenAI's president in solidarity, and hundreds of employees threatened to leave the company unless the board reinstated Altman and Brockman and then resigned. Hours after the board confirmed Altman would not return, Microsoft, a major investor in OpenAI, announced that it would hire Altman and Brockman to head a new advanced AI research lab.
While it's unclear exactly why Altman was fired, some say the chaotic turn of events is a microcosm of a larger debate over whether to prioritize safety over commercialization in artificial intelligence.
Proof that OpenAI is no different from other Big Tech companies
At first, the founders set OpenAI up as a "true not-for-profit with the goal of advancing the introduction of a safe AGI," James Ball explained in his newsletter TechTris. While it originally had "no intention of focusing on the profit motive or on hefty returns from venture capitalists," that model did not last long. Still, at its core, OpenAI was an "attempt to build a big tech startup in which the founder/CEO didn’t wield unassailable power," Ball noted. The company is attempting to change the world by "building safe and revolutionary artificial intelligence models and in showing big tech companies can work differently to how they have so far," he added. "The jury is still out on the former, but the latter experiment is now looking very much like a failure."
Altman's departure showed an "organization that was meant to align superintelligent AI with humanity failing to align the values of even its own board members and leadership," Steven Levy wrote for Wired. Under Altman's leadership, adding a "profit-seeking component to the nonprofit project turned it into an AI powerhouse." The idea was that launching more products would "provide not only profits but also opportunities to learn how to better control and develop beneficial AI." With the board moving to fire the driving force behind that commercialization, "it's unclear whether the current leadership thinks that can be done without breaching the project's original promise to create AGI safely."
OpenAI's altruistic roots are 'unaligned' with its corporate interests
Altman's exit over the weekend was the "culmination of a power struggle between the company's two ideological extremes," Karen Hao and Charlie Warzel wrote in The Atlantic. One side was "born from Silicon Valley techno-optimism, energized by rapid commercialization," while the other was "steeped in fears that AI represents an existential risk to humanity and must be controlled with extreme caution." While the two sides were able to coexist for years, that "tenuous equilibrium" broke with the release of ChatGPT and increased pressure for commercialization. This pulled the company in opposing directions, "widening and worsening the already present ideological rifts," the pair added.
In the end, the tumultuous events of the weekend "showed just how few people have a say in the progression of what might be the most consequential technology of our age," Hao and Warzel noted. "AI’s future is being determined by an ideological fight between wealthy techno-optimists, zealous doomers and multibillion-dollar companies."
Despite setting out to resist giving the power of AI to big corporations, OpenAI's board members may have played right into that outcome. With Altman getting scooped up by Microsoft and many employees threatening to join him, "you can make the case that Microsoft just acquired OpenAI for $0 and zero risk of an antitrust lawsuit," Ben Thompson wrote for Stratechery. Microsoft already owns a perpetual license to all OpenAI intellectual property "short of artificial general intelligence," Thompson explained. OpenAI, an "entity committed by charter to the safe development of AI," essentially "handed off all of its work" to "one of the largest for-profit entities on earth," he mused. "Or in an AI-relevant framing, the structure of OpenAI was ultimately misaligned with fulfilling its stated mission."
Theara Coleman has worked as a staff writer at The Week since September 2022. She frequently writes about technology, education, literature and general news. She was previously a contributing writer and assistant editor at Honeysuckle Magazine, where she covered racial politics and cannabis industry news.