How OpenAI went from an altruistic nonprofit to a typical Big Tech startup

Internal tensions over the company prioritizing money over safety might be symptoms of a bigger issue

[Image: Former OpenAI CEO Sam Altman speaks during the OpenAI DevDay event. Altman's determination to improve profits may have caused his unexpected ouster. Credit: Justin Sullivan / Getty Images]

The story of OpenAI's meteoric rise in the artificial intelligence space took an unexpected turn over a tumultuous weekend, ending with co-founder Sam Altman's sudden ouster. His departure, made possible by the company's unique governance structure, illuminated an internal struggle between the company's nonprofit roots and the push for more commercialization. 

The board has been relatively vague about the decision to fire him, stating in Friday's announcement only that Altman was "not consistently candid in his communications with the board." Though employees and investors rallied to bring Altman back, the board ultimately hired former Twitch CEO Emmett Shear as interim CEO instead. Greg Brockman left his position as OpenAI's president in solidarity, and hundreds of employees threatened to leave the company if the board did not reinstate Altman and Brockman and resign. Hours after the board confirmed Altman would not return, Microsoft, a major investor in OpenAI, announced that it would hire Altman and Brockman to head its new advanced AI research lab.

While it's unclear exactly why Altman was fired, some say the chaotic turn of events is a microcosm of a larger debate over whether to prioritize safety over commercialization in artificial intelligence. 


Proof that OpenAI is no different from other Big Tech companies

At first, the founders set OpenAI up as a "true not-for-profit with the goal of advancing the introduction of a safe AGI," James Ball explained in his newsletter TechTris. While it originally had "no intention of focusing on the profit motive or on hefty returns from venture capitalists," that model did not last long. Still, at its core, OpenAI was an "attempt to build a big tech startup in which the founder/CEO didn’t wield unassailable power," Ball noted. The company is attempting to change the world by "building safe and revolutionary artificial intelligence models and in showing big tech companies can work differently to how they have so far," he added. "The jury is still out on the former, but the latter experiment is now looking very much like a failure." 

Altman's departure showed an "organization that was meant to align superintelligent AI with humanity failing to align the values of even its own board members and leadership," Steven Levy wrote for Wired. Under Altman's leadership, grafting a "profit-seeking component" onto the nonprofit project turned it into an AI powerhouse. The idea was that launching more products would "provide not only profits but also opportunities to learn how to better control and develop beneficial AI." With the board moving to fire the driving force behind that commercialization, "it's unclear whether the current leadership thinks that can be done without breaching the project's original promise to create AGI safely."

OpenAI's altruistic roots are 'unaligned' with its corporate interests

Altman's exit over the weekend was the "culmination of a power struggle between the company's two ideological extremes," Karen Hao and Charlie Warzel wrote in The Atlantic. One side was "born from Silicon Valley techno-optimism, energized by rapid commercialization," while the other was "steeped in fears that AI represents an existential risk to humanity and must be controlled with extreme caution." The two sides coexisted for years, but that "tenuous equilibrium" broke with the release of ChatGPT and the increased pressure to commercialize. This pulled the company in opposite directions, "widening and worsening the already present ideological rifts," the pair added.

In the end, the tumultuous events of the weekend "showed just how few people have a say in the progression of what might be the most consequential technology of our age," Hao and Warzel noted. "AI’s future is being determined by an ideological fight between wealthy techno-optimists, zealous doomers and multibillion-dollar companies."

Despite setting out to resist giving the power of AI to big corporations, OpenAI's board members may have played right into that outcome. With Altman getting scooped up by Microsoft and many employees threatening to join him, "you can make the case that Microsoft just acquired OpenAI for $0 and zero risk of an antitrust lawsuit," Ben Thompson wrote for Stratechery. Microsoft already owns a perpetual license to all OpenAI intellectual property "short of artificial general intelligence," Thompson explained. OpenAI, an "entity committed by charter to the safe development of AI," essentially "handed off all of its work" to "one of the largest for-profit entities on earth," Thompson mused. "Or in an AI-relevant framing, the structure of OpenAI was ultimately misaligned with fulfilling its stated mission."
