OpenAI: the ChatGPT start-up now worth billions
The company co-founded by Elon Musk has secured investment from Microsoft as its artificial intelligence chatbot takes the world by storm

ChatGPT was heralded as the “world’s first truly useful chatbot” after launching in November last year.
Amid “breathless predictions” about the potential impact of the artificial intelligence bot, said The Times, social media was flooded with examples of ChatGPT’s capabilities, including coding, essay-writing and generating pop lyrics in the style of Shakespeare. The system’s creators, OpenAI, claimed it had attracted more than a million regular users within little more than a week of being released.
And now Microsoft is getting in on the action by pumping $10bn into OpenAI. The investment in the San Francisco-based start-up is Microsoft’s “biggest bet yet that artificial intelligence systems have the power to transform the tech giant’s business model and products”, said the Financial Times.
What is OpenAI?
OpenAI was founded in 2015 by investor, programmer and blogger Sam Altman and other high-profile tech entrepreneurs including Tesla boss Elon Musk and PayPal co-founder Peter Thiel.
Altman, who remains CEO of OpenAI, was previously president of Y Combinator (YC), a tech start-up accelerator that has backed major companies ranging from Airbnb and Dropbox to Reddit and Twitch. He also co-founded the location-sharing app Loopt in 2005.
The OpenAI bosses’ stated aim is “to ensure that artificial general intelligence (AGI) – by which we mean highly autonomous systems that outperform humans at most economically valuable work – benefits all of humanity”. OpenAI’s “primary fiduciary duty is to humanity”, they emphasise in the company charter.
This charter is “so sacred that employees’ pay is tied to how well they adhere to it”, said MIT Technology Review’s AI editor Karen Hao. Although “the purpose is not world domination”, she wrote, “AGI could be catastrophic without the careful guidance of a benevolent shepherd”.
OpenAI promotes itself as this shepherd and said the company was created as a non-profit in order to “build value for everyone rather than shareholders”. In a statement announcing the launch back in 2015, OpenAI also vowed to “freely collaborate with others across many institutions” and to “work with companies to research and deploy new technologies”.
Does OpenAI live up to its claims?
An investigation by MIT Technology Review uncovered “a misalignment between what the company publicly espouses and how it operates behind closed doors”, according to Hao. Former and current employees – many of whom reportedly “insisted on anonymity because they were not authorised to speak or feared retaliation” – were said to have portrayed a company “obsessed with maintaining secrecy, protecting its image, and retaining the loyalty of its employees”.
Even Musk has criticised OpenAI after quitting the board of directors in 2018, a decision that the company said was made to “eliminate potential future conflict” with the AI goals of Tesla.
In 2020, Musk tweeted that his confidence in OpenAI was “not high” when it came to safety. “OpenAI should be more open imo,” he wrote in response to MIT Technology Review’s investigation.
In a Twitter post shortly after the launch of ChatGPT, he wrote: “Need to understand more about governance structure & revenue plans going forward. OpenAI was started as open-source & non-profit. Neither are still true.”
Are there any other issues with ChatGPT?
Plenty, according to Gizmodo. The technology threatens to “kill the college essay and lead to other academic dysfunction”, “make human writers obsolete”, “generate factually inaccurate news articles (already happened)” and “cause a disinformation typhoon”. Concerns have also been raised that the easily accessible AI system could “democratise cybercrime” and help to “fuel easy malware creation”, said the site, as well as “get loads of people fired”.
OpenAI has faced further criticism after Time magazine reported that the company “used outsourced Kenyan labourers earning less than $2 per hour to make the chatbot less toxic”. Workers allegedly said they were left “mentally scarred” after sifting through graphic images and disturbing text from the dark web to help build a tool that tags problematic content.
After the contractor cancelled the deal early, OpenAI insisted that “we take the mental health of our employees and those of our contractors very seriously”.
But the Partnership on AI, a coalition of AI organisations to which OpenAI belongs, told Time that “despite the foundational role played by these data enrichment professionals, a growing body of research reveals the precarious working conditions these workers face”.
“This may be the result of efforts to hide AI’s dependence on this large labour force when celebrating the efficiency gains of technology,” the coalition said. “Out of sight is also out of mind.”