2023: the year of the AI boom
This year, generative artificial intelligence overtook the metaverse to become the next big thing in tech
Generative artificial intelligence hit the scene in 2023 and quickly became the next big thing in tech. ChatGPT, an advanced chatbot created by OpenAI, a former nonprofit turned tech-industry unicorn, was at the center of the enthusiasm for AI. The company had a busy year, including sparking an AI arms race and the brief ouster of co-founder and CEO Sam Altman.
Here's a look at how generative AI took over the tech industry in 2023:
The start of the AI 'gold rush'
OpenAI wasn't expecting ChatGPT to be "much more than a passing curiosity among AI obsessives on Twitter," Charlie Warzel wrote for The Atlantic, but it surpassed expectations quickly. Within the first five days of its debut, 1 million users signed up. The advanced chatbot was supposed to be "the software equivalent of a concept car," Warzel added. "Instead, it became one of the most popular applications in the history of the internet." Other generative AI apps gained popularity, and ChatGPT's viral success fueled a swift pivot in Silicon Valley, signaling the beginning of an AI arms race. "The AI 'gold rush' is here," The Washington Post proclaimed at the start of the year.
Generative AI isn't limited to chatbots or text generators. The internet has been flooded with AI-generated portraits, music and videos. The realm of possibilities with the budding technology seems nearly limitless, grabbing the attention of investors. This year, over 1 in 4 dollars invested in American startups went to an AI-related company, per data from Crunchbase. The AI gold rush also helped make Nvidia, which creates microchips needed to run AI, a trillion-dollar company.
With AI advancing rapidly, big tech companies had to move swiftly to capitalize on the momentum. After falling behind in recent years, Microsoft made a deal with OpenAI that "allowed the computer giant to leap over such rivals as Google and Amazon," said The New Yorker. After investing more than $3 billion since 2019, Microsoft reached another $10 billion deal with OpenAI in January. Over the last year, the company has integrated ChatGPT into its search engine, Bing, and released a fleet of AI chatbots called Office Copilots for its other products. Google executives declared a "code red" in response to ChatGPT and started fast-tracking their own AI projects. This led to a less-than-stellar debut of Google's chatbot Bard, which the company admitted wasn't ready for public release. Elon Musk, who also helped found OpenAI, introduced Grok, which he described as an AI chatbot with a "rebellious streak." Meta seemingly abandoned the metaverse and released its own chatbots on Instagram and Facebook in an attempt to court Gen Z users.
Experts sound the alarm about 'societal-scale risks'
While the AI arms race forged ahead rapidly with little to no guardrails, it wasn't long before the excitement turned to fear. People began to wonder whether these advanced apps would someday steal jobs and make human employees obsolete. ChatGPT prompted musings about the death of the high school English class. Creatives started pushing back against AI companies using their work to train their models without permission. Several authors banded together to file lawsuits against Google and OpenAI, accusing them of using a trove of pirated books to train their large language models. Musicians pushed back against AI-generated impersonations. AI even played a significant role in this year's Hollywood writers' strike.
Experts also warned that the lack of regulation and the swift integration of generative AI everywhere could threaten humanity. In the more immediate sense, people worry that the underdeveloped technology is prone to "hallucinating," or presenting false information as fact. In the wrong hands, AI could also help spread disinformation, which many see as a threat to democracy. There is also an undercurrent of discrimination and bias in the technology that has made some civil rights activists wary of it.
Some of the sternest warnings came from some of the industry's most prominent players. In March, about a thousand AI industry leaders, computer scientists and tech industry VIPs signed an open letter warning that AI was moving too fast, with too few regulations. The group included Elon Musk, Apple co-founder Steve Wozniak, AI pioneer Yoshua Bengio and Stability AI CEO Emad Mostaque. They called for companies to "immediately pause for at least six months the training of AI systems more powerful than GPT-4," or else "governments should step in and institute a moratorium."
A few months later, Geoffrey Hinton, known as the "godfather of AI" for his pioneering work on neural networks, retired from his position at Google to join the growing chorus of experts warning about the risks AI could pose to humanity. "It is hard to see how you can prevent the bad actors from using it for bad things," Hinton told The New York Times. He signed another one-line open letter released by the Center for AI Safety, a nonprofit organization, which warned that the "risk of extinction from AI" was on par with other "societal-scale risks, such as pandemics and nuclear war."
Politicians worldwide began taking steps to create regulations to help mitigate AI's risks. After months of closed-door meetings, President Biden unveiled an executive order to develop guidelines for safely working with AI. In November, 28 countries gathered in the United Kingdom for a two-day AI summit held by U.K. Prime Minister Rishi Sunak. Still, with the technology spreading so rapidly, regulators have struggled to keep up.
Theara Coleman has worked as a staff writer at The Week since September 2022. She frequently writes about technology, education, literature and general news. She was previously a contributing writer and assistant editor at Honeysuckle Magazine, where she covered racial politics and cannabis industry news.
