How ChatGPT breathed new life into the internet search wars
The AI arms race is upon us
ChatGPT, the viral chatbot from artificial intelligence start-up OpenAI, has sparked a renewed battle for AI supremacy, in which tech giants Google and Microsoft are competing to use the technology to change how the world uses search engines. Here's everything you need to know:
How big of a deal is ChatGPT?
Since its debut in December, OpenAI's generative chatbot ChatGPT has become a global phenomenon. Just five days after its release, a reported 1 million people had signed up to try the chatbot, which can quickly and seamlessly generate polished content like essays, poetry, and fiction. By January, ChatGPT had 100 million monthly active users, making it "one of the fastest-growing software products in memory," The New York Times reports. For comparison, it took TikTok nine months to reach 100 million users and Instagram more than two years, per Reuters.
How has ChatGPT reignited the internet search wars?
The popularity of the software "has set off a feeding frenzy of investors trying to get in on the next wave of the AI boom," the Times writes. OpenAI, for instance, recently inked a $10 billion deal with Microsoft and will also partner with BuzzFeed, which plans to use OpenAI's technology to generate lists and quizzes. The announcement caused BuzzFeed's stock price to more than double.
Amid the growing frenzy around ChatGPT, other tech companies have begun announcing rival chatbots. Executives at Google declared a "code red" in response to OpenAI's software, fast-tracking the development of a raft of AI products to close the widening gap with emerging competitors. Shortly after, the company unveiled its own chatbot, Bard, and began offering select users a look at it. Like ChatGPT, Bard draws on information from the internet to generate textual responses to users' queries.
Then, in February, Microsoft announced that it would integrate the technology behind ChatGPT into Bing, its search engine, and other products; Google responded by announcing that it, too, would build generative AI into its own search capabilities.
"The internet search wars are back," says The Financial Times' Richard Waters. Generative AI has "opened the first new front in the battle for search dominance since Google fended off a concerted challenge from Microsoft's Bing more than a decade ago." And for Google in particular, this arms race could pose a serious threat to its core search business, which relies heavily on digital ads. "Google has a business model issue," Amr Awadallah, a former Google employee who now runs Vectara, an LLM-powered search platform, told the Times. "If Google gives you the perfect answer to each query, you won't click on any ads."
Why else has Google found itself at a disadvantage?
It is ironic that Google has been forced to play catch-up, especially since the tech company was "early to the advanced conversational AI game," CNBC says. In fact, CEO Sundar Pichai has pushed to reorient Google as an AI-first company since 2016.
In 2018, Google debuted Duplex, "a stunningly human-sounding AI service" programmed to mimic human verbal tics while placing automated calls to restaurants that don't take online reservations. While many were "legitimately awestruck" by the program, others were "a bit disturbed and unsettled," Forbes reports. Media outlets raised ethical concerns about a program intentionally deceiving the people who answered its calls; at the time, NYU professors Gary Marcus and Ernest Davis called it "somewhat creepy" in an op-ed for the Times. And sociologist and writer Zeynep Tufekci tweeted, "Silicon Valley is ethically lost, rudderless, and has not learned a thing."
Though Google had faced similar criticism over its Google Glass smart glasses, which debuted in 2012, "the Duplex debacle stung," Forbes notes. Rather than showcasing its new pivot toward AI under Pichai, Google "became a monument to Silicon Valley's gee-whiz cluelessness: cool technology tethered to a lack of human foresight." Two former company managers told Forbes that the negativity surrounding the Duplex launch was "one of many factors that contributed to an environment in which Google was slow to ship AI products." You might also remember LaMDA, or Google's Language Model for Dialogue Applications, which became embroiled in controversy after a company engineer claimed the program was sentient. His claims were widely rejected by AI researchers. (LaMDA is the language model that underpins Bard.)
Controversies in Google's AI division also contributed to its now-lagging approach. After employees protested the company's Pentagon contract for Project Maven, an effort to use AI to improve drone targeting, Google declined to renew the deal in 2018 and released an ethical guide for AI development called "AI Principles." Then, in late 2020 and early 2021, the company's Ethical AI co-leads, Timnit Gebru and Margaret Mitchell, were forced out after co-authoring a paper that criticized biases in the large language models underpinning Google's search technology. Jeff Dean, head of Google Research, later conceded that the AI unit took "a reputational hit" after the firings.
"It's very clear that Google was [once] on a path where it could have potentially dominated the kinds of conversations we're having now with ChatGPT," Mitchell told Forbes. However, she added that a series of "shortsighted" decisions put the company "in a place now where there's so much concern about any kind of pushback."
What are the ethical and legal implications of AI-integrated search engines?
Despite the viral popularity of ChatGPT, questions about the ethics of the powerful text generator remain, "especially since it is being taken to market at a breakneck speed," writes CNN analyst Oliver Darcy. "We are reliving the social media era," Beena Ammanath, leader of Trustworthy Tech Ethics at Deloitte and executive director of the Global Deloitte AI Institute, told Darcy. She warned that if serious precautions aren't taken, AI chatbots will cause "unintended consequences." Ammanath likened the rapid push toward AI integration to "building Jurassic Park, putting some danger signs on the fences, but leaving all the gates open." She pointed out that scientists have yet to solve bias in AI, and that the technology is also prone to presenting misinformation as fact.
"The challenge with new language models is they blend fact and fiction," Ammanath continued. "It spreads misinformation effectively. It cannot understand the content. So it can spout out completely logical-sounding content, but incorrect. And it delivers it with complete confidence." Case in point: Bard incorrectly answered a search prompt during a widely-shared promotional video as part of its launch. The flubbed response then caused a $100 billion drop in market value for Google's parent company, Alphabet, per Reuters. Company employees also criticized the incident, referring to it on an internal forum as "rushed," "botched," and "un-Googley."
With their recent tit-for-tat announcements, both Google and Microsoft show that they "understand well that AI technology has the power to reshape the world as we know it," Darcy says. But with so many kinks yet to be ironed out, he wonders: "Will they follow Silicon Valley's 'move fast and break things' maxim that has caused so much turmoil in the past?"
The looming issue of misinformation could also become a liability for Google and Microsoft as they change the way search results are presented, John Loeffler argues in an op-ed for TechRadar. By using AI to rewrite answers to queries, search engines "ultimately become the publishers of that content, even if they cite someone else's work as a source." In integrating AI tools and taking on the role of publisher, tech companies assume the legal responsibility that comes with potentially publishing misinformation. "The legal perils of being a publisher are as infinite as there are ways to libel someone or spread dangerous, unprotected speech," Loeffler writes, "so it's impossible to predict how damaging AI integration will be."
Theara Coleman has worked as a staff writer at The Week since September 2022. She frequently writes about technology, education, literature and general news. She was previously a contributing writer and assistant editor at Honeysuckle Magazine, where she covered racial politics and cannabis industry news.