How AI might influence democracy in 2024
Threat from bots and deepfakes stalks key elections around the world next year
Google will restrict its artificial intelligence chatbot in the run-up to the US election next year, out of an "abundance of caution" amid growing fears of disinformation and threats to democracy.
The tech giant plans to label any AI-generated content on its platforms, including YouTube, and specify where political ads have used digitally altered material. "Like any emerging technology, AI presents new opportunities as well as challenges," the company said in a statement. "But we are also preparing for how it can change the misinformation landscape."
The news came as former justice secretary Robert Buckland warned that the UK is not ready for a deepfake general election. The Tory MP is urging the government to do more to tackle what he sees as a "clear and present danger" to democracy, warning that realistic audio and video clips of politicians appearing to say things they never said could be used with increasing frequency. "The future is here," he said. "It's happening."
How might AI influence elections?
Leaders and experts gathered at the UK's Bletchley Park in November for the world's first AI safety summit, with the UK, EU and US all setting wheels in motion for AI regulation and legislation. The UK's Government Office for Science released an accompanying report warning that generative AI could be used to mount "mass disinformation" by 2030. It could lead to the "erosion of trust in information", with "hyper-realistic bots" and "deepfakes" muddying the waters, said the report.
"Next year is being labelled the 'Year of Democracy'," said Marietje Schaake in the Financial Times, with key elections scheduled to take place in the UK, US, EU, India, Taiwan, Indonesia and potentially Ukraine. AI is "one of the wild cards that may well play a decisive role" in the votes, wrote Schaake, policy director at Stanford University's Cyber Policy Center.
Generative AI, "which makes synthetic texts, videos and voice messages easy to produce and difficult to distinguish from human-generated content, has been embraced by some political campaign teams", she wrote. While much of generative AI's impact on elections is still being studied, "what is known does not reassure".
Truth "has long been a casualty of war and political campaigns", said journalist Helen Fitzwilliam in a piece for the Chatham House think tank, but now there is "a new weapon in the political disinformation arsenal". Generative AI tools can "in an instant clone a candidate's voice, create a fake film or churn out bogus narratives to undermine the opposition's messaging", wrote Fitzwilliam. "This is already happening in the US."
Taiwan's voters, who will choose the successor to President Tsai Ing-wen in January, are "expected to be the target of China's formidable army of about 100,000 hackers", Fitzwilliam wrote. About 75% of Taiwanese receive news and information through social media, so "the online sphere is a key battleground". AI can act as "a force multiplier, meaning the same number of trolls can wreak more havoc than in the past".
Days before the Slovakian election, fake audio recordings of Michal Šimečka, the leader of the Progressive Slovakia Party, were shared online, in which he was heard discussing plans to rig the ballot, said Politics Home's "The House" magazine. A similar occurrence with a fake audio clip of Labour leader Keir Starmer moved Conservative MP Simon Clarke to brand generative AI as "a new threat to democracy", said Tom Phillips, former editor of fact-checking organisation Full Fact. Although threats of disinformation and hoaxes aren't new, AI "lets you do it far quicker, far cheaper and at an unprecedented scale".
AI could also use automation to "dramatically increase the scale and potentially the effectiveness of behaviour manipulation and microtargeting techniques that political campaigns have used since the early 2000s", said political scientist Archon Fung and legal scholar Lawrence Lessig in The Conversation. Just as advertisers use browsing and social media history to target ads, an AI system could pay individual attention to each of hundreds of millions of voters.
What can be done?
"It would be possible to avoid AI election manipulation if candidates, campaigns and consultants all forswore the use of such political AI," said Fung and Lessig. "We believe that is unlikely." However, enhanced privacy protection would help, they wrote, as would election commissions.
Other possible steps to mitigate the threat include independent audits for bias, research into disinformation efforts and the study of elections that have taken place this year, noted Schaake, including in Poland and Egypt.
This month the EU reached a provisional deal on the Artificial Intelligence Act, agreeing to ensure that AI "respects fundamental rights and democracy". The EU's AI Act, due to be finalised before the European Parliament elections in June next year, would classify AI systems by level of risk and regulate depending on each category. The White House has also issued an executive order on secure and trustworthy AI and a blueprint for an AI Bill of Rights.
Ultimately, there are "reasons to believe AI is not about to wreck humanity's 2,500-year-old experiment with democracy", said The Economist. Although it is important to be mindful of the potential of AI to disrupt democracies, "panic is unwarranted".
Harriet Marsden is a senior staff writer and podcast panellist for The Week, covering world news and writing the weekly Global Digest newsletter. Before joining the site in 2023, she was a freelance journalist for seven years, working for The Guardian, The Times and The Independent among others, and regularly appearing on radio shows. In 2021, she was awarded the “journalist-at-large” fellowship by the Local Trust charity, and spent a year travelling independently to some of England’s most deprived areas to write about community activism. She has a master’s in international journalism from City University, and has also worked in Bolivia, Colombia and Spain.