How AI might influence democracy in 2024
Threat from bots and deepfakes stalks key elections around the world next year

Google will restrict its artificial intelligence chatbot in the run-up to the US election next year, out of an "abundance of caution" amid growing fears of disinformation and threats to democracy.
The tech giant plans to label any AI-generated content on its platforms, including YouTube, and specify where political ads have used digitally altered material. "Like any emerging technology, AI presents new opportunities as well as challenges," the company said in a statement. "But we are also preparing for how it can change the misinformation landscape."
It came as former justice secretary Robert Buckland warned that the UK is not ready for a deepfake general election. The Tory MP is urging the government to do more to tackle what he sees as a "clear and present danger" to democracy, warning that realistic audio and video clips of politicians appearing to say things they never said could increasingly be used. "The future is here," he said. "It's happening."
How might AI influence elections?
Leaders and experts gathered at the UK's Bletchley Park in November for the world's first AI safety summit, with the UK, EU and US all setting wheels in motion for AI regulation and legislation. The UK's Government Office for Science released an accompanying report warning that generative AI could be used to mount "mass disinformation" by 2030. It could lead to the "erosion of trust in information", with "hyper-realistic bots" and "deepfakes" muddying the waters, said the report.
"Next year is being labelled the 'Year of Democracy'," said Marietje Schaake in the Financial Times, with key elections scheduled to take place in the UK, US, EU, India, Taiwan, Indonesia and potentially Ukraine. AI is "one of the wild cards that may well play a decisive role" in the votes, wrote Schaake, policy director at Stanford University's Cyber Policy Center.
Generative AI, "which makes synthetic texts, videos and voice messages easy to produce and difficult to distinguish from human-generated content, has been embraced by some political campaign teams". While much of generative AI's impact on elections is still being studied, "what is known does not reassure".
Truth "has long been a casualty of war and political campaigns", said journalist Helen Fitzwilliam in a piece for the Chatham House think tank, but now there is "a new weapon in the political disinformation arsenal". Generative AI tools can "in an instant clone a candidate's voice, create a fake film or churn out bogus narratives to undermine the opposition's messaging", wrote Fitzwilliam. "This is already happening in the US."
Taiwan's voters, who will choose the successor to President Tsai Ing-wen in January, are "expected to be the target of China's formidable army of about 100,000 hackers". About 75% of Taiwanese receive news and information through social media, so "the online sphere is a key battleground". AI can act as "a force multiplier, meaning the same number of trolls can wreak more havoc than in the past".
Days before the Slovakian election, fake audio recordings of Michal Šimečka, the leader of the Progressive Slovakia Party, were shared online, in which he was heard discussing plans to rig the ballot, said Politics Home's "The House" magazine. A similar fake audio clip of Labour leader Keir Starmer prompted Conservative MP Simon Clarke to brand generative AI "a new threat to democracy", said Tom Phillips, former editor of the fact-checking organisation Full Fact. Disinformation and hoaxes aren't new, but AI "lets you do it far quicker, far cheaper and at an unprecedented scale".
AI could also use automation to "dramatically increase the scale and potentially the effectiveness of behaviour manipulation and microtargeting techniques that political campaigns have used since the early 2000s", said political scientist Archon Fung and legal scholar Lawrence Lessig on The Conversation. Just as advertisers use browsing and social media history to target ads, an AI machine could pay attention to hundreds of millions of voters – individually.
What can be done?
"It would be possible to avoid AI election manipulation if candidates, campaigns and consultants all forswore the use of such political AI," said Fung and Lessig. "We believe that is unlikely." However, enhanced privacy protection would help, they wrote, as would election commissions.
Other possible steps to mitigate the threat include independent audits for bias, research into disinformation efforts and the study of elections that have taken place this year, including in Poland and Egypt, noted Schaake.
This month the EU reached a provisional deal on the Artificial Intelligence Act, agreeing to ensure that AI "respects fundamental rights and democracy". The EU's AI Act, due to be finalised before the European Parliament elections in June next year, would classify AI systems by level of risk and regulate depending on each category. The White House has also issued an executive order on secure and trustworthy AI and a blueprint for an AI Bill of Rights.
Ultimately, there are "reasons to believe AI is not about to wreck humanity's 2,500-year-old experiment with democracy", said The Economist. Although it is important to be mindful of the potential of AI to disrupt democracies, "panic is unwarranted".