Fake AI job seekers are flooding US companies
It's getting harder for hiring managers to screen out bogus AI-generated applicants


The introduction of generative artificial intelligence has complicated the job-seeking and hiring process, causing confusion as the line between human beings and AI gets thinner. In the hands of bad actors, generative AI poses an emerging security threat to hiring companies, which now face a flood of fake job seekers.
Fake job applicants 'ramped up massively'
Companies have long had to defend themselves from hackers "hoping to exploit vulnerabilities in their software, employees or vendors," but now "another threat has emerged," said CNBC. Employers are being inundated with applicants "who aren't who they say they are," who are "wielding AI tools to fabricate photo IDs, generate employment histories and provide answers during interviews." The spike in fake AI-generated applicants means that by 2028, 1 in 4 job candidates globally will be bogus, according to research and advisory firm Gartner.
Gen AI has "blurred the line between what it is to be human and what it means to be machine," Vijay Balasubramaniyan, the CEO and co-founder of voice authentication startup Pindrop, said to CNBC. As a result, "individuals are using these fake identities and fake faces and fake voices to secure employment," sometimes going so far as "doing a face swap with another individual who shows up for the job." Hiring a fake job seeker can expose a company to ransomware attacks and the theft of trade secrets or funds.
Industry experts said that cybersecurity and cryptocurrency firms have recently seen a surge in fake job seekers. Because these companies often hire for remote roles, they are particularly alluring targets for bad actors. News of the issue surfaced a year ago, but the number of fraudulent job candidates has "ramped up massively" this year, said Ben Sesser, the CEO of BrightHire, to CNBC. Humans are "generally the weak link in cybersecurity," and since hiring is an "inherently human process," it has become a "weak point that folks are trying to expose."
The fake applicant phenomenon "isn't limited to cybersecurity jobs," said Inc. Last year, the Justice Department alleged that "over 300 U.S. companies had accidentally hired impostors to work remote IT-related jobs." The employees were actually tied to North Korea and sent millions in wages home, money the DOJ alleged "would be used to help fund the authoritarian nation's weapons program."
Hiring managers in the dark
The fake employee industry has expanded to include criminal groups in Russia, China, Malaysia and South Korea, said Roger Grimes, a computer security consultant, to CNBC. Sometimes they "do the role poorly," and other times "they perform it so well that I've actually had a few people tell me they were sorry they had to let them go."
Despite the DOJ case and a few other publicized incidents, hiring managers at most companies are generally unaware of the risks of fake job candidates, according to BrightHire's Sesser. Hiring managers are responsible for talent strategy and much else, but "being on the front lines of security has historically not been one of them," he said. "Folks think they're not experiencing it," but it is more likely that they are "just not realizing that it's going on."
Dawid Moczadlo, co-founder of cybersecurity startup Vidoc Security Lab, recently posted a video on LinkedIn of an interview with a deepfake AI job candidate, "which serves as a master class in potential red flags," Fortune said. The audio and video of the Zoom call didn't quite sync up, and the video quality also seemed off. When the person was moving and speaking, there was "different shading on his skin," and it "looked very glitchy, very strange," Moczadlo said to Fortune. "Before this happened, we just gave people the benefit of the doubt, that maybe their camera is broken," he said. But after the incident, "if they don't have their real camera on, we will just completely stop" the interview.
Theara Coleman has worked as a staff writer at The Week since September 2022. She frequently writes about technology, education, literature and general news. She was previously a contributing writer and assistant editor at Honeysuckle Magazine, where she covered racial politics and cannabis industry news.