Fake AI job seekers are flooding US companies
It's getting harder for hiring managers to screen out bogus AI-generated applicants


Generative artificial intelligence has complicated the job-seeking and hiring process, blurring the line between human beings and AI. In the hands of bad actors, it has become an emerging security threat for companies now facing a flood of fake job seekers.
Fake job applicants 'ramped up massively'
Companies have long had to defend themselves from hackers "hoping to exploit vulnerabilities in their software, employees or vendors," but now "another threat has emerged," said CNBC. Employers are being inundated by applicants "who aren't who they say they are," who are "wielding AI tools to fabricate photo IDs, generate employment histories and provide answers during interviews." The spike in fake AI-generated applicants means that by 2028, 1 in 4 job candidates globally will be bogus, according to research and advisory firm Gartner.
Gen AI has "blurred the line between what it is to be human and what it means to be machine," said Vijay Balasubramaniyan, CEO and co-founder of voice authentication startup Pindrop, to CNBC. As a result, "individuals are using these fake identities and fake faces and fake voices to secure employment," sometimes going so far as "doing a face swap with another individual who shows up for the job." Hiring a fake job seeker can put a company at risk of malware and ransom attacks and the theft of trade secrets or funds.
Industry experts said that cybersecurity and cryptocurrency firms have recently seen a surge in fake job seekers. Because these companies often hire for remote roles, they are particularly alluring targets for bad actors. News of the issue surfaced a year ago, but the number of fraudulent job candidates has "ramped up massively" this year, said Ben Sesser, the CEO of BrightHire, to CNBC. Humans are "generally the weak link in cybersecurity," and since hiring is an "inherently human process," it has become a "weak point that folks are trying to expose."
The fake applicants phenomenon "isn't limited to cybersecurity jobs," said Inc. Last year, the Justice Department alleged that "over 300 U.S. companies had accidentally hired impostors to work remote IT-related jobs." The employees were actually tied to North Korea, sending millions in wages home, which the DOJ alleged "would be used to help fund the authoritarian nation's weapons program."
Hiring managers in the dark
The fake employee industry has expanded to include criminal groups in Russia, China, Malaysia and South Korea, said Roger Grimes, a computer security consultant, to CNBC. Sometimes they will "do the role poorly," and other times "they perform it so well that I've actually had a few people tell me they were sorry they had to let them go."
Despite the DOJ case and a few other publicized incidents, hiring managers at most companies are generally unaware of the risks of fake job candidates, according to BrightHire's Sesser. They are responsible for talent strategy, but "being on the front lines of security has historically not been one of them," he said. "Folks think they're not experiencing it," but it is more likely that they are "just not realizing that it's going on."
Dawid Moczadlo, co-founder of cybersecurity startup Vidoc Security Lab, recently posted a video on LinkedIn of an interview with a deepfake AI job candidate, "which serves as a master class in potential red flags," Fortune said. The audio and video of the Zoom call didn't quite sync up, and the video quality also seemed off. When the person was moving and speaking, there was "different shading on his skin," and it "looked very glitchy, very strange," Moczadlo said to Fortune. "Before this happened, we just gave people the benefit of the doubt, that maybe their camera is broken," he said. But after the incident, "if they don't have their real camera on, we will just completely stop" the interview.
Theara Coleman has worked as a staff writer at The Week since September 2022. She frequently writes about technology, education, literature and general news. She was previously a contributing writer and assistant editor at Honeysuckle Magazine, where she covered racial politics and cannabis industry news.