AI-Driven Deepfakes Emerge as New Threat in Corporate Hiring Practices
The proliferation of generative AI tools has spawned a novel challenge for businesses: sophisticated fake job applicants. In a recent instance highlighting this emerging trend, cybersecurity firm Pindrop Security encountered a highly qualified candidate, seemingly ideal for a senior engineering position, who turned out to be an elaborate fraud built on deepfake software and other AI-powered deception tools.
The Case of “Ivan X”
When voice authentication startup Pindrop Security advertised a job opening, one applicant stood out among a large pool of candidates. The individual, presenting himself as Ivan, a Russian coder, had seemingly impeccable credentials for the senior engineering role. However, during a video interview conducted last month, Pindrop’s recruitment personnel noticed subtle discrepancies between Ivan’s facial expressions and his spoken responses.
According to Pindrop CEO and co-founder Vijay Balasubramaniyan, those discrepancies raised suspicion, and further scrutiny ultimately revealed “Ivan X” to be a scammer leveraging deepfake technology and generative AI in an attempt to fraudulently secure employment at the company. “Generative AI has blurred the distinction between human and machine,” Balasubramaniyan stated. “We are witnessing individuals employing these fabricated personas, counterfeit faces, and synthetic voices to gain employment, sometimes even resorting to face-swapping with another person for in-person appearances.”
Evolving Cybersecurity Risks
For years, companies have defended against cyberattacks from hackers seeking to exploit weaknesses in their digital infrastructure, personnel, or supply chain. A new vulnerability has now surfaced: job seekers who misrepresent their identities, utilizing AI tools to forge identification documents, fabricate employment histories, and generate convincing responses during interviews.
Gartner, a research and advisory firm, projects that the rise of AI-generated profiles will pose a significant challenge: by 2028, one in four job candidates globally could be fraudulent.
The potential ramifications of hiring a fake job seeker vary with the individual’s motives. Once employed, an impostor could introduce malware and extort ransom from the organization, or steal customer information, proprietary trade secrets, or company funds, Balasubramaniyan explained. In many cases, these deceitful employees are simply seeking to collect a paycheck under false pretenses, he noted.
Surge in Fraudulent Applications
Cybersecurity and cryptocurrency businesses have seen a notable increase in fake job applicants recently, industry sources told CNBC. Because these sectors hire heavily for remote positions, they are attractive targets for malicious actors, the experts said.
Ben Sesser, CEO of BrightHire, said he first became aware of the issue a year ago, and that the volume of fraudulent job candidates has increased dramatically this year. BrightHire helps more than 300 corporate clients across finance, technology, and healthcare evaluate prospective hires through video interviews.
“Humans often represent the weakest link in cybersecurity, and the hiring process, inherently human-centric with numerous interactions and personnel involved, has become an exploited vulnerability,” Sesser explained.
This issue extends beyond the technology sector. In May, the Justice Department alleged that over 300 U.S. companies, including a major national television network, a defense contractor, an automaker, and other Fortune 500 entities, inadvertently hired impostors with connections to North Korea for IT roles.
These individuals reportedly utilized stolen American identities to apply for remote positions and employed remote networks and other strategies to conceal their actual locations, according to the DOJ. They allegedly remitted millions of dollars in wages to North Korea, contributing to the nation’s weapons programs, the Justice Department stated.
This case, involving a network of alleged facilitators, including a U.S. citizen, exposed a fraction of what U.S. authorities characterize as a vast overseas network of thousands of IT professionals with North Korean ties. The DOJ has since filed additional cases related to North Korean IT workers.
The Expanding Landscape of Impersonation
Fake job seekers show no signs of abating, as Lili Infante, founder and CEO of CAT Labs, can attest. Her Florida-based startup, which operates at the intersection of cybersecurity and cryptocurrency, is a particularly enticing target for malicious actors.
“Each time we post a job advertisement, we receive approximately 100 applications from North Korean operatives,” Infante stated. “Their resumes are remarkably impressive, incorporating all relevant keywords for our search criteria.”
Infante explained that her company relies on identity verification services to filter out fraudulent candidates, a burgeoning industry sector that includes companies such as iDenfy, Jumio, and Socure.
The fake employee phenomenon has broadened beyond North Korean actors in recent years, now encompassing criminal organizations situated in Russia, China, Malaysia, and South Korea, according to Roger Grimes, a seasoned computer security consultant.
Ironically, some of these fraudulent employees could be considered high-performing individuals within most organizations, he observed.
“In some instances, their performance is substandard, but in others, they excel to such a degree that I’ve encountered individuals expressing regret at having to terminate their employment,” Grimes recounted.
His employer, the cybersecurity firm KnowBe4, disclosed in October that it had unknowingly hired a North Korean software engineer.
The individual used AI to alter a stock photograph and, paired with a valid but stolen U.S. identity, passed background checks and four video interviews, the company reported. The fraud came to light only after the firm noticed suspicious activity originating from his account.
Combating Deepfake Deception
Despite the DOJ case and a handful of other publicized incidents, hiring managers at most companies remain largely unaware of the risks posed by fake job candidates, according to BrightHire’s Sesser.
“While they are responsible for talent acquisition strategies and other critical functions, serving as frontline security personnel has not traditionally been within their purview,” he noted. “Many believe they are not experiencing this issue, when in reality, it is more likely they are simply unaware of its occurrence.”
As deepfake technology continues to advance in sophistication, the problem will become increasingly challenging to mitigate, Sesser cautioned.
As for “Ivan X,” Pindrop’s Balasubramaniyan said the startup used a new video authentication program it developed in-house to confirm that the applicant was a deepfake fraud.
While “Ivan” claimed to be situated in western Ukraine, his IP address indicated a location thousands of miles eastward, potentially a Russian military facility near the North Korean border, according to the company.
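Pindrop has not detailed how it traced that mismatch, but the underlying check, comparing an applicant’s claimed location against where their IP address geolocates, is simple to sketch. Below is a minimal illustration in Python, assuming the open-source geoip2 package and a local copy of MaxMind’s free GeoLite2-City database; the coordinates, the distance threshold, and the flag_location_mismatch helper are hypothetical, not Pindrop’s actual system.

```python
# Minimal sketch: flag an applicant whose IP geolocates far from their
# claimed location. Assumes the `geoip2` package (pip install geoip2)
# and a local GeoLite2-City.mmdb database from MaxMind. All names and
# the 500 km threshold are illustrative, not Pindrop's actual system.
import math

import geoip2.database
import geoip2.errors


def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    radius = 6371.0  # mean Earth radius, km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * radius * math.asin(math.sqrt(a))


def flag_location_mismatch(ip, claimed_lat, claimed_lon, max_km=500.0):
    """Return (is_suspicious, distance_km, resolved_country) for an IP."""
    with geoip2.database.Reader("GeoLite2-City.mmdb") as reader:
        resp = reader.city(ip)
    dist = haversine_km(claimed_lat, claimed_lon,
                        resp.location.latitude, resp.location.longitude)
    return dist > max_km, dist, resp.country.name


# Hypothetical check: an applicant claims to be in Lviv, western Ukraine
# (~49.84 N, 24.03 E). "203.0.113.7" is a documentation placeholder IP;
# substitute the address observed during the interview.
try:
    suspicious, km, country = flag_location_mismatch("203.0.113.7", 49.84, 24.03)
    if suspicious:
        print(f"IP resolves to {country}, ~{km:,.0f} km from claimed location")
except geoip2.errors.AddressNotFoundError:
    print("IP not found in the GeoLite2 database")
```

In practice a geographic mismatch would be only one signal among many, since VPNs and proxies can mask an applicant’s true origin, which is presumably why Pindrop pairs it with voice and video analysis.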
Pindrop, supported by Andreessen Horowitz and Citi Ventures, was established over a decade ago to detect fraud in voice interactions, but may soon shift its focus to video authentication. Their clientele includes major U.S. banks, insurers, and healthcare providers.
“We can no longer rely solely on our senses,” Balasubramaniyan concluded. “Without technological safeguards, we are less effective than random chance.”