AI in the applicant pool: How to overcome this new hiring risk

From AI-generated personas to real individuals faking credentials, fraudulent candidates are increasingly slipping through the cracks, particularly in engineering and remote roles. Gartner predicts that by 2028, one in four job applicants globally will be fake, driven largely by AI-generated profiles.
The risks range from wasted time to genuine cybersecurity threats. This trend highlights the importance of a balanced approach to AI, one that welcomes candidates who leverage AI to enhance efficiency while also implementing safeguards to detect and prevent misuse.
At Insight, we’ve heard about a number of incidents across our portfolio and the software industry, and our Insight Onsite team of 130+ professionals is actively advising on how to adapt to this new hiring risk.
“We’ve always dealt with padded resumes, but AI and globally distributed hiring have supercharged the problem. What’s different now is the scale. Fraudulent candidates enter faster, look more credible, and waste real team time before red flags show up,” says R&D expert Rachel Weston Rowell.
Inside the new AI hiring threat landscape
As AI tools become more accessible and sophisticated, some job seekers are turning to automation not just to enhance their applications but to deceive hiring systems. This includes everything from AI-generated resumes and deepfake video responses to real-time coaching during interviews and full proxy participation.
These tactics are especially prevalent in engineering and remote-first roles, where technical screening is often conducted virtually and identity verification can be looser by design. Across the Insight portfolio, we’ve observed an increase in these tactics, revealing just how quickly hiring fraud is evolving.
“AI-enabled fraud is a technical, people, and strategic challenge,” explains talent expert Bryan Powell. “Technically, strong screening tools, especially for Zoom interviews, are key to spotting red flags. On the people side, the risk lies in candidates who misrepresent themselves. Strategically, bad actors may aim to steal IP or harm customer trust and future revenue. Combating this threat requires a holistic approach.”
Recent examples from across the portfolio include:
- AI use in live interviews: A candidate was flagged for long pauses during technical questions. When asked to turn on their camera, they refused. After further questioning, they admitted to using AI tools to generate responses in real time.
- Fabricated experience: A candidate received a verbal offer, but a final technical conversation with the CTO exposed significant skill gaps. Despite a strong resume and early screening success, the candidate had misrepresented their expertise.
- Video assessments: A candidate completed a recorded technical assessment via a third-party tool. During the onsite interview, the interviewers noted the individual appeared visibly different from the recording, raising questions around possible impersonation.
- Proxy interviewing: In a recent case, a candidate performed exceptionally well during virtual technical interviews. However, once hired, it became evident that they lacked the skills they had demonstrated. It was later discovered that someone else had completed the interview on their behalf.
It’s a bigger issue than just making a bad hire; it’s a potential security risk. At RSA 2025, security leaders spoke openly about deepfake infiltration, where real-time fake identities were used to gain access to company systems, often through remote engineering or IT roles.
Fast-growing software companies often operate with lighter controls, making them prime targets. That’s why understanding the stakes isn’t optional; it’s essential.
Using AI to code faster, organize smarter, or communicate more effectively is legitimate and encouraged. But using AI or another person to misrepresent skills or identity crosses a clear ethical line.
That line matters, especially in high-velocity environments where engineers aren’t just writing code; they’re accessing critical data, making architectural decisions, scaling infrastructure, and safeguarding intellectual property. Integrity is a non-negotiable.
How high-performing teams are adapting
Top companies aren’t just reacting to the rise of AI-led fraud — they’re proactively redesigning their hiring playbooks to account for it. These teams recognize that while AI creates efficiency, it also introduces new vulnerabilities, especially in remote and technical hiring environments where traditional verification methods fall short.
“It’s always better to be on offense and implement robust interview processes and screening technologies that can prevent potential breaches with bad actors,” says Powell.
- Educate hiring teams: Train interviewers and recruiters to spot red flags, and be transparent with candidates about your verification processes.
- Use structured interviews: Assign multiple interviewers, repeat key questions, and capture screenshots as needed. For later stages, consider in-person interviews.
- Revamp assessments: Use AI-powered assessment platforms with embedded proctoring to verify candidate identity and detect misconduct (e.g., AI-generated code or external assistance).
- Evaluate advanced sourcing platforms: There are tools to help you detect AI-generated profiles and synthetic identities. These platforms analyze patterns and metadata to flag suspicious candidates early in the process.
- Consider identity verification solutions: Incorporate identity checks throughout the hiring process. Start by reaching out to your ATS provider to understand existing integrations before evaluating standalone tools.
If this trend alert resonates, our Insight Onsite team has experts who can help navigate and mitigate risk. Reach out to your board member for support and next steps.