When AI interviews AI: Why your hiring funnel is broken, and how to fix it

Highlights

  • AI screening tools may misrepresent real candidate skills.
  • Cheating in technical assessments has surged to alarming rates.
  • Redesigning hiring processes can reduce AI’s impact on evaluations.

An interview scene highlighting the contrast between impressive resumes and underwhelming in-person skills.

By Pratham Dugran

We posted a role. Resumes poured in. Our AI screening bot shortlisted the best. Those candidates cleared an online technical assessment with flying colours. Everything pointed to a strong hire.

Then came the in-person round.

One candidate, based in Bengaluru, had a resume that shone. Technical screening scores were outstanding. We flew the person to Delhi for an in-person assessment. Within the first thirty minutes, it became clear that the person sitting across the table bore little resemblance to the candidate on paper. The technical depth wasn’t there. The problem-solving instinct was missing. What we were looking at was the gap between a brilliant resume and an average candidate.
This wasn’t a one-off. Across the batch, every candidate who had sailed through our AI-powered funnel struggled in person. The pattern was unmistakable.

When we investigated, the picture became clear. The resume our bot had screened was also built using AI—tools specifically designed to crack applicant tracking systems. And the online assessment? The candidate had used real-time AI cheating tools like Interview Coder, Interview Solver, and Cluely AI to generate answers invisibly during the test. Our AI was interviewing their AI. Neither was evaluating the human.

This isn’t anecdotal. It’s systemic.

CodeSignal’s February 2026 data shows cheating on proctored technical assessments has more than doubled in a single year—from 16% to 35%. For entry-level roles, 40%. In Asia-Pacific, 48%.

The tools enabling this are no longer underground. Cluely AI, built by a Columbia University student who used his own tool to land offers from Amazon and Meta, raised $20.3 million from Andreessen Horowitz. Leetcode Wizard, Interview Solver, and dozens of similar tools operate as invisible overlays during video interviews and online tests—undetectable by proctoring software.

Meanwhile, Gem’s 2025 Recruiting Benchmarks (140 million applications analysed) shows the average recruiter now handles 2.7 times more applications than in 2021, conducts 42% more interviews per hire, and takes 24% longer to close a role. We’re spending more and hiring worse.

Why traditional screening has stopped working

There’s a concept in economics called signalling theory, proposed by Nobel laureate Michael Spence. The idea is simple: in a job market, candidates invest in costly signals—degrees, certifications, polished resumes—that are harder for weaker candidates to produce. This cost difference is what lets employers separate strong candidates from weak ones.

AI has collapsed that cost difference to zero. When any candidate can produce a flawless resume for ₹500 a month and pass a coding test with an invisible AI copilot, the signal stops separating anyone. Strong and weak candidates look identical on paper. Economists call this a pooling equilibrium—everyone pools together, and you can’t tell who’s who.
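Spence’s logic can be sketched in a few lines of toy code (an illustrative simulation with assumed numbers, not data from any of the studies cited here): when the signal is costly, only strong candidates carry it and employers can separate; when AI drives the cost of faking to zero, everyone carries it, and the hired pool is no better than the applicant base rate.

```python
def share_of_strong_hires(signal_cost, fake_threshold=0.1, n=50):
    """Toy signalling model (illustrative assumptions throughout).

    Strong candidates always produce the signal (the polished resume,
    the high test score). Weak candidates produce it only when faking
    is cheap, i.e. when signal_cost falls below fake_threshold. The
    employer hires everyone who shows the signal and this function
    returns the share of those hires who are genuinely strong."""
    candidates = [True] * n + [False] * n  # True = genuinely strong
    hires = [strong for strong in candidates
             if strong or signal_cost < fake_threshold]
    return sum(hires) / len(hires)

# Costly signal: only strong candidates hold it -> separating equilibrium.
print(share_of_strong_hires(signal_cost=1.0))  # 1.0
# Free signal (AI-generated): everyone holds it -> pooling equilibrium.
print(share_of_strong_hires(signal_cost=0.0))  # 0.5
```

With a 50/50 applicant pool, the free signal leaves the employer hiring at exactly the base rate of 0.5: the resume screen is doing no work at all.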

Worse, this triggers what George Akerlof described as a market for lemons. When buyers (employers) can’t distinguish quality, the best sellers (top talent) leave the market because they’re being valued the same as everyone else. What remains is a progressively weaker pool—exactly what many hiring managers are experiencing today.

Add to this Goodhart’s Law—“when a measure becomes a target, it ceases to be a good measure”—and you see the full cascade. Resume keywords became ATS targets, so they stopped measuring skill. Online test scores became AI-assist targets, so they stopped measuring competence. Each metric, the moment candidates learned to game it, lost its informational value.

India amplifies every one of these dynamics. We produce 1.5 million engineering graduates a year, but only 16-26% are employable by industry standards. Our $6.5 billion coaching culture—built on “cracking” exams—has fused with generative AI. HirePro’s analysis of 4.3 million assessments found 30-50% of entry-level candidates cheat during online job assessments. This isn’t a few bad actors. It’s system-wide information collapse.

Hiring the candidate, not the resume

The answer is not better AI detectors. That is an arms race you will always lose—the cheating tools evolve faster than the detection tools. The answer is to redesign your hiring architecture so that gaming becomes irrelevant.

Here are four shifts that work:

First, assess the process, not the product. Stop evaluating what candidates produce (code files, written answers) and start evaluating how they produce it. Live pair programming, system design with real-time follow-up probes, and “unexpected questions” that break rehearsed scripts. Genuine expertise handles novelty; AI-assisted performance falls apart under it.

Second, extend the evaluation window. A 45-minute interview is a prediction. A two-week paid trial project is an observation. Contract-to-hire models, hackathon-based hiring, and structured probation periods transform hiring from guessing into knowing. TCS’s National Qualifier Test, conducted across 6,800 centres, cut recruitment cycle time by 66% and classroom training by 80%—proof that better assessment design works at Indian scale.

Third, make AI literacy the test, not the threat. Instead of trying to catch candidates using AI, assess whether they can work effectively with it. Can they prompt well? Can they evaluate and debug AI-generated output? Can they do the work when the copilot is taken away? The question shifts from “Did they cheat?” to “Can they deliver?”

Fourth, build internal talent markets. The most reliable signal of capability is demonstrated performance inside your organisation. Internal talent marketplaces—where existing employees are matched to new roles based on what they’ve actually delivered—bypass the adversarial dynamics of external hiring entirely. Unilever, Schneider Electric, and Standard Chartered already operate these at scale.

The real question

That Bengaluru-to-Delhi flight cost us a few thousand rupees. The real cost was weeks of recruiter time, panel hours, and pipeline capacity wasted on a candidate whose qualifications existed only in AI-generated text.

The question for every CHRO and Talent Acquisition Leader today is not “How do we catch cheaters?” It’s “How do we design systems where cheating is structurally irrelevant?” When AI can produce any signal, the only reliable signal is observed performance under real conditions. The future of hiring belongs to those who build for that truth.

The author is an HR leader.
