I spent part of this weekend in a subreddit where senior engineers were trading stories from recent interviews. One of them described what had become a pattern. Two interviews, back to back. Both opened by asking if he knew how to use AI tools in his workflow. Both then handed him a whiteboard and watched him code unassisted. The same skills they expected him to use forty hours a week were suddenly forbidden for the next forty minutes. "It makes no sense," he wrote, "to force us to use these tools on the job, which actively erode our ability to code by hand, but simultaneously expect us to do technical assessments unassisted." Reply after reply said the same thing. They had seen it too.
The comment wasn't angry. It was exhausted. That distinction matters.
You're asking engineers if they know how to use AI tools. You expect it. Everyone does now. Then you're testing them without those tools.
Two interviews. Both asked about AI fluency. Both handed candidates whiteboards and watched them code unassisted. The same skills you expect them to use forty hours a week ... suddenly forbidden for the next forty minutes.
Here's what those interviews are actually testing. Not problem-solving ability. Not judgment about when to reach for a tool. Not the skill those engineers will use to contribute to your team. You are testing whether they can perform in a mode that your own process makes optional.
The engineer sees it. He uses AI on the job because you require it. Then he sits in your interview pretending that requirement doesn't exist. Performing a skill you've systematically made unnecessary.
The Gap Between Workflow and Assessment
I keep returning to a thread in which over five hundred experienced engineers, many in senior and staff roles, described what happens when organizations roll out AI tools by mandate rather than by changing the systems around them. The stories were a catalog of friction points. Governance without standards. Access without enablement. Speed without comprehension.
The hiring problem belongs in that catalog. You added the accelerant without changing how you evaluate the work.
A sorting algorithm written on a whiteboard tells you something about how a person thinks under artificial constraints. It tells you almost nothing about how they will perform when their actual job involves reviewing AI-generated service boundaries, debugging agent-orchestration failures, or deciding whether a generated refactor preserves the semantic intent of the original code. Those are the skills your team uses now. Those are the skills your interview is not designed to see.
The forty-hour week and the forty-minute assessment have diverged so completely that the interview no longer predicts job performance. It predicts something else entirely. Whether the candidate can tolerate theater. Whether they are willing to perform a skill you have quietly deprecated. Whether they have kept their manual coding sharp despite your own tooling making that maintenance unnecessary.
What the Double Standard Signals
Candidates read signals. They always have. The difference now is that the signal is contradictory and impossible to miss.
When you ask about AI fluency in screening, you signal that the tool matters. When you forbid the tool in assessment, you signal that you don't actually trust it. When you hire people who pass the whiteboard test, you signal that manual coding is the real standard. When you mandate AI usage on the job, you signal that manual coding is no longer sufficient.
The candidate sits in these contradictions and sees exactly what you're doing. Your organization has not thought through what it values. You are hiring for 2026 workflows with 2019 assessments. And you either don't see the mismatch or don't care enough to fix it.
The candidate isn't failing your process. Your process is failing the candidate.
Some companies have tried to bridge the gap by adding "AI pair programming" rounds where candidates work with the tool present. This is better than nothing, but it often reveals a different problem. The interviewers haven't actually defined what good AI-assisted work looks like. They know what bad manual code looks like. They don't know what good judgment looks like when evaluating generated output. So the round becomes another theater exercise. Can the candidate look comfortable using the tool? Can they talk through their process without saying anything that sounds alarming?
The evaluation criteria remain vague because the organization hasn't done the harder work. Defining standards for AI-assisted engineering is harder than defining standards for manual coding. It requires admitting that the definition of "good code" is in flux. It requires acknowledging that your senior engineers might not know either, not yet, not with confidence.
The Question Worth Asking
What would a repaired interview process look like? Not the complete answer. Just the first honest question.
Ask yourself which skills your team actually uses.
Be specific. When your engineers ship a feature this week, what percentage of the code was generated versus written by hand? When they review a pull request, what are they actually evaluating? Syntax? Semantic intent? Architectural fit? The quality of the AI prompt that produced the change? How do they know whether a generated refactor is safe?
If you can't answer these questions clearly, your interview process is not broken by accident. It is broken because you have not yet defined what good performance means in the new workflow.
You don't need better interview questions. You need clearer standards for the work itself.
The engineer in that thread knew something his interviewers didn't. He knew that the test he was taking had become a ritual without a purpose. He knew that the skills being evaluated were not the skills that would determine his success on the team. He knew, and he performed anyway, because that is what candidates do. They perform.
The question is whether the organizations doing the hiring know it too. Whether they have looked at their process and seen the drift. Whether they are willing to name the contradiction and build something more honest in its place.
The engineers aren't confused. They have been watching the gap widen between what companies say they value and what companies actually test. They have felt the absurdity of preparing for interviews by practicing skills their prospective employers have made obsolete.
They are waiting to see which organizations notice first.
