
The Candidate Decision Journey: How AI Shapes Where People Apply

By Jordan Ellison

What Is the Candidate Decision Journey?

The candidate decision journey is a four-stage framework that models how candidates evaluate employers through AI: Discovery, Consideration, Evaluation, and Commitment. Each stage involves distinct query types, different information needs, and different competitive dynamics. Visibility gaps at any stage reduce the number of candidates who progress to the next, creating a compounding pipeline loss that most companies cannot detect with existing tools.

This framework is not theoretical. It is derived from analysis of real candidate-intent queries -- the kinds of questions people actually type into ChatGPT, Claude, and Gemini when they are deciding where to work. Understanding these stages gives talent leaders a structured way to diagnose where their AI employer visibility is strong, where it breaks down, and what the pipeline impact actually is.

The Four Stages

Stage 1: Discovery

What the candidate is doing: Exploring the market. They have not decided where to apply. They may not even know your company exists.

What they ask AI:

  • "Best companies to work for in healthcare IT"
  • "Top fintech startups hiring data engineers"
  • "Companies with strong remote engineering culture"
  • "Best employers for product managers in New York"

What AI does: Generates a shortlist. Typically 5-12 companies, named specifically, often with a one-sentence description of each. The list is drawn from the AI model's synthesis of review sites, company profiles, press coverage, salary databases, and technical content.

What matters at this stage: Being named at all. Discovery is binary -- you are on the list, or you are not. There is no partial credit. If AI generates a list of "top fintech companies for engineers" and you are not on it, no candidate who asked that question will ever go on to ask about you by name.

Example from a real scan: When asked "best fintech companies for backend engineers," one AI model named 8 companies. Two were household names (Stripe, Plaid). Three were well-known mid-market players. Three were companies with strong engineering blog presence and active profiles on Built In and Levels.fyi. A competitor in the same market, with 3,000 employees and competitive compensation, was not named -- because it had no engineering blog, no Built In profile, and minimal Levels.fyi data. That company was invisible at Discovery for every AI-researching candidate who asked this type of question.

The cost of a Discovery gap: It is total. A candidate who does not discover you at Stage 1 will never reach Stage 2, 3, or 4. There is no downstream recovery. Discovery visibility is the single most important stage for pipeline volume.
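
Because Discovery is binary, it is also straightforward to score once you have collected AI responses to discovery-style queries. The sketch below is a minimal illustration, not a prescribed methodology: the discovery_mention_rate helper and the stand-in responses are hypothetical, and the naive substring match would need hardening against abbreviations, legal suffixes, and similar company names in practice.

```python
# Minimal sketch: score Discovery visibility from collected AI responses.
# Assumes you have run discovery-style queries ("best companies for X")
# through an AI model and saved the response text.

def named_in_response(company: str, response: str) -> bool:
    # Discovery is binary: the company appears in the generated list or it does not.
    return company.lower() in response.lower()

def discovery_mention_rate(company: str, responses: list[str]) -> float:
    # Share of discovery responses that name the company at all.
    if not responses:
        return 0.0
    return sum(named_in_response(company, r) for r in responses) / len(responses)

# Stand-in responses to "best fintech companies for backend engineers":
responses = [
    "Strong picks: Stripe, Plaid, Ramp, and Brex, all known for engineering culture.",
    "Consider Stripe, Adyen, and Plaid -- each has a well-regarded backend team.",
]
print(discovery_mention_rate("Plaid", responses))      # 1.0: on the list both times
print(discovery_mention_rate("ExampleCo", responses))  # 0.0: invisible at Discovery
```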

Stage 2: Consideration

What the candidate is doing: They have a shortlist. They are gathering information about each company to narrow it down.

What they ask AI:

  • "What is it like to work at [Company]?"
  • "What does [Company] pay for senior product managers?"
  • "[Company] engineering culture"
  • "[Company] work-life balance"
  • "Is [Company] a good place to work?"

What AI does: Constructs a narrative. This is not a list -- it is a synthesized description of your company as an employer, drawn from every source the AI can access. The response typically covers culture, compensation, growth opportunities, leadership reputation, and notable strengths or weaknesses.

What matters at this stage: Accuracy, completeness, and favorability. The AI narrative becomes the candidate's mental model of your company. If AI describes your culture based primarily on three-year-old Blind posts and omits your recent engineering investments, that stale narrative is what the candidate carries into Stage 3.

Common failure modes at Consideration:

  • Stale narrative: AI describes the company based on outdated information. Root cause: no recent structured content for AI to synthesize.
  • Incomplete description: AI covers culture but has nothing on compensation or growth. Root cause: missing presence on salary and career-progression platforms.
  • Negative framing: AI leads with caveats or concerns. Root cause: negative signals (Blind threads, layoff press) dominate the citation ecosystem.
  • Generic description: AI gives a vague, undifferentiated summary. Root cause: insufficient distinctive content across the citation ecosystem.

The cost of a Consideration gap: Candidates quietly remove you from their shortlist. They do not tell you. They do not apply and then decline. They simply move on to the next company on the list. This loss is invisible in your ATS data.

Stage 3: Evaluation

What the candidate is doing: Comparing their top 2-3 options directly. They are making a decision.

What they ask AI:

  • "[Company A] vs [Company B] for software engineers"
  • "Should I join [Company A] or [Company B]?"
  • "Pros and cons of working at [Company]"
  • "[Company A] or [Company B] -- better for career growth?"

What AI does: Provides a structured comparison. AI typically presents each company's strengths and weaknesses side by side, then offers a recommendation or framing ("Company A is better for X, Company B is better for Y"). This is the stage where visibility displacement has the most direct impact -- if AI frames your competitor more favorably in a head-to-head comparison, the candidate receives a clear signal to choose the other option.

What matters at this stage: Competitive positioning. This is not about whether AI mentions you -- at Stage 3, the candidate already knows your name. It is about how AI frames you relative to a specific alternative. The sources AI draws from for comparison queries tend to be more granular: salary comparison data from Levels.fyi, engineering culture signals from tech blogs and conference talks, growth trajectory indicators from press coverage and funding news.

Visibility displacement at Evaluation: If AI consistently positions your competitor as the stronger option across key dimensions (compensation, engineering quality, career growth), candidates receive a synthesized "recommendation" to choose the competitor. This is not a review site rating -- it is a narrative that integrates multiple dimensions into a single recommendation. It carries more persuasive weight than any individual data source because it appears comprehensive and objective.

The cost of an Evaluation gap: You lose candidates who were actively considering you. These are high-intent candidates -- they were on the verge of applying. Losing them at Stage 3 is more expensive per candidate than losing them at Stage 1, because you have already "won" their attention through Discovery and Consideration.

Stage 4: Commitment

What the candidate is doing: They have decided to pursue your company. They are preparing to apply or have already applied. They want logistics and validation.

What they ask AI:

  • "How to get a job at [Company]"
  • "What is the interview process at [Company]?"
  • "[Company] interview tips for engineers"
  • "[Company] hiring timeline"
  • "What to expect in a [Company] onsite interview"

What AI does: Provides practical information about the application and interview process. AI draws from interview review sites (Glassdoor interview reviews, Blind), company careers content (if it exists in a format AI can parse), and general guidance.

What matters at this stage: Accuracy. A candidate who has decided to apply to your company is not evaluating whether to -- they are preparing how to. If AI gives them outdated information about your interview process, incorrect details about your hiring timeline, or generic advice that does not match your actual process, it creates friction and may cause disengagement.

Common failure modes at Commitment:

  • AI describes an interview process you changed two years ago
  • AI reports a hiring timeline that does not match your current cadence
  • AI provides no company-specific information at all, defaulting to generic advice
  • AI surfaces outdated role descriptions or team structures

The cost of a Commitment gap: Lower conversion from "decided to apply" to "completed application." Candidates who arrive poorly prepared due to inaccurate AI information may also perform worse in interviews -- not because they lack skill, but because they were given the wrong preparation signals.

The Pipeline Collapse: How Gaps Compound

Each stage acts as a conversion gate. Candidates who are not visible at one stage cannot progress to the next. The compounding effect is severe.

Here is what pipeline leakage looks like for a company with moderate gaps at each stage:

Stage                             Candidates remaining   Drop rate                        Candidates lost
Total AI-researching candidates   1,000                  --                               --
After Discovery                   400                    60% not mentioned                600
After Consideration               280                    30% drop due to weak narrative   120
After Evaluation                  140                    50% choose competitor            140
After Commitment                  112                    20% disengage                    28

Result: 11.2% of AI-researching candidates survive to application. The other 88.8% were lost at various stages -- and none of them appear in any ATS report, recruiter pipeline, or sourcing metric.

This is not a funnel visualization for a pitch deck. It is arithmetic. The specific percentages vary by company, industry, and role type, but the compounding structure does not. Small gaps at each stage produce large aggregate losses.
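
Since the table is plain arithmetic, the compounding is easy to reproduce. A minimal sketch, using the illustrative drop rates from the table above (they are not benchmarks):

```python
# Reproduce the pipeline table: each stage removes a fraction of the
# candidates who survived the previous stage, so losses compound
# multiplicatively rather than additively.

stages = [
    ("Discovery",     0.60),  # 60% never see the company mentioned
    ("Consideration", 0.30),  # 30% drop due to a weak narrative
    ("Evaluation",    0.50),  # 50% choose a competitor
    ("Commitment",    0.20),  # 20% disengage
]

remaining = 1_000  # AI-researching candidates entering the journey
for stage, drop_rate in stages:
    lost = round(remaining * drop_rate)
    remaining -= lost
    print(f"After {stage}: {remaining} remaining, {lost} lost")

print(f"Survive to application: {remaining / 1_000:.1%}")  # 11.2%
```

Swapping in measured drop rates for your own pipeline shows how sensitive the final survival rate is to a gap at any single stage.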

Where Most Companies Break Down

Based on assessments across multiple industries, the most common breakdown pattern is:

  1. Invisible at Discovery for non-branded queries. The company appears when candidates ask about it by name, but not when they ask "best companies for [role] in [industry]." This means the company is visible only to candidates who already know it exists -- and invisible to everyone else.

  2. Incomplete narrative at Consideration. AI has enough information to name the company, but not enough to construct a compelling description. The narrative is thin, generic, or based on a narrow set of sources (typically just Glassdoor reviews).

  3. Losing competitive comparisons at Evaluation. Competitors with broader citation ecosystem presence -- engineering blogs, Levels.fyi profiles, Built In presence, press coverage -- receive more favorable framing in head-to-head comparisons.

This pattern -- invisible at Discovery, present but weak at Consideration, and losing at Evaluation -- is the most common finding in AI employer visibility assessments. It suggests that most companies have enough presence to be known but not enough to be chosen.

Earned Visibility vs. Prompted Visibility

A useful distinction for understanding AI employer visibility at each stage:

Earned visibility is when AI names your company without the candidate asking about you specifically. This happens at the Discovery stage: the candidate asks "best companies for X" and AI includes you. Earned visibility is the hardest to achieve and the most valuable, because it means AI considers your company relevant enough to name unprompted.

Prompted visibility is when AI responds to a query that names your company specifically. This happens at Consideration, Evaluation, and Commitment: the candidate asks "what is it like to work at [your company]?" and AI provides a response. Prompted visibility is easier to achieve -- AI will attempt to answer any specific query -- but the quality and accuracy of the response varies enormously.

Many companies assume they have AI visibility because they can ask ChatGPT about themselves and get a response. That is prompted visibility. It does not mean they have earned visibility -- which is what drives pipeline volume from candidates who have not heard of them yet.

The distinction matters for strategy: improving earned visibility (Discovery stage) requires different actions than improving prompted visibility (Consideration, Evaluation, Commitment stages). Earned visibility depends on breadth and strength of presence across the citation ecosystem. Prompted visibility depends on accuracy and depth of information on specific platforms.
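
One way to make the distinction operational, sketched below under assumptions: scan results are stored as (query, response) text pairs, and a query that names the company is treated as branded (prompted visibility) while all other queries measure earned visibility. The visibility_split helper is hypothetical.

```python
# Sketch: separate earned from prompted visibility in a set of scan results.
# Assumed data shape: (query, response) text pairs from your own AI scans.

def visibility_split(company: str, results: list[tuple[str, str]]) -> dict:
    name = company.lower()
    earned_hits = earned_total = prompted_hits = prompted_total = 0
    for query, response in results:
        mentioned = name in response.lower()
        if name in query.lower():
            # Branded query: AI was prompted, so a mention is near-guaranteed;
            # what varies is the quality and accuracy of the narrative.
            prompted_total += 1
            prompted_hits += mentioned
        else:
            # Non-branded query: any mention here is earned visibility.
            earned_total += 1
            earned_hits += mentioned
    return {
        "earned_rate": earned_hits / earned_total if earned_total else None,
        "prompted_rate": prompted_hits / prompted_total if prompted_total else None,
    }
```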

Using This Framework

The candidate decision journey is not a marketing model. It is a diagnostic tool. For talent acquisition leaders, it provides a structured way to answer three questions:

Where are we visible? Map your AI mention rate by stage. If you have a 60% mention rate overall but 15% at Discovery, the overall number is masking a critical gap (a sketch of this per-stage computation follows these three questions).

Where are we losing? Identify the stages where candidates drop. If AI consistently positions a competitor more favorably at Evaluation, that is a specific, addressable problem -- not a vague "brand perception" issue.

What should we fix first? The framework provides a natural prioritization. Discovery gaps are the most damaging (total loss of unknown candidates). Evaluation gaps are the most commercially painful (losing candidates who were ready to choose you). Consideration gaps are the most common (incomplete narratives due to thin citation ecosystem presence).
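
To make the first question concrete, here is a minimal sketch of the per-stage mapping, assuming each scan result is tagged with the journey stage of its query; the helpers and data shape are illustrative, not a prescribed tool.

```python
from collections import defaultdict

# Sketch: mention rate by journey stage. The overall rate can hide a stage
# gap, e.g. 60% overall but 15% at Discovery.

def mention_rate_by_stage(results: list[tuple[str, bool]]) -> dict[str, float]:
    # results: (stage, mentioned) pairs from a scan across all four stages.
    hits, totals = defaultdict(int), defaultdict(int)
    for stage, mentioned in results:
        totals[stage] += 1
        hits[stage] += mentioned
    return {stage: hits[stage] / totals[stage] for stage in totals}

def overall_mention_rate(results: list[tuple[str, bool]]) -> float:
    return sum(mentioned for _, mentioned in results) / len(results)
```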

For each gap, the remediation maps to specific platforms and content types -- not to generic "employer brand investment." A Discovery gap requires presence on the platforms AI draws from when generating industry lists. An Evaluation gap requires structured comparison content (salary data, engineering culture signals, growth indicators) on the platforms AI uses for head-to-head queries.

This is measurable, stage-specific, and actionable. That is what makes it different from traditional employer brand metrics, which tend to measure sentiment without connecting it to candidate behavior at a specific decision point.


Antellion maps AI employer visibility to each stage of the candidate decision journey using structured assessments across 120+ candidate-intent queries. To understand where your company breaks down, visit antellion.com.