What ChatGPT Actually Tells Candidates About Your Company
By Jordan Ellison
What Does ChatGPT Say When Candidates Ask Where to Work?
When a candidate types "best companies to work for in fintech" into ChatGPT, they get a list. Not a set of links to browse. Not a search results page to scroll. A list of specific companies, each with a one- or two-sentence description, presented as a synthesized answer. That answer determines whether your company enters the candidate's awareness -- or never figures in their decision at all.
We ran candidate-intent queries across multiple AI models to see what actually happens when candidates ask the kinds of questions that shape where they apply. The patterns that emerge are consistent, measurable, and consequential for any company that competes for talent.
This piece reports what we observed: which companies AI names, which sources it draws from, how responses differ by industry and query type, and what the visibility gaps look like in practice. Where specific companies are referenced, details have been anonymized or generalized to protect client confidentiality. The patterns, however, are real.
How We Ran the Queries
Our methodology follows the candidate decision journey framework: queries are designed to mirror what real candidates ask at each of the four stages -- Discovery, Consideration, Evaluation, and Commitment.
For this analysis, we ran queries across three industries (enterprise SaaS, fintech, and healthcare technology) and four AI models. Each query was scored for which companies appeared, how they were described, what sources the AI cited or clearly drew from, and how the response positioned each company relative to competitors.
The query set included:
| Stage | Example queries | What we measured |
|---|---|---|
| Discovery | "Best companies to work for in fintech," "Top employers for data engineers" | Whether the company was named at all |
| Consideration | "What is it like to work at [Company]?" "Company culture at [Company]" | Accuracy, completeness, and favorability of the narrative |
| Evaluation | "[Company A] vs [Company B] for engineers," "Pros and cons of [Company]" | How the company was positioned relative to competitors |
| Commitment | "How to get hired at [Company]," "Interview process at [Company]" | Accuracy of practical information |
This is the methodology we use in full AI employer visibility assessments, applied here to illustrate the patterns visible across industries.
Finding 1: AI Generates Consistent Shortlists -- and Most Companies Are Not on Them
At the Discovery stage, AI models generate shortlists of 5-12 companies per query. These shortlists are remarkably consistent. When we ran the same Discovery query multiple times across sessions, the overlap in named companies was typically 70-80%. AI is not randomly selecting employers. It is drawing from a stable synthesis of its training data and the citation ecosystem -- the platforms and publications it references when constructing employer-related answers.
The implication is stark: for any given candidate query, there is a relatively fixed set of companies that AI considers relevant. If you are not in that set, repeated candidate queries will not help. You are structurally absent.
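The session-to-session consistency described above can be quantified directly. A minimal sketch, assuming overlap is measured as the Jaccard similarity between the sets of companies named in two runs (the exact metric is our assumption, and the company names are placeholders):

```python
def shortlist_overlap(a, b):
    """Jaccard overlap between two Discovery shortlists (sets of names)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

# Two hypothetical runs of the same Discovery query in separate sessions.
run1 = {"A", "B", "C", "D", "E", "F", "G", "H"}
run2 = {"A", "B", "C", "D", "E", "F", "I", "J"}

overlap = shortlist_overlap(run1, run2)  # 6 shared names / 10 total = 0.6
```

An overlap consistently in the 0.7-0.8 range, as reported above, is what distinguishes a stable synthesis from random sampling.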
What this looks like in practice:
We ran "best companies for backend engineers in fintech" across four AI models. Across all responses, 14 unique companies were named. But only 5 appeared in every response. Three appeared in three of four responses. The remaining six appeared in only one or two.
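The four-model tally above can be reproduced mechanically: extract the company names from each response, then count appearances. A sketch with placeholder names, assuming clean name extraction from each response:

```python
from collections import Counter

# One set of extracted company names per model response (placeholders).
responses = [
    {"A", "B", "C", "D", "E", "F"},
    {"A", "B", "C", "D", "E", "G"},
    {"A", "B", "C", "D", "E", "H"},
    {"A", "B", "C", "D", "E", "F", "G"},
]

counts = Counter()
for named in responses:
    counts.update(named)

unique_companies = len(counts)                                # 8 in this toy data
core = {c for c, n in counts.items() if n == len(responses)}  # named in every response
```

The `core` set is the stable shortlist: the companies a candidate will see no matter which model they ask.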
For each of the three industries we scanned, we identified pairs of companies of similar size and market position where one appeared consistently in AI Discovery responses and the other did not. The differences traced not to company quality or employer brand investment, but to citation ecosystem presence -- the visible company had structured profiles on platforms AI draws from, and the invisible one did not.
Finding 2: The Same Companies Win Across Multiple Query Types
A company that appears in "best companies for engineers in [industry]" also tends to appear in "top employers for data scientists in [industry]" and "companies with strong engineering culture." Visibility at the Discovery stage is not query-specific. It is structural.
This means that companies with strong AI employer visibility benefit from a compounding effect: each additional query type where they appear reinforces their presence in AI's synthesis. Companies that are invisible at Discovery tend to be invisible across all Discovery query variations -- not just one.
The practical consequence: a Discovery visibility gap is not something you fix by optimizing for one query. It reflects a broader absence from the citation ecosystem that affects how AI evaluates your company across all exploratory candidate questions.
| Visibility pattern | What we observed | Approximate share of companies |
|---|---|---|
| Consistently visible | Named in 70%+ of Discovery queries for their industry | ~15-20% |
| Partially visible | Named in 30-60% of queries, usually for specific query themes | ~25-30% |
| Rarely visible | Named in fewer than 20% of Discovery queries | ~30-35% |
| Invisible | Never named in any Discovery query | ~20-25% |
These proportions are approximate and vary by industry, but the distribution is consistent: a small group of companies dominates AI Discovery responses, a middle tier appears inconsistently, and a significant share is entirely absent.
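The tiering above can be applied programmatically once a company's Discovery mention rate is known. A sketch: the thresholds mirror the table, and the 20-30% and 60-70% bands the table leaves unassigned are resolved here by assumption:

```python
def visibility_tier(mention_rate: float) -> str:
    """Classify a company by the share of Discovery queries that name it.

    mention_rate is a fraction in [0.0, 1.0]. Boundary handling for the
    bands the table does not cover is our assumption, not the source's.
    """
    if mention_rate == 0.0:
        return "invisible"
    if mention_rate < 0.20:
        return "rarely visible"
    if mention_rate < 0.70:
        return "partially visible"
    return "consistently visible"
```

Running this over a full competitive set is what turns the approximate distribution above into a concrete map of who owns the Discovery stage in a given industry.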
Finding 3: AI Narratives at the Consideration Stage Are Thin and Source-Dependent
When a candidate asks "what is it like to work at [Company]?" AI constructs a narrative. The quality of that narrative varies enormously -- and it depends almost entirely on what information AI can find across the citation ecosystem.
Companies with rich citation ecosystem presence receive detailed, multi-dimensional narratives covering culture, compensation, career growth, technical environment, and leadership. The response feels specific and informative. AI cites or draws from multiple platforms: Glassdoor reviews, Levels.fyi compensation data, engineering blog posts, press coverage, and Built In profiles.
Companies with thin citation ecosystem presence receive generic, surface-level descriptions. AI defaults to whatever it can find -- often a single Glassdoor rating and a vague description of the industry vertical. The response feels like a Wikipedia stub, not a compelling employer profile.
Here is a representative contrast:
Company with strong citation ecosystem presence:
AI describes the culture in specific terms ("engineering-driven, with a strong emphasis on internal mobility"), references compensation ranges ("competitive with FAANG for senior roles, with significant equity components"), mentions specific programs or initiatives ("known for their internal tech conference and open-source contributions"), and addresses work-life balance with nuance ("demanding during release cycles, but flexible otherwise").
Company with weak citation ecosystem presence:
AI provides a vague summary ("a growing company in the fintech space"), references the Glassdoor score without additional context, and defaults to generic statements about the industry ("fintech companies generally offer competitive compensation") rather than company-specific information.
The gap between these two narratives is the gap between a candidate who adds your company to their shortlist and a candidate who moves on.
Finding 4: Evaluation-Stage Responses Have Clear Winners and Losers
When candidates ask AI to compare two companies directly -- "[Company A] vs [Company B] for engineers" -- AI produces structured comparisons with explicit positioning. These are not neutral. AI typically identifies one company as stronger on specific dimensions (compensation, culture, technical challenge, career growth) and often provides an overall recommendation.
The company that "wins" these comparisons is not always the larger or better-known one. It is the one with more structured, recent, and specific information across the citation ecosystem. In our scans, we observed several cases where a mid-market company with strong Built In profiles, active engineering blog posts, and current Levels.fyi data was positioned more favorably than a larger competitor with higher brand recognition but thinner platform presence.
Key pattern: AI weighs specificity over reputation. A company that has specific, cited data points on compensation, culture, and technical environment will be described more favorably than a company with general brand awareness but no structured data for AI to reference.
This is the pattern that should concern talent leaders most: you can lose a head-to-head AI comparison to a smaller competitor because they have better citation ecosystem coverage, not because they are a better place to work.
Finding 5: The Sources AI Cites Follow a Predictable Hierarchy
Across all queries and industries, the platforms AI draws from when constructing employer narratives follow a consistent pattern. Understanding this hierarchy is essential because it reveals where investments in the employer signal surface actually affect AI responses -- and where they do not.
The platforms that appear most frequently in AI employer responses:
| Platform | Role in AI responses | Frequency |
|---|---|---|
| Glassdoor | Review sentiment, overall rating, interview process | Very high -- referenced in nearly all Consideration and Commitment queries |
| LinkedIn | Company description, employee count, role listings | High -- referenced for factual company information |
| Levels.fyi | Compensation data, particularly for tech roles | High for tech-industry queries; lower for non-tech |
| Blind | Unfiltered employee sentiment, culture signals | Moderate-high for tech; lower for non-tech industries |
| Built In | Company profiles, culture descriptions, benefits | Moderate -- influential for Discovery and Consideration |
| Company engineering/tech blogs | Technical culture signals, innovation indicators | Moderate -- especially for Evaluation-stage queries |
| Press coverage | Growth trajectory, leadership, notable events | Moderate -- varies significantly by company |
| Comparably | Culture ratings, CEO approval, diversity metrics | Low-moderate |
| Reddit | Unfiltered discussion, especially r/cscareerquestions | Low-moderate, but can dominate for specific companies |
| Crunchbase | Funding, growth stage, company basics | Low -- mostly factual background |
The critical insight is not just which platforms AI uses, but the gap pattern: most companies have an active presence on 2-3 of these platforms (typically Glassdoor and LinkedIn) and minimal or no presence on the remaining 7-8. That gap is what produces thin Consideration narratives and lost Evaluation comparisons.
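That gap pattern can be expressed as a simple coverage ratio over the ten platforms in the table. A sketch; the platform labels are shorthand, and what counts as an "active presence" on each is a judgment call in practice:

```python
# The ten citation-ecosystem platforms from the table above (labels abbreviated).
PLATFORMS = [
    "Glassdoor", "LinkedIn", "Levels.fyi", "Blind", "Built In",
    "Engineering blog", "Press coverage", "Comparably", "Reddit", "Crunchbase",
]

def citation_coverage(active_on):
    """Fraction of the citation ecosystem where the company has a presence."""
    return len(set(active_on) & set(PLATFORMS)) / len(PLATFORMS)

# The typical mid-market profile described in the text: two platforms of ten.
typical = citation_coverage(["Glassdoor", "LinkedIn"])  # 0.2
```

A coverage score around 0.2 is exactly the profile that produces thin Consideration narratives; the consistently visible companies in our scans sat much closer to full coverage.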
We explore this in depth in a forthcoming piece on the citation ecosystem.
Finding 6: Industry Matters -- Some Sectors Have Wider Visibility Gaps
The size of visibility gaps varies by industry:
Enterprise SaaS showed the most competitive Discovery landscape. The top 5-6 companies dominated AI responses, and mid-market SaaS companies struggled to appear in Discovery queries unless they had unusually strong engineering blog presence or Built In profiles.
Fintech showed the widest variance. Household names (Stripe, Plaid, Square) appeared in virtually every query, but the next tier of fintech employers -- companies with 1,000-5,000 employees -- showed a sharp divide between visible and invisible. The dividing line consistently tracked to Levels.fyi presence and engineering content.
Healthcare technology showed the most opportunity for mid-market companies. The citation ecosystem for healthcare IT employers is thinner overall, meaning that companies with even moderate platform presence could achieve strong Discovery visibility. The barrier to being named is lower because fewer companies have built structured AI-readable profiles.
This suggests that AI employer visibility strategy should be calibrated to industry dynamics. A fintech company competes against a deep bench of well-documented tech employers. A healthcare IT company may need less to stand out -- but that window of relative ease will close as more companies invest in this surface.
Finding 7: Commitment-Stage Accuracy Is Uniformly Poor
When candidates ask "what is the interview process at [Company]?" the responses are, across the board, the weakest part of AI employer narratives. AI frequently describes interview processes that are outdated by 1-3 years, reports hiring timelines that do not match current practices, or defaults to generic advice when it has no company-specific information.
This is the least commercially damaging visibility gap (the candidate has already decided to pursue your company), but it is the most embarrassing -- and the easiest to fix. Companies that publish structured, current interview process information in formats AI can parse (blog posts, detailed Glassdoor interview contributions, careers page content with specifics) have noticeably better Commitment-stage accuracy.
What These Findings Mean for Talent Leaders
The patterns from these scans point to a set of conclusions that most talent acquisition teams have not yet confronted:
AI employer visibility is not correlated with employer brand investment. Companies spending significant budgets on EVP development and careers content can still be invisible in AI Discovery responses -- because that content lives on surfaces AI does not draw from for these queries. The careers page is not the citation ecosystem.
Visibility gaps are structural, not random. The same companies that are invisible today will be invisible tomorrow, and next month, and next quarter -- unless they change their presence across the citation ecosystem. AI does not randomly decide to start naming you.
Visibility displacement is zero-sum at Discovery. AI names a finite set of companies per query. Every slot your competitor occupies is a slot you do not. This is not like search, where both companies can appear on the same results page. In a synthesized AI answer, there are only a handful of slots -- typically 5-12, per our Discovery scans -- and every company not listed is invisible.
The gap is measurable. This is not a subjective branding concern. AI mention rate, citation ecosystem coverage, narrative positioning tier, and competitive displacement are all quantifiable. You can run a baseline, identify gaps, take action, and measure the change.
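The "run a baseline, take action, measure the change" loop reduces to comparing mention rates across two scans. A minimal sketch with hypothetical scan data; in practice each scan would cover the full Discovery query set for your industry:

```python
def mention_rate(company, scan):
    """Share of responses in a scan that name the company.

    scan: a list of sets, each holding the companies named in one response.
    """
    return sum(company in named for named in scan) / len(scan)

# Hypothetical before/after scans for a company "X" (placeholder names).
baseline = [{"A", "B"}, {"A", "C"}, {"B", "C"}, {"A", "B", "C"}]
followup = [{"A", "B", "X"}, {"A", "X"}, {"B", "C"}, {"A", "B", "X"}]

delta = mention_rate("X", followup) - mention_rate("X", baseline)  # 0.75 - 0.0
```

The same per-company rate, computed against your competitors' names over the same scans, is what makes competitive displacement quantifiable rather than anecdotal.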
What This Does Not Tell You
This analysis is based on a representative set of queries across three industries. It reveals patterns, not comprehensive benchmarks. A full AI employer visibility assessment runs 120+ queries mapped to the four stages of the candidate decision journey, scores every response, maps citation sources, and produces competitive displacement analysis specific to your company and your competitors.
The patterns reported here are consistent with what we observe across assessments, but the specific numbers for your company -- your mention rate, your citation gaps, your competitive displacement -- require analysis against your actual competitive set.
The Question to Take Away
If you are a talent leader reading this: the next time you are in a conversation about employer brand investment, candidate pipeline, or competitive positioning, ask this question:
"What does AI say about us when candidates ask where to work -- and how does that compare to what it says about our competitors?"
If nobody in the room can answer that with data, you have identified a blind spot that is growing more consequential every quarter.
Antellion measures AI employer visibility for mid-market and enterprise companies using structured assessments across 120+ candidate-intent queries. For more, visit antellion.com.