How to Check Your Own AI Employer Visibility in 15 Minutes
By Jordan Ellison
A Partial Version of This Audit Takes 15 Minutes. Most CHROs Have Not Run It.
You can run a credible first-pass check on your company's AI employer visibility today, in about 15 minutes, without buying anything. The full version is what an analyst-led AI Visibility Diagnostic does over 10 business days -- but the 15-minute version is enough to tell you whether you have a problem worth investigating.
Most heads of talent have not run it. The reason is rarely lack of curiosity -- it is that no one has spelled out the steps. So here they are.
You need:
- 15 minutes
- a browser
- free-tier access to ChatGPT, Claude, Gemini, and Perplexity
- two named talent competitors -- the companies you actually lose candidates to, not the ones in your industry analyst report
- a blank document or screenshot folder
Step 1: Pick 4-5 Query Categories Candidates Actually Use
Candidates are not searching "best places to work" in the abstract. They run specific queries shaped by their stage in the candidate decision journey. Pick 4-5 from this list and write each one out for your company:
- Discovery: "best [your sector] companies for [career stage]"
- Consideration: "what's it like to work at [your company]"
- Evaluation: "[your company] interview process" or "[your company] vs [competitor] for [role type]"
- Commitment: "[your company] salaries [role]"
Translate the placeholders to your sector. In healthcare it might be "best health systems for early-career nurses." In financial services, "best asset managers for mid-career investment professionals." In CPG, "best consumer brands for senior brand managers."
Five queries is enough. This is signal, not statistical significance.
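If you want to write the queries out systematically, the step above amounts to filling placeholders in a handful of templates. A minimal sketch, in Python -- the template wording and every company, sector, and role name below are illustrative placeholders, not prescribed values:

```python
# Expand candidate-journey query templates into concrete queries for one
# company. Templates and example values are illustrative placeholders.

TEMPLATES = {
    "discovery": "best {sector} companies for {career_stage}",
    "consideration": "what's it like to work at {company}",
    "evaluation": "{company} interview process",
    "comparison": "{company} vs {competitor} for {role_type}",
    "commitment": "{company} salaries {role}",
}

def build_queries(company, sector, career_stage, competitor, role_type, role):
    """Return the written-out queries for one company, keyed by journey stage."""
    values = dict(company=company, sector=sector, career_stage=career_stage,
                  competitor=competitor, role_type=role_type, role=role)
    return {stage: t.format(**values) for stage, t in TEMPLATES.items()}

queries = build_queries(
    company="Acme Health", sector="health system",
    career_stage="early-career nurses", competitor="Mercy Group",
    role_type="nursing", role="registered nurse",
)
for stage, q in queries.items():
    print(f"{stage}: {q}")
```

Swapping in a second company name later (Step 5) is then a one-line change rather than a retyping exercise.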
Step 2: Run Each Query Across All Four Major AI Systems
Run every query in ChatGPT, Claude, Gemini, and Perplexity. All four -- not just the one you personally use.
This is the step most self-checks skip, and the most important one. The four systems do not produce the same answer. They draw from different citation sources, weight them differently, and synthesize differently. A company named confidently in ChatGPT may be missing entirely in Perplexity. A company described positively in Claude may be described with a stale anchor in Gemini. The variance is the finding.
Step 3: Capture and Compare
Screenshot every response, or paste it into a document. For each one, note three things:
- Are you named? Yes / no. At Discovery this is the binary that matters most -- if AI generates a list of companies and you are not on it, you are absent from a moment of consideration that does not happen anywhere else.
- What sources does the AI cite? Most systems will name at least some, especially Perplexity. Note which platforms come up: Glassdoor, Indeed, Comparably, industry trade publications, subreddits, vertical employer profiles, press, third-party compensation databases.
- What is the narrative? "A growing company in the space" is thin. "Known for [specific cultural program] and [named compensation philosophy]" is rich. The difference is what AI has been able to find.
A company named in 5 of 5 queries with thin generic descriptions has a different problem than one named in 2 of 5 with rich specific ones. Both are real, with different remediation paths.
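If screenshots pile up faster than you can compare them, the three fields above can be kept as a simple structured log. A minimal sketch, assuming you record one entry per query-per-system -- the field names and the thin/rich labels are this sketch's conventions, not a standard:

```python
# A capture log for the self-check: one record per query-per-system,
# holding the three things Step 3 asks you to note.

from dataclasses import dataclass, field

@dataclass
class Capture:
    query: str
    system: str                 # "chatgpt", "claude", "gemini", "perplexity"
    named: bool                 # were you named at all?
    sources: list = field(default_factory=list)  # cited platforms, if any
    narrative: str = ""         # "thin" | "rich" | free-text notes

def summarize(captures):
    """Tally how often the company was named and which sources recur."""
    named = sum(c.named for c in captures)
    source_counts = {}
    for c in captures:
        for s in c.sources:
            source_counts[s] = source_counts.get(s, 0) + 1
    return {"named": named, "total": len(captures), "sources": source_counts}

log = [
    Capture("best health systems for early-career nurses", "chatgpt",
            named=True, sources=["Glassdoor"], narrative="thin"),
    Capture("best health systems for early-career nurses", "perplexity",
            named=False, sources=["Indeed", "Glassdoor"]),
]
print(summarize(log))
```

The summary makes the 5-of-5-but-thin versus 2-of-5-but-rich distinction visible at a glance.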
Step 4: Note the Citation Gaps
For each query, look at where AI is sourcing its information. Then ask: where is your company in those sources?
If AI cites Comparably for a competitor's culture and you have no Comparably profile, that is a citation gap. If AI references a competitor's recent press and your most recent coverage is from 2023, that is a recency gap. If AI cites third-party compensation databases for one company and describes another's pay as "competitive," the second has a compensation citation gap.
You will not catch every gap in 15 minutes. You will catch the obvious ones, and that is enough for the self-check.
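The obvious citation gaps are a set difference: the platforms AI cites for a competitor, minus the platforms where you actually have a presence. A minimal sketch with illustrative data:

```python
# Flag citation gaps: platforms AI cited for a competitor where you
# have no presence. Both sets below are illustrative examples.

competitor_citations = {"Glassdoor", "Comparably", "industry press"}
your_presence = {"Glassdoor", "Indeed"}

citation_gaps = competitor_citations - your_presence
print(sorted(citation_gaps))  # ['Comparably', 'industry press']
```

Recency and compensation gaps need a human eye, but presence gaps like these fall out mechanically.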
Step 5: Compare Against Two Competitors
Now run the same five queries against two competitors -- same systems, same questions, with their name substituted for yours.
Side by side, the gap reveals itself. A competitor may be named in 4 of 5 Discovery queries where you are named in 1. Both of you may be named, but their narrative is two paragraphs of specific cultural and compensation detail and yours is one sentence. AI may cite four sources for them and one for you. One competitor may turn out to be invisible too -- which is its own competitive insight.
Write down what you find. The pattern is the point.
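The side-by-side pattern is just a tally of named-in-query counts per company. A minimal sketch, with illustrative results:

```python
# Side-by-side tally: how often each company was named across the same
# five queries. The True/False results below are illustrative.

results = {
    "You":          [True, False, False, True, False],
    "Competitor A": [True, True, True, True, False],
    "Competitor B": [False, False, True, False, False],
}

for company, named in results.items():
    print(f"{company}: named in {sum(named)} of {len(named)} queries")
```

A one-line-per-company summary like this is usually enough to make the pattern undeniable in a meeting.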
What the 15-Minute Check Does NOT Capture
The self-check produces real findings. It also has structural limits that an analyst-led assessment does not.
It does not capture persona-specific journeys. A senior IC asks different questions than an early-career candidate. An executive runs different queries than a manager. The persona-specific version of a query often produces a meaningfully different answer, and that is where role-specific leakage to competitors shows up.
It mostly covers Discovery. DIY self-checks default to Discovery-style queries because they are easiest to phrase. The full candidate decision journey has four stages -- Discovery, Consideration, Evaluation, Commitment -- and visibility patterns differ across them.
It is shallow on volume. Five queries gives signal, not statistical confidence. A 40-query, 4-model, 3-persona scan produces 480 captured responses. The variance at that scale -- which queries you are missing from, which personas you are invisible to, which sentiment patterns are model-specific -- is not visible at 5.
It has no captured-evidence audit trail. When you take a concerning finding to your CMO, CEO, or board, the question that comes back is "how representative is this?" -- and screenshots cannot answer it. An analyst-grade Diagnostic captures every response, scores it against published criteria, and produces a deliverable you can circulate without further translation.
Use the self-check to decide whether the topic is worth more analyst time, not to decide what your remediation plan should be.
Where This Sits in Your Existing Employer Brand Program
AI employer visibility is a measurement layer above your existing employer brand work, not a replacement for it. The careers site, the candidate value proposition refresh, the recruitment marketing campaigns, the Glassdoor management -- those programs produce the inputs AI synthesizes. If the work has been strong, the citation ecosystem has something to draw from. If thin, AI will have less to work with and the narrative will reflect that.
What is new is not that the underlying work matters. It is that no one has been formally measuring whether AI is finding it, citing it, and synthesizing it accurately. The 15-minute self-check is the cheapest version of that measurement. The Diagnostic is the rigorous one.
What to Do With What You Find
If you are named confidently across all five queries, with rich narratives, citing diverse sources, and outperforming both competitors -- you do not have an immediate AI visibility problem. Re-run quarterly; presence today is not presence in 12 months.
If the gaps are obvious -- you are absent from Discovery, your narrative is thin, your competitors are clearly winning the synthesized comparison -- you have material to work with.
If the gaps look concerning enough that you would want to bring something to your CEO, CMO, or board, the structured version is the next step: 40 candidate-intent queries across all four AI systems, 3 candidate personas scoped to your job category, 3 named competitors, 480 captured responses, and an analyst-written report with at least 10 material findings, each backed by captured evidence. That is the AI Visibility Diagnostic -- $4,900, 10 business days, full refund if it surfaces fewer than 10 material findings, with the fee crediting 100% toward a deeper engagement within 60 days.
Run the 15-minute check first. The Diagnostic is more useful when you arrive with your own preliminary observations than when you arrive cold.
The self-check is best at reframing the question. Most heads of talent come into the topic asking "is AI a real channel yet?" and leave the 15-minute exercise asking "why does AI describe my closest competitor with this much specificity, and me with this little?" That second question is the one worth answering carefully.