The Board Question Your CPO Cannot Answer: How Does AI Describe Us to Candidates?
By Jordan Ellison
The Question Boards Are Starting to Ask: "How Does AI Describe Us to Candidates Compared to Our Talent Competitors?"
Board prep for CPOs and CHROs is beginning to include a framing that did not exist 18 months ago: how does AI describe our company when candidates ask about working here, and how does that compare to the companies we lose talent to? It is not yet a standing item on every quarterly review, but it has shown up in enough preparatory documents this year to qualify as a leading indicator of where governance is heading.
Most CPOs are not ready to answer it. The existing employer brand scorecard -- careers page traffic, Glassdoor score, candidate Net Promoter Score, application volume -- does not measure the surface the question is asking about. The work that brand and recruitment marketing teams have been doing is real and serious, but the board question is about a different measurement layer, and that layer has not been formally instrumented at most companies.
Why This Is a Board Concern, Not an HR Initiative
The instinct is to file AI employer visibility under HR operations -- the same drawer as recruiter productivity tools and ATS upgrades. That filing misclassifies the exposure.
Board concern is driven by revenue exposure. Senior hires carry total-cost figures of one to three million dollars once ramp time, productivity gap, and replacement cost are included. C-level hires carry materially higher figures. The pipeline that produces those hires is increasingly mediated by AI -- candidates asking ChatGPT, Claude, Gemini, and Perplexity which companies are worth their time before they ever submit an application.
If your closest talent competitors are visible in those AI answers and you are not, you are losing pipeline you will never see in your applicant tracking system. The candidates who decide against your company before applying do not appear in your recruiter funnel. The leak is invisible to existing instrumentation -- which is exactly why it qualifies as a governance concern. The same logic made cybersecurity a board concern over the last decade: the risk started in IT, then escalated when financial exposure outpaced existing measurement. AI candidate-discovery is on the same trajectory, with a shorter runway.
A meaningful share of working professionals now consult ChatGPT, Claude, Gemini, or Perplexity during major career decisions. By the time your recruiter places the first call, the candidate has often already formed a synthesized impression of your company against your competitors. If competitors come back richer in that synthesis, the recruiter starts from behind -- and on the conversations that never happen, the company never knows there was a candidate to lose.
Why "We Have a Strong Employer Brand" Is Not the Answer
The first instinct in response is reassurance: we invest in employer brand, our careers page is strong, our Glassdoor scores are above category average. Those statements may all be true. They do not answer the question.
The employer brand work most companies are doing is real and serious. The brand, content, and recruitment marketing teams produce careers content, manage review platforms, run targeted campaigns, and refresh the candidate value proposition. That work is the raw material AI draws on when it synthesizes an answer. What is new is that no one has been formally measuring whether AI is finding that work, citing it, and synthesizing it accurately compared to what AI is finding for your competitors. Glassdoor scores measure Glassdoor. Careers page traffic measures direct visitors. Application conversion measures candidates who already decided to apply. None of those measure what AI says before a candidate ever visits your careers page.
A board member hearing "we have a strong employer brand" in response is hearing the wrong scoreboard.
The Four Questions a Board Should Be Able to Ask Their CPO Right Now
Each sub-question has a "good" answer and an insufficient one. The gap between them reveals whether the company has actually measured this or is reasoning by analogy from the existing employer brand scorecard.
Question 1: How does AI describe our company to candidates today?
Good answer. "AI describes us as [specific characterization] in [X] of [Y] candidate-intent queries. The narrative is rich on [specific dimensions] and thin on [specific dimensions]. Here is the captured evidence."
Insufficient answer. "Our Glassdoor score is 4.2 and careers page traffic is up 18% year over year." Real metrics. They do not describe what AI says.
Question 2: How do we compare to our top three talent competitors in AI's answers?
Good answer. "Side by side across [specific queries], our closest talent competitors are named in [X] of [Y] queries where we are named in [Z]. Their narrative includes [specific elements] that ours does not."
Insufficient answer. "We benchmark our Glassdoor scores against industry peers and remain in the top quartile." Glassdoor benchmarking measures Glassdoor. It does not measure how AI compares you to talent competitors -- which are often different from product competitors. The companies you lose finance leaders to may not be the ones you compete with on enterprise contracts. Both lists belong on the table.
Question 3: What sources is AI drawing from to describe us?
Good answer. "AI cites [specific named platforms] when describing us -- primarily Glassdoor, our LinkedIn page, and one industry trade publication. It does not currently cite [specific platforms] it cites for our top competitors -- third-party compensation databases, vertical employer profiles, and recent press."
Insufficient answer. "We monitor reviews and respond to feedback." Review monitoring is real work. It does not characterize the citation ecosystem AI synthesizes from -- which extends well beyond review sites to include professional community platforms, vertical employer profiles, industry publications, third-party compensation data, and press coverage.
Question 4: Who owns this measurement?
Good answer. "[Named function or external partner] owns the measurement on a defined cadence. They report to [named executive] on defined milestones."
Insufficient answer. "It falls under our broader employer brand program." "Falls under" is not an accountability statement -- it is the absence of one. The work can sit inside the existing employer brand function -- that is a reasonable home for it -- but only if it is named, scoped, owned, and budgeted as a distinct deliverable. Without that, Questions 1, 2, and 3 will be answered the same way at the next review.
What to Bring to the Next Board Prep
The actionable version of "be ready for this question" is a one-page artifact: each of the four questions, with current-quarter data, dated, and a named owner.
| Question | Current measurement | Owner | Next review |
|---|---|---|---|
| How does AI describe us today? | [characterization + coverage data] | [function or partner] | [date] |
| How do we compare to top three talent competitors? | [side-by-side findings] | [function or partner] | [date] |
| What sources is AI drawing from? | [citation list, gaps named] | [function or partner] | [date] |
| Who owns this measurement? | [function, cadence, reporting line] | [named executive] | [date] |
A CPO who walks into board prep with that table populated has answered the question. A CPO who walks in without it is making a brand-scoreboard argument against a measurement-scoreboard question, and the board will notice.
How the Diagnostic Fits
The first three questions require evidence -- captured AI responses, scored characterizations, side-by-side competitor data, named citation sources -- that most companies have not produced internally. The work is doable in-house, but it fits awkwardly into a brand or recruitment marketing team's existing scope: hundreds of captured AI responses, scored against published criteria, do not assemble themselves between campaign briefs.
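For teams attempting a first pass in-house, the core of the scoring is simpler than it sounds: capture AI responses to candidate-intent queries, then tally which companies each response names. The sketch below shows that tallying step only; the company names and response texts are hypothetical placeholders, and a real diagnostic would capture live responses from ChatGPT, Claude, Gemini, and Perplexity and score them against published criteria rather than a bare substring match.

```python
def mention_coverage(responses, companies):
    """Return {company: number of captured responses that name it}."""
    counts = {name: 0 for name in companies}
    for text in responses:
        lowered = text.lower()
        for name in companies:
            if name.lower() in lowered:
                counts[name] += 1
    return counts

# Hypothetical captured responses to one candidate-intent query, e.g.
# "Which companies are worth joining as a senior finance leader?"
captured = [
    "Acme Corp and Globex both stand out for finance leaders...",
    "Globex is frequently cited for compensation transparency...",
    "Consider Globex or Initech; both publish career ladders...",
]

coverage = mention_coverage(captured, ["Acme Corp", "Globex", "Initech"])
# Feeds the Question 2 answer: "named in [X] of [Y] queries where we
# are named in [Z]" -- here Globex appears in 3 of 3 responses while
# Acme Corp and Initech each appear in 1.
print(coverage)
```

A substring count is deliberately crude: it catches whether a company is named at all, not whether the characterization is rich or thin, which is why the analyst-scored version described below exists.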
The AI Visibility Diagnostic is built to produce that evidence as a board-circulable artifact. Forty candidate-intent queries across ChatGPT, Claude, Gemini, and Perplexity; three personas scoped to the job category that matters most to the business; three named talent competitors; scored by an analyst. The deliverable is a report plus an executive Findings Brief written to be circulated to a CEO, CFO, or board without further translation -- directly addressing Questions 1, 2, and 3.
The terms are scoped to what an executive can authorize without a procurement cycle: $4,900, 10 business days, full refund if it surfaces fewer than 10 material findings. If the Diagnostic warrants deeper work, the fee credits 100% toward a Baseline within 60 days.
This is additive to the existing employer brand program, not a replacement for it. The Diagnostic tells the teams already producing employer brand work which AI surfaces are picking it up, which are not, and which citation sources need attention next.
For a starting point at no cost, the 15-minute self-check produces a partial first-pass answer to Question 1 -- enough to decide whether the analyst-grade version is worth the budget line.
The Closing Reframe
The board question that is coming -- how does AI describe us to candidates compared to our talent competitors -- is not an HR-ops question. It is a candidate-driven revenue exposure question, and it will be asked in the same quarterly review where the CFO discusses pipeline and the CMO discusses brand. The CPOs who answer it well in the next 12 to 18 months are the ones who started measuring this quarter -- not the ones who waited until the question was on the agenda.