What Search Console misses
Google Search Console catches AI Overview impressions when Google surfaces them on the SERP. That is roughly 10% of the AI-citation surface in 2026. The other 90% — ChatGPT direct answers, Perplexity source citations, Claude in-answer quotes, Gemini, Copilot, Meta AI — is invisible to Search Console.
If GSC is your only measurement, you are blind to roughly 90% of the surface where buyers actually research.
The five metrics that matter
We track these on every Scale and Enterprise engagement:
Visibility — what percentage of tracked prompts return your brand at all, across LLMs. The Gofaizen & Sherle case hit 33.3% from a 0% baseline in 90 days.
Share of voice — of all AI mentions among the top-5 brands in the niche, what share is yours. G&S reached 9.6% — the largest slice in a niche of six entrenched competitors.
Sentiment — how positively the AI frames you when it mentions you, scored 0–100. We score this from the actual quotes returned.
Average position — when AI lists multiple brands, where do you sit. G&S reached #1.8 average — first or second mention.
Citation count — raw count of verified placements per week. This is the metric that drives the Performance pricing tier.
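A minimal sketch of how three of these metrics fall out of a weekly snapshot. The data structure, brand names, and prompts are all hypothetical — the point is that visibility, share of voice, and average position are straightforward arithmetic over "which brands did the LLM mention, in what order" per tracked prompt.

```python
from statistics import mean

# Hypothetical weekly snapshot: prompt -> ordered list of brands the LLM
# mentioned in its answer. Illustrative data only, not client numbers.
snapshot = {
    "best crypto licensing firm": ["CompetitorA", "OurBrand", "CompetitorB"],
    "how to get a VASP license":  ["CompetitorA", "CompetitorC"],
    "crypto license consultants": ["OurBrand", "CompetitorB"],
}

BRAND = "OurBrand"

def visibility(snapshot, brand):
    """Percentage of tracked prompts where the brand appears at all."""
    hits = sum(brand in brands for brands in snapshot.values())
    return 100 * hits / len(snapshot)

def share_of_voice(snapshot, brand):
    """Brand's percentage of all brand mentions across tracked prompts."""
    all_mentions = [b for brands in snapshot.values() for b in brands]
    return 100 * all_mentions.count(brand) / len(all_mentions)

def average_position(snapshot, brand):
    """Mean 1-based rank on the prompts where the brand is listed."""
    ranks = [brands.index(brand) + 1
             for brands in snapshot.values() if brand in brands]
    return mean(ranks) if ranks else None
```

Sentiment and citation count need the returned quotes and a verification ledger respectively, so they do not reduce to a one-liner — but they hang off the same per-prompt snapshot.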
The tooling stack
Searchable Agent — the one tool we run on every engagement. It tracks the five metrics across ChatGPT (gpt-4o, gpt-4.5), Perplexity, Gemini, Claude and Google AIO. Weekly snapshots, sentiment scoring, citation extraction. Roughly $400-800 / month for the level we use.
Profound — clustering and intent analysis. Strong on prompt-pattern coverage. Roughly $500 / month.
Orion — alternative to Searchable, sometimes cheaper for smaller engagements. ~$200 / month.
Manual cross-check — every Friday, the named SEO lead pulls the top-5 prompts manually across all five LLMs and screenshots the actual returned answer. Automated tools miss 5–15% of placements (especially when the LLM cites you without naming the URL); the manual pass catches them.
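The Friday cross-check is, mechanically, a set comparison: the tool's detected placements versus the manually screenshotted ones. A sketch with invented prompt/LLM pairs:

```python
# Hypothetical Friday cross-check. Each placement is (llm, prompt);
# all entries below are illustrative.
automated = {
    ("perplexity", "best vasp consultants"),
    ("chatgpt", "crypto license cost"),
}
manual = {
    ("perplexity", "best vasp consultants"),
    ("chatgpt", "crypto license cost"),
    ("gemini", "crypto license cost"),  # cited without the URL; tool missed it
}

missed_by_tool = manual - automated    # add these to the verified ledger
false_positives = automated - manual   # investigate before counting them
```

The `missed_by_tool` set is where the 5–15% gap shows up; anything in `false_positives` gets re-checked before it touches the citation count.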
The two-week baseline freeze
Before optimisation begins, we freeze the citation baseline for two weeks. Daily snapshots, screenshot archive, written agreement on the verification standard for each metric. This is the paper trail that makes the engagement honest — claiming “this citation is bonus-eligible” requires documented absence at T=0.
The baseline freeze is mandatory on the Performance tier because the bonus model requires it. We do it on every other tier too because clients always ask “did this citation already exist?” by month two — and the frozen baseline answers cleanly.
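The eligibility rule reduces to two checks against the frozen ledger. A sketch, with a hypothetical ledger format (LLM, prompt, cited URL) and made-up dates:

```python
from datetime import date

# Illustrative baseline ledger, frozen after two weeks of daily snapshots.
baseline = {
    ("chatgpt", "best crypto lawyers", "example.com/guide"),
}
baseline_frozen_on = date(2026, 1, 15)  # T=0, agreed in writing

def bonus_eligible(citation, found_on):
    """A citation counts toward the Performance bonus only if it was
    absent from the frozen baseline and appeared after T=0."""
    return citation not in baseline and found_on > baseline_frozen_on
```

Anything already in `baseline` is excluded no matter when it resurfaces, which is exactly the “did this citation already exist?” question the freeze is there to answer.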
Weekly report shape
Every Monday morning across our portfolio:
- Visibility delta (week-over-week and cumulative since the baseline)
- Share of voice movement against the named competitor set
- Sentiment shift on the top-10 prompts
- New citations gained, citations lost
- Manual cross-check results
The report is one page. If it takes two pages, we are reporting the wrong things.
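The two visibility deltas on that page are simple subtractions over the weekly series. A sketch with invented readings:

```python
# Weekly visibility readings in %, illustrative numbers only.
weekly_visibility = [0.0, 5.0, 11.7, 18.3, 25.0]

# Week-over-week: latest reading minus the prior week's.
week_over_week = weekly_visibility[-1] - weekly_visibility[-2]

# Cumulative: latest reading minus the frozen-baseline reading.
since_baseline = weekly_visibility[-1] - weekly_visibility[0]
```

Keeping the report to these deltas (rather than the raw series) is what keeps it to one page.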
What “good” looks like at six months
For a Scale engagement on a clean B2B niche we expect:
- 25–40% visibility across the 60 tracked prompts
- 20%+ share of voice in the niche
- Sentiment 70+
- Average position #2 or better on the top-10 prompts
If you are not seeing those numbers by month six, either the niche is harder than we scoped or the structural rewrite did not actually ship. Both are fixable; both are worth flagging at month two, not month six.
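Flagging at month two means checking actuals against these targets mechanically, not by eyeball. A sketch of that check — the target values mirror the list above; the actuals are hypothetical:

```python
# Six-month Scale-tier targets from the list above.
targets = {
    "visibility": 25.0,      # % of tracked prompts, lower bound
    "share_of_voice": 20.0,  # % of niche mentions, lower bound
    "sentiment": 70,         # 0-100 score, lower bound
    "avg_position": 2.0,     # upper bound: #2 or better
}

def flags(actual):
    """Return the metrics that miss target; an empty list means on track."""
    missed = [k for k in ("visibility", "share_of_voice", "sentiment")
              if actual[k] < targets[k]]
    if actual["avg_position"] > targets["avg_position"]:
        missed.append("avg_position")
    return missed
```

Run the same check at month two against a pro-rated trajectory and the “niche is harder than scoped” conversation happens early, while it is still cheap.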
What we report at the executive level
Three numbers, not fifteen. Visibility delta, share of voice movement, AI-driven leads attributed. The fifteen-metric dashboard is for the working call. The executive sync is for the C-level — three numbers, one chart, one decision.