Measurement

You can't improve what you don't measure

Five KPIs that define your AI visibility posture. Each is measured per query, per model, and benchmarked against your competitors.

Core KPIs

What we measure

AI Share-of-Voice

Definition: The percentage of category-relevant queries where your brand appears in AI-generated responses.

How measured: Query sampling across target LLMs, appearance tracking, position scoring.

Benchmark: Baseline established in your audit; tracked over time against your category.
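At its core, share-of-voice is a ratio: mentions over sampled queries. A minimal sketch of that calculation, assuming brand appearance is detected by a simple substring match (function and variable names are illustrative, not the audit's actual implementation):

```python
def share_of_voice(responses: list[str], brand: str) -> float:
    """Percentage of sampled AI responses that mention the brand."""
    if not responses:
        return 0.0
    hits = sum(1 for r in responses if brand.lower() in r.lower())
    return 100.0 * hits / len(responses)

# Example: brand appears in 2 of 3 sampled responses for a category query.
sampled = [
    "Top tools in this category include Acme and Globex.",
    "Globex currently leads the category.",
    "Acme is a popular choice for smaller teams.",
]
share_of_voice(sampled, "Acme")  # 66.7 (rounded)
```

In practice, detection would need entity matching rather than substring search, and position scoring would weight where in the response the brand appears.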

Citation Rate

Definition: How frequently LLMs attribute information to your official sources.

How measured: Source analysis in responses, link tracking, attribution patterns.

Benchmark: Baseline established in your audit; target increases as citation-ready sources improve.
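Citation rate can be sketched the same way: the share of responses that cite at least one URL on your official domain. This is a simplified illustration, assuming each response comes with its list of cited URLs (the data shape and names are assumptions):

```python
def citation_rate(responses: list[dict], official_domain: str) -> float:
    """Percentage of responses citing at least one official-domain URL.

    Each response is assumed to carry a "cited_urls" list extracted
    from the model's answer.
    """
    if not responses:
        return 0.0
    cited = sum(
        1 for r in responses
        if any(official_domain in url for url in r["cited_urls"])
    )
    return 100.0 * cited / len(responses)

responses = [
    {"cited_urls": ["https://acme.com/docs/pricing"]},
    {"cited_urls": ["https://thirdparty-review.com/acme"]},
]
citation_rate(responses, "acme.com")  # counts only the first response
```

A real pipeline would parse domains properly (the substring check here would also match "acme.com" inside a third-party path) and track which specific pages earn citations.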

Accuracy Score

Definition: Correctness of information presented about your brand in AI responses.

How measured: Fact verification against authoritative sources, error categorization.

Benchmark: Baseline set in audit; improvements tracked by query and model.
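Once claims about the brand have been extracted and checked against authoritative sources, the score itself is a simple proportion. A minimal sketch, assuming verification verdicts arrive as booleans (the treatment of the no-claims case is an assumption, not the audit's actual rule):

```python
def accuracy_score(verdicts: list[bool]) -> float:
    """Percentage of fact-checked brand claims that are correct."""
    if not verdicts:
        return 100.0  # assumption: no claims made, so nothing is wrong
    return 100.0 * sum(verdicts) / len(verdicts)

# e.g. four claims checked, one found incorrect:
accuracy_score([True, True, True, False])  # 75.0
```

Error categorization would sit upstream of this: each False verdict gets a type (outdated fact, wrong attribution, fabrication) so remediation can be targeted.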

Hallucination Flags

Definition: Instances where LLMs present false or fabricated information about your brand.

How measured: Systematic verification, severity classification, tracking over time.

Benchmark: Baseline set in audit; critical errors prioritized for remediation.

Competitive Displacement

Definition: Queries where competitors appear but you don't, or vice versa.

How measured: Head-to-head comparison, position tracking, trend analysis.

Benchmark: Baseline established in your audit; tracked by query and by competitor.
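The head-to-head comparison reduces to a set difference per query. A minimal sketch, assuming appearance data is already collected as a mapping from query to the set of brands mentioned (names and data shape are illustrative):

```python
def displacement(appearances: dict[str, set[str]],
                 you: str, competitor: str) -> tuple[list[str], list[str]]:
    """Split queries into those the competitor wins alone vs. those you do.

    Returns (queries where competitor appears but you don't,
             queries where you appear but the competitor doesn't).
    """
    they_win = [q for q, brands in appearances.items()
                if competitor in brands and you not in brands]
    you_win = [q for q, brands in appearances.items()
               if you in brands and competitor not in brands]
    return they_win, you_win

appearances = {
    "best tool for startups": {"Acme"},
    "enterprise category leaders": {"Globex"},
    "category comparison": {"Acme", "Globex"},
}
displacement(appearances, "Acme", "Globex")
# → they win "enterprise category leaders"; you win "best tool for startups"
```

Tracking this split over time, per competitor, shows whether remediation work is actually closing the gap on the queries that matter.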

Establish your baseline across all five KPIs

The €500 audit scores your brand on every metric above — per query, per model, with evidence.

Get your baseline