# Brand-grounded evaluation

Published: 2026-04-25 | Source: https://geoready.co/news/brand-grounded-evaluation

The evaluator now knows what your site is actually about - and penalises models that hallucinate a wrong description, even when your brand name appears.

The classic way to score AI visibility has a blind spot: if the model mentions your brand, it counts as a hit. But what if the brand is mentioned in a completely wrong context?

Example: a restaurant chain gets named in an answer about software consultants. The brand is there - but that's actually worse than not being mentioned, because the model presents (and teaches other users) a false association as if it were fact.

## What we changed

When the audit starts, we now derive a short, brand-grounded description straight from your own site:

1. We fetch your homepage and the /about page if present.
2. A small model distills: "What does this brand actually sell or do?"
3. That description becomes the ground truth when we evaluate each AI response.
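The three steps above can be sketched roughly like this. The helper names (`candidate_urls`, `distill_prompt`) and the prompt wording are illustrative assumptions, not our actual API:

```python
def candidate_urls(domain: str) -> list[str]:
    """Pages we ground the brand description on: the homepage,
    plus the /about page if present (step 1)."""
    return [f"https://{domain}/", f"https://{domain}/about"]


def distill_prompt(pages: list[str]) -> str:
    """Builds the question a small model answers from the site's own
    copy (step 2). The returned description is then used as ground
    truth when judging each AI response (step 3)."""
    corpus = "\n\n".join(pages)
    return (
        "Based only on the following site content, describe in one "
        "sentence what this brand actually sells or does:\n\n" + corpus
    )
```

The key design point is that the ground truth comes from your own pages, not from what any model already believes about the brand.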

## What it means for your score

- If a model **mentions your brand correctly** in the right context: full points.
- If a model **mentions your brand incorrectly** (wrong industry, wrong product): points deducted - even though the name technically appears.
- If a model **says "I don't know"** instead of guessing: no penalty for honesty (unlike the old scoring, which counted anything but a mention as a miss).
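As a minimal sketch, the rules above behave like this scoring function. The exact point values are illustrative assumptions, not our published weights:

```python
from enum import Enum


class Mention(Enum):
    CORRECT = "correct"      # brand named in the right context
    INCORRECT = "incorrect"  # brand named, but wrong industry or product
    ABSTAINED = "abstained"  # model honestly said "I don't know"
    MISSING = "missing"      # brand not mentioned at all


def score(mention: Mention) -> int:
    """Correct mentions earn points, wrong-context mentions cost
    points even though the name appears, and honest abstention is
    neither rewarded nor punished."""
    return {
        Mention.CORRECT: 1,
        Mention.INCORRECT: -1,
        Mention.ABSTAINED: 0,
        Mention.MISSING: 0,
    }[mention]
```

Note that `INCORRECT` scores below both `ABSTAINED` and `MISSING` - a wrong-context mention is the only outcome that actively hurts.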

## Why it's more honest

The old method rewarded raw hit volume regardless of quality. The new one rewards precision - which is what you actually need. One correct mention in a relevant prompt is worth more than five wrong mentions in irrelevant ones.

## If your score has shifted

If your score dropped after this change: check the `Top recommendations` on your latest audit. They'll typically point to the prompts where the brand was misidentified, and to what you can change on your own site to correct it (usually clearer positioning on the homepage, or more precise schema.org markup).
