Metric Library
Understand every metric in Parse. Learn how we measure AI visibility, what each score means, and what good performance looks like.
Rating scale
Score and rate metrics are graded on a five-tier scale. This scale applies to all 0-100 scores and percentage rates across Parse.
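The tier names and cutoffs are not stated here, so the following is only a minimal sketch of five-tier bucketing over a 0-100 value; the labels and thresholds in the code are hypothetical, not Parse's actual scale.

```python
def rating_tier(score: float) -> str:
    """Map a 0-100 score or percentage rate to one of five tiers.

    The tier names and cutoffs below are illustrative assumptions,
    not Parse's published thresholds.
    """
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score >= 80:
        return "Excellent"
    if score >= 60:
        return "Good"
    if score >= 40:
        return "Average"
    if score >= 20:
        return "Weak"
    return "Poor"
```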
Brand Metrics
Metrics that measure a brand's visibility and performance in AI responses.
Parse Score
Overall AI visibility
Overall AI visibility score combining how well you compete, how often you're mentioned, and how often you're cited.
More detail
A single score that summarizes your brand's overall presence and credibility in AI responses.
How it's calculated
Metric type: Score (0-100)
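The exact weights used to blend the competing, mention, and citation components are not published, but a composite of three 0-100 inputs can be sketched as a weighted average. The equal weighting below is an assumption, not Parse's formula.

```python
def parse_score(strength: float, reach: float, citation: float,
                weights: tuple[float, float, float] = (1 / 3, 1 / 3, 1 / 3)) -> float:
    """Hypothetical sketch: blend three 0-100 components into one score.

    Parse's actual weights are not published; equal weighting here
    is an illustrative assumption.
    """
    for value in (strength, reach, citation):
        if not 0 <= value <= 100:
            raise ValueError("components must be between 0 and 100")
    w_s, w_r, w_c = weights
    return round(w_s * strength + w_r * reach + w_c * citation, 1)
```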
Strength
How well you compete
How strongly your brand performs against competitors in relevant prompts.
More detail
How strongly your brand performs against competitors in the prompts where it appears.
How it's calculated
Metric type: Score (0-100)
Reach
How broadly AI mentions you
How often your brand appears across AI responses.
How it's calculated
Metric type: Score (0-100)
Model visibility
How often the brand appears in ChatGPT and Google AI Overview responses.
More detail
Combined score showing the brand's presence across major AI platforms. Calculated from visibility rates in both ChatGPT and Google AI Overview.
How it's calculated
Metric type: Score (0-100)
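Since this score is calculated from visibility rates in both ChatGPT and Google AI Overview, one way to sketch the combination is a simple mean of the two per-platform rates. The unweighted averaging is an assumption; Parse's actual blend may weight the platforms differently.

```python
def model_visibility(chatgpt_rate: float, aio_rate: float) -> float:
    """Combine per-platform visibility rates (each a 0-100 percentage)
    into one score.

    The unweighted mean is an illustrative assumption.
    """
    for rate in (chatgpt_rate, aio_rate):
        if not 0 <= rate <= 100:
            raise ValueError("rates must be percentages between 0 and 100")
    return round((chatgpt_rate + aio_rate) / 2, 1)
```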
Peer visibility
Competitive standing versus tracked peers in your category.
More detail
Shows how the brand's visibility compares to direct competitors and similar brands in the same industry. This highlights competitive position in AI responses.
How it's calculated
Metric type: Score (0-100)
Mention rate
Percentage of prompts where this brand is mentioned among peer brands.
More detail
The percentage of relevant prompts in which the brand appears, compared with its peer brands. A higher mention rate indicates a stronger presence in the category.
How it's calculated
Metric type: Rate (percentage)
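A rate like this reduces to a straightforward percentage: mentioned prompts over total relevant prompts. The function name and rounding below are illustrative.

```python
def mention_rate(mentioned_prompts: int, total_prompts: int) -> float:
    """Share of relevant prompts that mention the brand, as a percentage."""
    if total_prompts <= 0:
        raise ValueError("total_prompts must be positive")
    if not 0 <= mentioned_prompts <= total_prompts:
        raise ValueError("mentioned_prompts must be between 0 and total_prompts")
    return round(100 * mentioned_prompts / total_prompts, 1)
```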
Average position
Average ranking position when the brand appears in AI responses.
More detail
When the brand is mentioned, this shows where it typically appears in the response (1st, 2nd, 3rd, etc.). Lower numbers mean more prominent placement.
How it's calculated
Metric type: Rank (lower is better)
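Because the metric only counts responses where the brand appears, a sketch of the calculation averages the placement across those responses and excludes responses that omit the brand. This exclusion matches the description above; treating it as undefined when the brand never appears is an assumption.

```python
def average_position(positions: list[int]) -> float:
    """Mean ranking position across responses where the brand appeared
    (1 = first mention). Responses without the brand are excluded.
    """
    if not positions:
        raise ValueError("brand never appeared; average position is undefined")
    if any(p < 1 for p in positions):
        raise ValueError("positions are 1-based")
    return round(sum(positions) / len(positions), 2)
```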
Parse rank
Ranking position among all brands in the index.
More detail
Where the brand sits relative to every other brand we track. A lower rank number means the brand appears more frequently across the full index.
How it's calculated
Metric type: Rank (lower is better)
Mentions
Number of AI responses that include this brand during the selected period.
More detail
Raw count of how often the brand shows up in AI answers. Track this alongside visibility and rank metrics to spot meaningful volume changes.
How it's calculated
Metric type: Count
Citation Metrics
Metrics about how AI models cite and reference sources.
Trust
How strongly AI relies on this domain
0-100 Trust score weighted by citation volume, prompt breadth, and model consensus (7d).
More detail
Composite score for how strongly AI models rely on this domain. Trust blends citation volume (50%), prompt breadth (30%), and model consensus (20%) over the last 7 days.
How it's calculated
Metric type: Score (0-100)
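The stated blend (citation volume 50%, prompt breadth 30%, model consensus 20% over the trailing 7 days) can be written down directly. How each component is normalized to a 0-100 value is not specified, so the normalized inputs here are an assumption.

```python
def trust_score(citation_volume: float, prompt_breadth: float,
                model_consensus: float) -> float:
    """Blend the three Trust components using the stated 50/30/20 weights.

    Each input is assumed to already be normalized to 0-100 over the
    trailing 7 days; the normalization itself is not specified.
    """
    for value in (citation_volume, prompt_breadth, model_consensus):
        if not 0 <= value <= 100:
            raise ValueError("components must be normalized to 0-100")
    return round(0.5 * citation_volume
                 + 0.3 * prompt_breadth
                 + 0.2 * model_consensus, 1)
```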
Citations
Number of times AI models cite the domain.
More detail
Counts how often AI responses attribute information to the domain. More citations signal that models trust and rely on the content.
How it's calculated
Metric type: Count
Gap impact
Estimated impact of closing this citation gap.
More detail
Estimated authority upside from closing this citation gap, rated High, Medium, or Low.
How it's calculated
Metric type: Category (High/Medium/Low)
Pages cited
Total number of pages from this domain referenced by AI models.
More detail
Measures how many individual pages from the domain were cited over the selected period. Useful for spotting domains that provide deep coverage on relevant topics.
How it's calculated
Metric type: Count
Unique URLs
Number of unique URLs from the domain cited by AI models.
More detail
The breadth of pages from the domain that AI models reference. A higher count shows that multiple pieces of its content are cited, not just a single flagship page.
How it's calculated
Metric type: Count
Prompts
Unique prompts that cite or reference this domain.
More detail
Counts how many distinct prompts produced AI responses that cite this domain. Higher values mean the domain appears across a broader set of user intents.
How it's calculated
Metric type: Count
Last cited
Most recent time AI models referenced this domain.
More detail
Shows how fresh the citation is. A recent timestamp signals that the domain is still being actively referenced.
How it's calculated
Metric type: Date
Prompt Metrics
Metrics about brand performance within specific AI prompts.
Prompt visibility
Composite score based on mention rate and position ranking.
More detail
Overall visibility for this prompt, based on how often your brand appears and how prominently it is shown.
How it's calculated
Metric type: Score (0-100)
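A composite of mention rate and position ranking could be sketched as below. Converting average position to a 0-100 sub-score, the cap of 10 positions, and the 50/50 weighting are all assumptions for illustration; Parse's actual formula is not published.

```python
def prompt_visibility(mention_rate_pct: float, avg_position: float,
                      max_position: int = 10) -> float:
    """Hypothetical composite of mention rate and position ranking.

    avg_position is mapped onto 0-100 (position 1 -> 100, position
    max_position or worse -> 0), then blended 50/50 with the mention
    rate. Both choices are illustrative assumptions.
    """
    if not 0 <= mention_rate_pct <= 100:
        raise ValueError("mention rate must be between 0 and 100")
    clamped = min(max(avg_position, 1), max_position)
    position_score = 100 * (max_position - clamped) / (max_position - 1)
    return round(0.5 * mention_rate_pct + 0.5 * position_score, 1)
```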
Total brands
Number of unique brands mentioned in responses to this prompt.
More detail
The diversity of brands that models mention for this prompt. A higher number means answers are pulling from a broader competitive set.
How it's calculated
Metric type: Count
Brand mentions
How many times this brand is mentioned in responses to the prompt.
More detail
Total mention count for the brand when people ask this prompt. Pair it with visibility to understand both frequency and share of voice.
How it's calculated
Metric type: Count
Mentioned
Whether the focus brand appears in an individual response.
More detail
Shows whether the selected brand is referenced in a specific AI response, so you can quickly zero in on answers that include you.
How it's calculated
Metric type: Boolean
Change
Change in metric value over the selected time period.
More detail
Represents the delta between the current value and the prior comparable period. A positive change indicates improvement and a negative change indicates decline; for rank metrics, where lower is better, the direction inverts.
How it's calculated
Metric type: Delta (signed number)
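The delta itself is a simple subtraction of the prior period from the current one; the rank caveat in the comment reflects the "lower is better" metric types defined above.

```python
def change(current: float, prior: float) -> float:
    """Signed delta between the current value and the prior comparable
    period.

    Note: for "lower is better" metrics such as Parse rank or average
    position, a negative delta is the improvement.
    """
    return round(current - prior, 1)
```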