Reference

Metric Library

Understand every metric in Parse. Learn how we measure AI visibility, what each score means, and what good performance looks like.

Rating scale

Score and rate metrics are graded on a five-tier scale. This scale applies to all 0-100 scores and percentage rates across Parse.

Excellent: 60-100 (Top 10%)
Great: 45-59 (Top 25%)
Average: 30-44 (Top 50%)
Poor: 15-29 (Below average)
Bad: 0-14 (Low visibility)
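
The tier boundaries can be expressed as a simple lookup. This is an illustrative helper, not part of Parse itself; the function name and signature are assumptions.

```python
def rating_tier(score: float) -> str:
    """Map a 0-100 score or percentage rate to its five-tier rating.

    Thresholds mirror the Parse rating scale; illustrative only.
    """
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score >= 60:
        return "Excellent"
    if score >= 45:
        return "Great"
    if score >= 30:
        return "Average"
    if score >= 15:
        return "Poor"
    return "Bad"
```

For example, a score of 52 falls in the Great tier, while 60 crosses into Excellent.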

Brand Metrics

Metrics that measure a brand's visibility and performance in AI responses.

Parse Score

Overall AI visibility

Brand metric

Overall AI visibility score combining Strength (how well you compete), Reach (how often you're mentioned), and Authority (how often you're cited).

More detail

A single score that summarizes your brand's overall presence and credibility in AI responses.

How it's calculated

Metric type: Score (0-100)

Strength

How well you compete

Brand metric

How strongly your brand performs against competitors in relevant prompts.

More detail

How strongly your brand performs against competitors, measured only in the prompts where it appears.

How it's calculated

Metric type: Score (0-100)

Reach

How broadly AI mentions you

Brand metric

How often your brand appears across AI responses.

How it's calculated

Metric type: Score (0-100)

Authority

How often AI cites you

Brand metric

How often AI cites your domain as a trusted source.

More detail

How often AI models cite your domain as a trusted source across responses.

How it's calculated

Metric type: Score (0-100)

Model visibility

Brand metric

How often the brand appears in ChatGPT and Google AI Overview responses.

More detail

Combined score showing the brand's presence across major AI platforms. Calculated from visibility rates in both ChatGPT and Google AI Overview.

How it's calculated

Metric type: Score (0-100)

Peer visibility

Brand metric

Competitive standing versus tracked peers in your category.

More detail

Shows how the brand's visibility compares to direct competitors and similar brands in the same industry. This highlights competitive position in AI responses.

How it's calculated

Metric type: Score (0-100)

Mention rate

Brand metric

Percentage of prompts where this brand is mentioned among peer brands.

More detail

The percentage of relevant queries where the brand appears compared to peer brands. A higher mention rate indicates stronger presence in the category.

How it's calculated

Metric type: Rate (percentage)
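
Mention rate is a straightforward percentage. A minimal sketch, assuming you have the two prompt counts (the function and parameter names are illustrative):

```python
def mention_rate(prompts_with_mention: int, total_prompts: int) -> float:
    """Percentage of relevant prompts whose responses mention the brand."""
    if total_prompts == 0:
        return 0.0
    return 100 * prompts_with_mention / total_prompts
```

A brand mentioned in 30 of 120 tracked prompts has a 25% mention rate.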

Share of voice

Brand metric

Percentage of AI mentions this brand captures within a niche.

More detail

Measures the brand's share of total mention volume within a competitive niche. A brand with 15% share of voice appears in roughly 15% of relevant AI responses in that category. Higher values indicate category dominance.

How it's calculated

Metric type: Rate (percentage)
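
Share of voice divides one brand's mentions by the total mention volume in the niche. An illustrative sketch; the data shape is an assumption:

```python
def share_of_voice(brand: str, mentions: dict[str, int]) -> float:
    """Brand's percentage of total mention volume within a niche."""
    total = sum(mentions.values())
    if total == 0:
        return 0.0
    return 100 * mentions.get(brand, 0) / total
```

A brand with 15 of the niche's 100 total mentions holds 15% share of voice, matching the example above.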

Average position

Brand metric

Average ranking position when the brand appears in AI responses.

More detail

When the brand is mentioned, this shows where it typically appears in the response (1st, 2nd, 3rd, etc.). Lower numbers mean more prominent placement.

How it's calculated

Metric type: Rank (lower is better)
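
Average position is the mean of the brand's placements across the responses where it appears. A minimal sketch with illustrative names:

```python
def average_position(positions: list[int]) -> float:
    """Mean placement across responses where the brand appears (lower is better)."""
    if not positions:
        raise ValueError("brand did not appear in any response")
    return sum(positions) / len(positions)
```

Appearing 1st, 2nd, and 3rd across three responses gives an average position of 2.0.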

Parse rank

Brand metric

Ranking position among all brands in the index.

More detail

Where the brand sits relative to every other brand Parse tracks. A lower rank means the brand appears more frequently across the full index.

How it's calculated

Metric type: Rank (lower is better)

Mentions

Brand metric

Number of AI responses that include this brand during the selected period.

More detail

Raw count of how often the brand shows up in AI answers. Track this alongside visibility and rank metrics to spot meaningful volume changes.

How it's calculated

Metric type: Count

Citation Metrics

Metrics about how AI models cite and reference sources.

Trust

How strongly AI relies on this domain

Citation metric

0-100 Trust score weighted by citation volume, prompt breadth, and model consensus over the trailing 7 days.

More detail

Composite score for how strongly AI models rely on this domain. Trust blends citation volume (50%), prompt breadth (30%), and model consensus (20%) over the last 7 days.

How it's calculated

Metric type: Score (0-100)
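
The stated weights make Trust a weighted average of its three components. A sketch assuming each component has already been normalized to a 0-100 scale (that normalization step is an assumption, not documented here):

```python
def trust_score(citation_volume: float, prompt_breadth: float,
                model_consensus: float) -> float:
    """Blend the 7-day components: volume 50%, breadth 30%, consensus 20%.

    Assumes each input is pre-normalized to 0-100; illustrative only.
    """
    return 0.5 * citation_volume + 0.3 * prompt_breadth + 0.2 * model_consensus
```

A domain scoring 80 on volume, 60 on breadth, and 50 on consensus would earn a Trust score of 68.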

Citations

Citation metric

Number of times AI models cite the domain.

More detail

Counts how often AI responses attribute information to the domain. More citations signal that models trust and rely on the content.

How it's calculated

Metric type: Count

Gap impact

Citation metric

Estimated impact of closing this citation gap.

More detail

Estimated authority upside from closing this citation gap (High, Medium, or Low).

How it's calculated

Metric type: Tier (High / Medium / Low)

Citation share

Citation metric

Share of total citations that reference this domain.

More detail

Shows what percentage of all citations during the selected period included this domain. Higher shares mean the domain is heavily relied on compared to others.

How it's calculated

Metric type: Rate (percentage)

Citation share change

Citation metric

Change in citation share versus the prior period, in percentage points.

More detail

How much this domain's share of AI citations changed versus the prior period (positive means up, negative means down).

How it's calculated

Metric type: Delta (signed number)
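
Because citation share change is reported in percentage points, it is a simple subtraction of two shares, not a relative percentage change:

```python
def citation_share_change(current_share: float, prior_share: float) -> float:
    """Signed change in citation share between periods, in percentage points."""
    return current_share - prior_share
```

A domain moving from 10% to 12.5% of all citations shows a +2.5 point change, even though that is a 25% relative increase.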

Pages cited

Citation metric

Total number of pages from this domain referenced by AI models.

More detail

Measures how many individual pages from the domain were cited over the selected period. Useful for spotting domains that provide deep coverage on relevant topics.

How it's calculated

Metric type: Count

Unique URLs

Citation metric

Number of unique URLs from the domain cited by AI models.

More detail

The breadth of pages from the domain that AI models reference. A higher count shows that multiple pieces of content are cited, not just a single flagship page.

How it's calculated

Metric type: Count

Prompts

Citation metric

Unique prompts that cite or reference this domain.

More detail

Counts how many distinct prompts produce AI responses that cite this domain. Higher values mean the domain appears across a broader set of user intents.

How it's calculated

Metric type: Count

Last cited

General metric

Most recent time AI models referenced this domain.

More detail

Shows how recently the domain was cited. A recent timestamp signals that the domain is still being actively referenced.

How it's calculated

Metric type: Date

Prompt Metrics

Metrics about brand performance within specific AI prompts.

Prompt visibility

Prompt metric

Composite score based on mention rate and position ranking.

More detail

Overall visibility for this prompt, based on how often your brand appears and how prominently it is shown.

How it's calculated

Metric type: Score (0-100)
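
Parse does not publish the exact weighting, so the following is only one way a mention-rate and position composite could work; the 60/40 split and the reciprocal position score are assumptions:

```python
def prompt_visibility(mention_rate: float, avg_position: float,
                      mention_weight: float = 0.6) -> float:
    """Illustrative composite of mention rate (0-100) and placement.

    Position 1 maps to a position score of 100, position 2 to 50, and so on.
    The 60/40 weighting is an assumption, not Parse's actual formula.
    """
    if avg_position < 1:
        raise ValueError("average position must be 1 or greater")
    position_score = 100 / avg_position
    return mention_weight * mention_rate + (1 - mention_weight) * position_score
```

Under these assumptions, a brand mentioned in half of responses (rate 50) at an average position of 2 would score 50.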

Total brands

Prompt metric

Number of unique brands mentioned in responses to this prompt.

More detail

The diversity of brands that models mention for this prompt. A higher number means answers are pulling from a broader competitive set.

How it's calculated

Metric type: Count

Brand mentions

Prompt metric

How many times this brand is mentioned in responses to the prompt.

More detail

Total mention count for the brand when people ask this prompt. Pair it with visibility to understand both frequency and share of voice.

How it's calculated

Metric type: Count

Mentioned

Prompt metric

Whether the focus brand appears in an individual response.

More detail

Shows if the selected brand is referenced in a specific AI response so you can quickly zero in on answers that include you.

How it's calculated

Metric type: Boolean

Change

General metric

Change in metric value over the selected time period.

More detail

Represents the delta between the current value and the prior comparable period. Positive values indicate an increase; negative values indicate a decline.

How it's calculated

Metric type: Delta (signed number)