
7 Dimensions of Creator Quality: What Brands Actually Check

Understand the 7 AI scoring agents that evaluate creator quality — content risk, authenticity, brand safety, audience quality, sentiment, community trust, and ROI prediction.

Published March 18, 2026 · Updated March 19, 2026

When brands evaluate influencers, they need more than a single number. CreatorScore’s 7 AI scoring agents each evaluate a distinct dimension of creator quality, producing a transparent, explainable score from 1-100. Here’s what each agent measures, why it matters, and how it affects the final score.

1. Content Risk Agent (20% Weight)

The Content Risk Agent is the most heavily weighted agent, reflecting the reality that a single piece of harmful content can destroy a brand partnership overnight.

What It Measures

This agent uses a 5-component model analyzing every piece of creator content:

  • Hate Speech Detection (30%) — NLP analysis of text, captions, and video transcripts for discriminatory language, slurs, and coded hate speech.
  • NSFW Content (25%) — Visual analysis of images and video frames for nudity, sexual content, and graphic violence.
  • Severity Assessment (20%) — How severe are the flagged issues? A single borderline joke is different from a pattern of harmful content.
  • Visual Analysis (15%) — AI vision models scan thumbnails, images, and video frames for visual content risks that text analysis would miss.
  • Profanity Analysis (10%) — Frequency and severity of profanity usage across all content.
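The five components above combine into a single agent score. The weights come from this article; the 0-1 signal scale, field names, and the "higher is safer" inversion are illustrative assumptions, not the production implementation:

```python
# Weighted 5-component content-risk model. Weights are from the article;
# the 0-1 risk-signal scale and key names are illustrative assumptions.
CONTENT_RISK_WEIGHTS = {
    "hate_speech": 0.30,
    "nsfw": 0.25,
    "severity": 0.20,
    "visual": 0.15,
    "profanity": 0.10,
}

def content_risk_score(signals: dict) -> float:
    """Combine per-component risk signals (0 = clean, 1 = maximal risk)
    into a 0-100 agent score where higher means safer."""
    risk = sum(CONTENT_RISK_WEIGHTS[k] * signals[k] for k in CONTENT_RISK_WEIGHTS)
    return round((1 - risk) * 100, 1)
```

A creator with no flagged signals scores 100; a creator flagged only on hate speech at full severity loses the full 30% of that component.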

Knockout Triggers

Hate speech scores above 90% cap the overall CreatorScore at 35. NSFW scores above 95% cap at 35. These are non-negotiable safety thresholds.

Data Sufficiency

Creators with 0 analyzed posts are capped at 50/100 (insufficient data), fewer than 5 posts cap at 70, and fewer than 10 cap at 85. This prevents new or low-content creators from receiving artificially high scores.
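The sufficiency caps are simple threshold rules. A minimal sketch of the logic, using only the thresholds stated above (function name and signature are assumptions):

```python
def apply_data_sufficiency_cap(score: float, analyzed_posts: int) -> float:
    """Cap a score by analyzed-post volume (thresholds from the article):
    0 posts -> max 50, under 5 -> max 70, under 10 -> max 85."""
    if analyzed_posts == 0:
        return min(score, 50.0)
    if analyzed_posts < 5:
        return min(score, 70.0)
    if analyzed_posts < 10:
        return min(score, 85.0)
    return score
```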

2. Authenticity Agent (20% Weight)

The second most important dimension. Authenticity determines whether a creator’s audience and engagement are real.

What It Measures

  • Follower authenticity — Bot detection on follower accounts, analyzing account age, activity patterns, and profile completeness.
  • Engagement authenticity — Bot scoring on individual comments, identifying generic/template responses from automated accounts.
  • Growth pattern analysis — ML models trained on organic growth curves flag purchased follower spikes.
  • Engagement pod detection — Network analysis identifying coordinated engagement clusters.
  • Like-comment anomaly — Statistical analysis of the relationship between likes and comments to detect purchased engagement.

Knockout Triggers

Bot rate above 60% caps score at 20/100. Engagement pod rate above 80% caps at 30/100. Read more about detecting fake followers.
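These two triggers can be expressed as a small rule check that emits the caps to apply later. A sketch assuming rates arrive as fractions between 0 and 1 (the function name and return shape are illustrative):

```python
def authenticity_knockouts(bot_rate: float, pod_rate: float) -> list:
    """Return knockout caps triggered by authenticity signals.
    Thresholds and cap values are from the article; rates are
    assumed to be fractions in [0, 1]."""
    caps = []
    if bot_rate > 0.60:
        caps.append(20.0)  # bot rate above 60% caps the score at 20
    if pod_rate > 0.80:
        caps.append(30.0)  # pod rate above 80% caps the score at 30
    return caps
```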

3. Brand Safety Agent (15% Weight)

Beyond content, this agent evaluates the creator’s broader reputation and partnership track record.

What It Measures

  • FTC Disclosure Compliance (35%) — How consistently does the creator disclose sponsored partnerships?
  • Controversy Score (35%) — Web reputation analysis for past controversies, cancellations, and brand safety incidents.
  • Brand Pattern Analysis (30%) — Brand diversity, single-brand dominance, and partnership history quality.
  • Web Reputation (+15% if available) — External signals from news, forums, and review sites.

4. Audience Quality Agent (15% Weight)

Not all audiences are created equal. This agent evaluates whether a creator’s followers are the right match for brand campaigns.

What It Measures

  • Community Health (50%) — Comment section quality, toxicity levels, spam rates, and overall discourse health.
  • Engagement Quality (50%) — Average comment length, question frequency, and conversation depth as indicators of genuine audience investment.

An audience that writes thoughtful comments and asks genuine questions is far more valuable for brand campaigns than one that leaves emoji-only responses.

5. Sentiment Agent (10% Weight)

How does the public feel about this creator? Sentiment analysis provides a pulse check on audience reception.

What It Measures

  • Sentiment Stability (50%) — How consistent is audience sentiment over time? Creators with volatile sentiment (love one week, hate the next) are higher risk.
  • Audience Sentiment (50%) — The overall positive/negative/neutral distribution of comments and public reception.

CreatorScore uses Claude AI (Anthropic) to reclassify borderline comments that automated NLP models get wrong, improving accuracy on sarcasm, cultural context, and nuanced language.

6. Community Trust Agent (10% Weight)

Trust is built over time through consistent behavior. This agent evaluates the creator’s conduct and compliance track record.

What It Measures

  • FTC Disclosure Compliance (50%) — Disclosure rate across all detected brand mentions and sponsorships.
  • Creator Conduct (50%) — How the creator interacts with their community—responsiveness, tone, conflict handling, and professional behavior.

Knockout Trigger

Disclosure compliance below 10% (with verified brand ad data) caps the overall score at 35/100.

7. ROI Prediction Agent (10% Weight)

The only forward-looking agent. While other agents evaluate historical data, the ROI Prediction Agent projects future campaign performance.

What It Measures

  • Engagement Quality (40%) — Weighted engagement rates that account for platform, creator tier, and content type.
  • Community Health (25%) — A healthy, engaged community converts better than a passive one.
  • Growth Trajectory (35%) — Blended velocity across 30-day (60% weight), 60-day (25%), and 90-day (15%) windows. Growing creators offer increasing ROI over time.
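The blended velocity above is a straight weighted average of the three windows. A sketch using the stated window weights (the inputs are assumed to be per-window growth rates, e.g. percent follower change; their exact units are an assumption):

```python
def blended_growth_velocity(v30: float, v60: float, v90: float) -> float:
    """Blend growth velocities across 30/60/90-day windows.
    Window weights (0.60 / 0.25 / 0.15) are from the article;
    the input units (growth rate per window) are assumed."""
    return 0.60 * v30 + 0.25 * v60 + 0.15 * v90
```

Because recent growth carries 60% of the weight, a creator accelerating in the last 30 days scores noticeably higher than one whose growth happened mostly 90 days ago.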

This is a unique differentiator—no other scoring platform includes predictive ROI modeling as part of the core scoring system.

How the 7 Agents Combine Into a CreatorScore

Each agent normalizes its raw signals to a 0-100 scale, then the weighted average produces the final CreatorScore:

CreatorScore = (Content Risk × 0.20) + (Authenticity × 0.20) + (Brand Safety × 0.15) + (Audience Quality × 0.15) + (Sentiment × 0.10) + (Community Trust × 0.10) + (ROI Prediction × 0.10)

After the weighted average, knockout factors are applied. If any knockout threshold is breached, the score is capped at the knockout level regardless of the weighted average.
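The combination step can be sketched in a few lines: take the weighted average of the seven 0-100 agent scores, then floor the result at the lowest triggered knockout cap. The agent weights are from the formula above; the function signature and the idea of passing caps as a list are illustrative assumptions:

```python
AGENT_WEIGHTS = {
    "content_risk": 0.20,
    "authenticity": 0.20,
    "brand_safety": 0.15,
    "audience_quality": 0.15,
    "sentiment": 0.10,
    "community_trust": 0.10,
    "roi_prediction": 0.10,
}

def creator_score(agent_scores: dict, knockout_caps=()) -> float:
    """Weighted average of 0-100 agent scores, then knockout caps:
    if any cap was triggered, the final score cannot exceed it."""
    weighted = sum(AGENT_WEIGHTS[a] * agent_scores[a] for a in AGENT_WEIGHTS)
    return min([weighted, *knockout_caps])
```

For example, a creator averaging 80 across all agents but tripping a hate-speech knockout (cap 35) ends up at 35, not 80.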

Score Tiers

  • 90-100: Exceptional — Top-tier brand-safe creator. Minimal risk across all dimensions.
  • 80-89: Excellent — Highly recommended for brand partnerships.
  • 70-79: Good — Suitable for most brand campaigns with minor areas to review.
  • 60-69: Fair — Some risk factors present. Review score breakdown before proceeding.
  • Below 60: Poor — Significant brand safety concerns. Not recommended without thorough review.
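The tier bands above map directly to a lookup (the function name is an assumption; the boundaries are from the list):

```python
def score_tier(score: float) -> str:
    """Map a CreatorScore (0-100) to its tier label (bands from the article)."""
    if score >= 90:
        return "Exceptional"
    if score >= 80:
        return "Excellent"
    if score >= 70:
        return "Good"
    if score >= 60:
        return "Fair"
    return "Poor"
```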

Every score comes with SHAP explainability—transparent drivers showing exactly which factors pushed the score up or down. No black boxes.

For the full technical methodology, see our Scoring Methodology page.

Vet influencers in minutes, not days

CreatorScore's 7 AI agents evaluate content risk, authenticity, brand safety, and more across 12 platforms. Get a 1-100 trust score for any creator.