Validated Results
Case Studies

Real results from brands using Videoquant to predict creative performance before spend. These results are not cherry-picked and are representative of what we have seen with our clients.

Case Study #1
Publicly Traded Super Bowl Advertiser: 3.6x TV Acquisitions
3.6x
More Acquisitions
Key Results
  • Controlled test confirms Videoquant cut TV CPA by over 70%
  • Statistically significant results (p < 0.05)
  • Same media spend and flighting — 3.6x more acquisitions
  • Equivalent to $780K+ in INCREMENTAL acquisition value on a single $300K test flight
Test Results
Control: Score 32 · Spend $300K+ · Acquisitions: Baseline
Videoquant-Informed Creative: Score 74 · Spend $300K+ · Acquisitions: 3.6x
✓ Statistically Significant Winner

The Challenge: A major Super Bowl advertiser was struggling with TV ad performance and needed to improve acquisition efficiency without increasing media spend.

The Solution: Videoquant was used to test multiple TV ad variants before airing. The creative with the highest Videoquant score was selected and tested against the control in a controlled environment.

The Result: The Videoquant-informed creative drove 3.6x more acquisitions using the same media spend and flighting schedule, cutting TV CPA by over 70%. On a single $300K+ test flight, the brand received the equivalent of $1.08M+ in acquisition value — a $780K+ efficiency gain from selecting the right creative before spend.
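The arithmetic behind these figures is straightforward. A minimal sketch, using the rounded numbers reported above rather than exact campaign data:

```python
# Sketch of the CPA and incremental-value arithmetic described above.
# Figures are the rounded case-study numbers, not exact campaign data.
spend = 300_000          # same media spend for control and test flight ($)
acquisition_mult = 3.6   # Videoquant-informed creative vs. control

# Same spend, 3.6x acquisitions -> CPA falls to 1/3.6 of baseline.
cpa_reduction = 1 - 1 / acquisition_mult
print(f"CPA reduction: {cpa_reduction:.1%}")        # -> 72.2%, i.e. "over 70%"

# Valuing acquisitions at what the control flight paid for them:
baseline_value = spend                        # control: $300K spend -> $300K value
test_value = acquisition_mult * spend         # 3.6 x $300K = $1.08M
incremental = test_value - baseline_value
print(f"Incremental value: ${incremental:,.0f}")    # -> $780,000
```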

This success led to a multi-year renewal.

Case Study #2
A Second Publicly Traded Super Bowl Advertiser ($60B MCap): 4.4x CTV Brand Lift
$2B/year in marketing spend
4.4x
Higher Brand Lift
Key Results
  • In blind testing conducted by the client across 22 TV ads, ads with high Videoquant predictions delivered 4.4x the actual brand lift of low-prediction ads
  • Estimated $240K savings per $1M media spend
  • Statistically significant results (p < 0.05)
Actual Downstream Lift
Videoquant High Predictions: 0.115
Videoquant Low Predictions: 0.025
4.4x Higher Downstream Performance (p < 0.05)

The Challenge: A Fortune 500 company with $2B in annual marketing spend needed to optimize their TV advertising strategy across 22 different ad creatives.

The Solution: The client conducted extensive blind testing of 22 TV ads using Videoquant predictions. Ads were categorized as "high prediction" or "low prediction" based on Videoquant scores.

The Result: Videoquant high predictions delivered 4.4x more brand lift than low predictions on the same media spend, with statistically significant results. This translated to an estimated $240K in savings per $1M in media spend.

Case Study #3
TV Rank-Order Prediction: 0.64 Spearman Correlation
20 TV Ads · p = 0.002
0.64
Spearman ρ
Key Results
  • Videoquant scores predicted the rank order of 20 TV ads against the client's own internal performance expectations
  • Spearman correlation of 0.64 (p = 0.002) — highly statistically significant
  • Not just picking a winner — predicting the full ranking from best to worst
  • A different kind of proof: the model understands relative creative quality, not just binary outcomes
Statistical Summary
Ads Tested: 20
Channel: TV
Spearman ρ: 0.64
p-value: 0.002
Full rank-order prediction, not just binary win/loss

Why This Matters: Most ad testing claims are about picking a winner from two options. This is different — Videoquant predicted the rank order of 20 ads against the client's own internal expectations. A Spearman correlation of 0.64 at p = 0.002 means the scoring model reliably distinguishes not just which ad is best, but the full gradient from strongest to weakest.

For Statistical Thinkers: A 0.64 Spearman ρ across 20 items means the model captures meaningful signal about relative creative quality. This isn't a coin flip on binary outcomes — it's a rank-order prediction across a real portfolio of TV creative, validated against the client's own ground truth.
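For readers who want to check this, the reported p-value can be reproduced from ρ and n alone. A sketch using the standard t-approximation for Spearman's rank correlation (the client's exact test procedure is not stated and may differ):

```python
import math
from scipy import stats

# Sanity-check the reported p-value from rho and n alone, using the
# standard t-approximation for Spearman's rank correlation.
rho, n = 0.64, 20
t_stat = rho * math.sqrt((n - 2) / (1 - rho**2))
p_two_sided = 2 * stats.t.sf(t_stat, df=n - 2)
print(f"t = {t_stat:.2f}, p = {p_two_sided:.4f}")   # two-sided p ≈ 0.002
```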

Case Study #4
UK Performance TV Agency: 71.4% Win Rate on Live Campaigns
71.4%
Head-to-Head Win Rate
Key Results
  • Videoquant predictions correctly called 15 of 21 head-to-head outcomes on live TV campaigns (p = 0.039)
  • Consistent across ad durations: 66.7% win rate on 10s spots, 75.0% on 30s spots
  • Results match the 60–70% prediction accuracy pattern validated repeatedly with US advertisers
Test Results
Duration  | Wins | Losses | Ties | Win Rate
10s       | 6    | 3      | 1    | 66.7%
30s       | 9    | 3      | 3    | 75.0%
Combined  | 15   | 6      | 4    | 71.4%
Binomial test: p = 0.039 (statistically significant vs. coin flip)

The Challenge: One of the UK's most prestigious performance TV media agencies needed to predict which ad creatives would drive stronger visit response rates before committing client media budgets.

The Solution: The agency used Videoquant to score TV creatives across multiple client campaigns spanning 10-second and 30-second ad formats. Predictions were then compared head-to-head against actual visit response rates from live media.

The Result: Videoquant correctly predicted the higher-performing creative in 71.4% of head-to-head matchups (p = 0.039), with a positive linear relationship between VQ scores and visit response rates after controlling for duration. This is the first validation of Videoquant's predictive accuracy on UK campaigns — and the results match the 60–70% win rate pattern seen consistently across US advertisers, confirming that the model generalizes across markets.
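The reported significance is easy to verify. A sketch of the one-sided exact binomial test on the 21 decisive matchups (ties excluded), using the numbers from the table above:

```python
from math import comb

# Reproduce the reported binomial test: 15 wins in 21 decisive matchups
# (ties excluded), against a 50/50 coin-flip null hypothesis.
wins, decisive = 15, 21
p_one_sided = sum(comb(decisive, k) for k in range(wins, decisive + 1)) / 2**decisive
print(f"win rate = {wins/decisive:.1%}, p = {p_one_sided:.3f}")  # -> 71.4%, p = 0.039
```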

Case Study #5
Paid SEM: $3–4M Impact
B2C Service Company
$3–4M
Total Impact
Key Results
  • $2.5–3M incremental impact from shifting spend to top 20% Videoquant scores
  • ~$350–400K saved by reducing bottom 20% of ads
  • ~40% CTR lift on top 20% vs. rest (~19% vs. ~14%)
  • Conservative assumptions: no time savings, no new creative factored in
Performance Metrics
Top-K Lift (Top 20% vs. Rest)
Top 20% Mean CTR: ~19%
Rest Mean CTR: ~14%
Lift: ~40%

Note: This is an underestimate, since it only reallocates spend across existing ads. Results show Videoquant scores don't just correlate with performance; they materially improve ROI when used for optimization.

The Challenge: A B2C service company needed to optimize their paid SEM performance across hundreds of ad creatives to improve ROI.

The Solution: The company used Videoquant to score their SEM ads and identified the top 20% and bottom 20% performers. They shifted spend to high-scoring ads and reduced spend on low-scoring ones.

The Result: Focusing on top 20% ads generated $2.5–3M in incremental value, while cutting bottom 20% saved $350–400K. Combined impact of $3–4M with conservative assumptions — demonstrating that Videoquant scores materially improve ROI when used for optimization.

Case Study #6
Influencer's Video Channel: #1 Video for Over 1 Year
1.4M Subscriber YouTube Channel
#1
Most-Viewed Video
Key Results
  • Videoquant green-lit a concept; the channel built it
  • Resulting video became the channel's #1 most-viewed video for over a year, out of 360 videos
  • Outperformed all other content on a 1.4 million subscriber YouTube channel
  • Demonstrates Videoquant's ability to predict winning concepts before production
Channel Overview
Subscribers: 1.4M
Total Videos: 360
Platform: YouTube
Performance: #1 Video
#1 most-viewed for over 1 year

The Challenge: A YouTube creator with 1.4 million subscribers needed to identify which video concept would resonate most with their audience before investing time and resources into production.

The Solution: The creator used Videoquant to evaluate video concepts. Videoquant identified a high-performing concept, which the channel then produced and published.

The Result: The video based on Videoquant's concept became the #1 most-viewed video on the channel for over a year, outperforming every other video in the 360-video catalog. This demonstrates Videoquant's ability to predict winning concepts before production, helping creators maximize their content investment.

Case Study #7
Cross-Channel Validation: 13 of 16 Predictions Correct
App Store + Paid Social
81%
Win Rate Across Channels
Key Results
  • App Store: Correctly predicted the directional winner in all 6 controlled tests on Apple App Store carousel ads
  • Paid Social: Correctly predicted 7 out of 10 head-to-head matchups on ad copy and creative
  • Combined: 13 of 16 predictions correct (81%) across two distinct channels and formats
  • Clients independently confirmed ratings consistently align with actual performance
Results by Channel
Channel     | Tests | Correct | Win Rate
App Store   | 6     | 6       | 100%
Paid Social | 10    | 7       | 70%
Combined    | 16    | 13      | 81%
Client Testimonial

"The VQ scores correctly predicted directionality in all three experiments: higher VQ consistently aligned with higher conversion rates. We can now use VQ more confidently as a signal-accurate predictor for static assets like App Store screenshots and Apple Ads."

Why This Matters: Individual channel results tell part of the story. Combined, they show the model generalizes: 6 for 6 on static App Store creatives and 7 for 10 on paid social ads — across different formats, audiences, and success metrics. That's 13 of 16 predictions correct (81%) without any channel-specific tuning.
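The case study does not report a pooled significance figure for the combined result; as an illustrative sanity check only, an exact one-sided binomial test on 13 correct calls out of 16 against a coin-flip null can be sketched as:

```python
from math import comb

# Illustrative sanity check (not reported in the case study): exact one-sided
# binomial probability of 13 or more correct calls in 16 under a coin-flip null.
correct, tests = 13, 16
p_one_sided = sum(comb(tests, k) for k in range(correct, tests + 1)) / 2**tests
print(f"win rate = {correct/tests:.0%}, p = {p_one_sided:.3f}")  # -> 81%, p = 0.011
```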

App Store: A brand tested 6 Apple App Store carousel ad options using Videoquant. The model correctly predicted the directional winner in all 6 controlled tests, allowing the brand to confidently launch with the highest-scoring creatives.

Paid Social: A separate team used Videoquant to compare ad copy variants head-to-head, rate video and static assets, and benchmark against competitors. Across 10 paid social matchups, Videoquant correctly predicted 7 winners. The client independently confirmed that ratings consistently align with which assets perform strongest in market.

💡
Did You Know?

Videoquant's founder has one of the most viral posts ever on LinkedIn — over 5 million views. He used Videoquant to make it… with no LinkedIn data in the model.

The same prediction engine that optimizes Super Bowl ads works on any content format, before media dollars are committed.

Ready to see similar results?

We'll score your ads and show you what our model would have predicted. No data needed. No cost.

Request a Proof of Concept

Protected by U.S. Patent No. 12,020,279

videoquant.ai
Predict Creative Performance