Real results from brands using Videoquant to predict creative performance before spend. These results are not cherry-picked; they are representative of what we see across clients.
The Challenge: A major Super Bowl advertiser was struggling with TV ad performance and needed to improve acquisition efficiency without increasing media spend.
The Solution: Videoquant was used to test multiple TV ad variants before airing. The creative with the highest Videoquant score was selected and run against the control creative in a controlled experiment.
The Result: The Videoquant-informed creative drove 3.6x more acquisitions using the same media spend and flighting schedule, cutting TV CPA by over 70%. On a single $300K+ test flight, the brand received the equivalent of $1.08M+ in acquisition value — a $780K+ efficiency gain from selecting the right creative before spend.
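The efficiency figures above follow directly from the 3.6x multiplier. A minimal sketch of the arithmetic (the spend and multiplier are from the case study; the baseline acquisition count is an illustrative assumption):

```python
# Efficiency math behind the Super Bowl case study. The $300K spend and
# 3.6x multiplier are from the case study; baseline_acqs is hypothetical.

spend = 300_000          # test flight media spend (USD)
lift = 3.6               # acquisition multiplier vs. control creative

baseline_acqs = 1_000    # hypothetical control acquisitions (assumption)
vq_acqs = baseline_acqs * lift

baseline_cpa = spend / baseline_acqs       # cost per acquisition, control
vq_cpa = spend / vq_acqs                   # cost per acquisition, VQ pick
cpa_reduction = 1 - vq_cpa / baseline_cpa  # ~72%, i.e. "over 70%"

equivalent_value = spend * lift            # spend needed to match via control
efficiency_gain = equivalent_value - spend

print(f"CPA reduction: {cpa_reduction:.0%}")           # 72%
print(f"Equivalent value: ${equivalent_value:,.0f}")   # $1,080,000
print(f"Efficiency gain: ${efficiency_gain:,.0f}")     # $780,000
```

Note that the CPA reduction depends only on the multiplier (1 − 1/3.6 ≈ 72%), not on the assumed baseline volume.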
This success led to a multi-year renewal.
The Challenge: A Fortune 500 company with $2B in annual marketing spend needed to optimize their TV advertising strategy across 22 different ad creatives.
The Solution: The client conducted extensive blind testing of 22 TV ads using Videoquant predictions. Ads were categorized as "high prediction" or "low prediction" based on Videoquant scores.
The Result: Videoquant high predictions delivered 4.4x more brand lift than low predictions on the same media spend, with statistically significant results. This translated to an estimated $240K in savings per $1M in media spend.
Why This Matters: Most ad testing claims are about picking a winner from two options. This is different — Videoquant predicted the rank order of 20 ads against the client's own internal expectations. A Spearman correlation of 0.64 at p = 0.002 means the scoring model reliably distinguishes not just which ad is best, but the full gradient from strongest to weakest.
For Statistical Thinkers: A 0.64 Spearman ρ across 20 items means the model captures meaningful signal about relative creative quality. This isn't a coin flip on binary outcomes — it's a rank-order prediction across a real portfolio of TV creative, validated against the client's own ground truth.
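For readers who want the statistic itself: Spearman's ρ is simply the Pearson correlation applied to ranks. A self-contained illustration in pure Python (this is a textbook implementation, not Videoquant's code, and the client's scores are not shown here):

```python
# Spearman's rank correlation from scratch: rank both series
# (averaging ranks over ties), then take the Pearson correlation
# of the ranks. Illustrative implementation only.

def ranks(xs):
    """Ranks starting at 1, with tied values sharing the average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    out = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # average of positions i..j, 1-based
        for k in range(i, j + 1):
            out[order[k]] = avg
        i = j + 1
    return out

def spearman(x, y):
    """Pearson correlation of the rank vectors of x and y."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

A perfectly monotone ranking gives ρ = 1.0 and a full reversal gives −1.0, so a ρ of 0.64 across 20 ads sits well between chance (0) and a perfect ordering.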
The Challenge: One of the UK's most prestigious performance TV media agencies needed to predict which ad creatives would drive stronger visit response rates before committing client media budgets.
The Solution: The agency used Videoquant to score TV creatives across multiple client campaigns spanning 10-second and 30-second ad formats. Predictions were then compared head-to-head against actual visit response rates from live media.
The Result: Videoquant correctly predicted the higher-performing creative in 71.4% of head-to-head matchups (p = 0.039), with a positive linear relationship between VQ scores and visit response rates after controlling for duration. This is the first validation of Videoquant's predictive accuracy on UK campaigns — and the results match the 60–70% win rate pattern seen consistently across US advertisers, confirming that the model generalizes across markets.
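The quoted p-value is what a one-sided exact sign test produces for a head-to-head win record. The matchup count is not stated above, but 15 wins out of 21 (71.4%) is the pair consistent with both reported figures; a sketch under that assumption:

```python
from math import comb

def sign_test_p(k, n):
    """One-sided exact p-value: chance of >= k wins in n 50/50 matchups."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

# 15 of 21 head-to-head wins, a 71.4% win rate. (The matchup count is our
# assumption; it is the k/n pair that reproduces the reported statistics.)
k, n = 15, 21
print(f"win rate = {k / n:.1%}")                  # 71.4%
print(f"one-sided p = {sign_test_p(k, n):.3f}")   # 0.039
```

In other words, a 71.4% win rate at this sample size would arise by chance fewer than 4 times in 100.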
Note: This estimate is conservative, since it is based only on re-ranking existing ads. The results show that Videoquant scores don't just correlate with performance; they materially improve ROI when used for optimization.
The Challenge: A B2C service company needed to optimize their paid SEM performance across hundreds of ad creatives to improve ROI.
The Solution: The company used Videoquant to score their SEM ads and identified the top 20% and bottom 20% performers. They shifted spend to high-scoring ads and reduced spend on low-scoring ones.
The Result: Focusing on top 20% ads generated $2.5–3M in incremental value, while cutting bottom 20% saved $350–400K. Combined impact of $3–4M with conservative assumptions — demonstrating that Videoquant scores materially improve ROI when used for optimization.
The Challenge: A YouTube creator with 1.4 million subscribers needed to identify which video concept would resonate most with their audience before investing time and resources into production.
The Solution: The creator used Videoquant to evaluate video concepts. Videoquant identified a high-performing concept, which the channel then produced and published.
The Result: The video based on Videoquant's concept became the #1 most-viewed video on the channel for over a year, outperforming all 360 other videos. This demonstrates Videoquant's ability to predict winning concepts before production, helping creators maximize their content investment.
"The VQ scores correctly predicted directionality in all three experiments: higher VQ consistently aligned with higher conversion rates. We can now use VQ more confidently as a signal-accurate predictor for static assets like App Store screenshots and Apple Ads."
Why This Matters: Individual channel results tell part of the story. Combined, they show the model generalizes: 6 for 6 on static App Store creatives and 7 for 10 on paid social ads — across different formats, audiences, and success metrics. That's 13 of 16 predictions correct (81%) without any channel-specific tuning.
App Store: A brand tested 6 Apple App Store carousel ad options using Videoquant. The model correctly predicted the directional winner in all 6 controlled tests, allowing the brand to confidently launch with the highest-scoring creatives.
Paid Social: A separate team used Videoquant to compare ad copy variants head-to-head, rate video and static assets, and benchmark against competitors. Across 10 paid social matchups, Videoquant correctly predicted 7 winners. The client independently confirmed that ratings consistently align with which assets perform strongest in market.
We'll score your ads and show you what our model would have predicted. No data needed. No cost.
Request a Proof of Concept
Protected by U.S. Patent No. 12,020,279