Videoquant was born out of my frustration with the current approaches we use to develop ideas and content for TV commercials, paid video, and social video.
While directly overseeing data science teams optimizing hundreds of millions of dollars per year in video across 10 countries, and guiding brands spending over $1 billion per year in video, I felt like we all had top-tier jockeys racing three-legged horses. We’d spend countless hours tinkering and tweaking where our video would appear and, in the case of paid video, at what cost. But little data science sat behind the video creative itself, aside from some surveys and panels that yielded questionable results at best. And that’s a shame, because the video creative is the proverbial horse in the race: it’s what people are actually seeing. As I later came to quantify, about 80% of the outcome of a video effort (whether a TV commercial will generate ROAS or a social video will receive attention) is tied to the video content.
When ideas for TV ads and social video aren’t driven by pure expert speculation, surveys, panel studies, and focus groups form the triad of tools that top brands use today to inform video (myself previously included). I will write separate articles on what I see happening with focus groups. Today, I’d like to focus on surveys and their limited value in predicting how video concepts will perform.
Marketers often rely on surveys to gauge consumer preferences and predict the success of their video ads. Some more advanced tools recently entering the market take surveys a step further and use them to measure reactions to different components of a produced video. However, as I experienced over time, using surveys to forecast the effectiveness of video ads has severe limitations. These limitations are generally rooted in a wide spectrum of biases inherent to surveying in general: social desirability bias, hypothetical bias, self-reporting errors, nonresponse bias, question order effects, acquiescence bias, and respondent fatigue.
Before going into the foundational statistical challenges of using surveys to predict how video content is likely to perform, let’s first look at some massive failures that have resulted from their use. Examples aren’t hard to come by.
The Coca-Cola Fiasco (1985)
In 1985, Coca-Cola decided to change its formula and introduced “New Coke” as a response to the growing market share of its competitor, Pepsi. The decision was based on extensive market research, including blind taste tests and surveys involving nearly 200,000 participants. The survey results indicated that consumers preferred the taste of the new formula over both the original Coca-Cola and Pepsi.
However, when New Coke was launched, it faced a massive public backlash. Loyal Coca-Cola drinkers were unhappy with the change and demanded the return of the original formula. Within just three months, Coca-Cola reintroduced the classic formula as “Coca-Cola Classic.” The New Coke fiasco demonstrated that survey data, which seemed to indicate a preference for the new formula, failed to account for consumers’ emotional attachment to the original product and brand. In this case, survey data was unable to accurately predict consumer behavior and preferences.
The Ford Edsel Debacle (1957)
Another example of survey failure that comes to mind is the introduction of the Ford Edsel. In the 1950s, the Ford Motor Company conducted extensive market research to develop a new car that would appeal to a growing consumer segment. It used surveys and focus groups to gather information on potential features, design, and pricing. Based on the research findings, Ford introduced the Edsel in 1957 as a new, innovative, and stylish car.
Unfortunately, the Edsel turned out to be a massive commercial failure. The car’s design was considered unattractive, and its price was too high for the target audience. Additionally, the Edsel was launched during an economic recession, which further hampered its sales. Ford ended up discontinuing the Edsel in 1959, after spending millions on its development and marketing.
The iPhone Near-Miss (2007)
Surveys have also almost killed what later became major successes. Before the launch of the original iPhone in 2007, many industry experts and analysts were skeptical about its potential success. Surveys conducted at the time suggested that consumers were content with their existing mobile phones, and the majority did not see a need for a new, expensive device that combined phone and internet capabilities.
However, when Apple introduced the iPhone, it took the world by storm. The iPhone revolutionized the smartphone industry and became one of the most successful consumer products of all time. The surveys conducted before the iPhone’s launch failed to predict the massive demand for the product, as they did not account for the innovative user experience, design, and marketing efforts that contributed to its success. This example demonstrates that surveys can sometimes underestimate consumer preferences for groundbreaking products that challenge conventional expectations.
Fifty Shades of Grey (2011)
In 2011, the erotic romance novel “Fifty Shades of Grey” by E.L. James was released. Before its publication, surveys and focus groups among readers suggested that the book’s explicit content and controversial themes might be off-putting to many potential readers. The general consensus was that the novel would likely not achieve mainstream success.
However, “Fifty Shades of Grey” went on to become a global phenomenon, selling over 125 million copies worldwide and spawning a series of sequels and a successful film franchise. The surveys and focus groups failed to predict the novel’s widespread appeal, as they underestimated readers’ curiosity and interest in exploring unconventional themes. This anecdote highlights how surveys can sometimes fail to capture the true preferences and behaviors of consumers, particularly when dealing with unconventional or controversial subjects.
Limitations of Surveys When Predicting Video
Why is it that surveys often fail so spectacularly? Below is an abbreviated list of this technique’s limitations, especially as applied to video; a short simulation after the list shows how quickly a few of these biases can compound:
- Social Desirability Bias: Survey respondents may provide answers that they believe will make them appear favorable or conform to societal norms (Furnham, 1986).
- Hypothetical Bias: People’s stated preferences in hypothetical contexts may not accurately reflect their real-world behavior (Harrison & List, 2004).
- Self-Reporting Errors: Respondents may have difficulty accurately recalling their past experiences or interpreting survey questions, leading to incorrect or biased responses.
- Nonresponse Bias: Survey participants may not be representative of the overall population, as certain groups may be more or less likely to respond to surveys (Groves, 2006).
- Question Order Effects: The order in which survey questions are presented can influence respondents’ answers, potentially leading to biased results (Krosnick & Alwin, 1987).
- Acquiescence Bias: Respondents may tend to agree with survey statements regardless of their true opinions, leading to skewed results (Couch & Keniston, 1960).
- Respondent Fatigue: Longer surveys can cause respondents to lose interest or rush through questions, resulting in less accurate or thoughtful responses (Galesic & Bosnjak, 2009).
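To make the compounding effect concrete, here is a minimal, purely illustrative Python sketch. Every rate in it is an invented assumption (a 10% true purchase-intent rate, differential response rates, a modest social-desirability effect), chosen only to show how just two of these biases, nonresponse bias and social desirability bias, can inflate a survey estimate well beyond the true figure:

```python
import random

random.seed(42)

# Hypothetical simulation: all rates below are illustrative
# assumptions, not measured values.
POPULATION = 100_000
TRUE_INTENT_RATE = 0.10  # assume 10% would actually buy after seeing the ad

def simulate_survey(population: int) -> float:
    """Return the survey-estimated intent rate under two biases."""
    responses = []
    for _ in range(population):
        would_buy = random.random() < TRUE_INTENT_RATE

        # Nonresponse bias: assume enthusiasts answer surveys more often.
        response_prob = 0.6 if would_buy else 0.3
        if random.random() > response_prob:
            continue  # this person never appears in the sample

        # Social desirability bias: assume 15% of true "no" respondents
        # say yes anyway to appear favorable.
        stated_yes = would_buy or random.random() < 0.15
        responses.append(stated_yes)

    return sum(responses) / len(responses)

estimate = simulate_survey(POPULATION)
print(f"True intent rate:      {TRUE_INTENT_RATE:.1%}")
print(f"Survey-estimated rate: {estimate:.1%}")
# Typical output: the survey estimate lands near 30%, roughly three
# times the true rate, even though each assumption is individually mild.
```

Each individual assumption in this sketch is modest on its own; the point is that small, realistic biases multiply, which is exactly why survey-based predictions of video performance can drift so far from observed behavior.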
Surveys have their place in market research, but they are not a particularly accurate method for predicting the effectiveness of video ads. By being aware of the limitations of surveys and considering alternative approaches, marketers can make more informed decisions and develop more successful video advertising campaigns.
Sources
- Furnham, A. (1986). Response bias, social desirability and dissimulation. Personality and Individual Differences, 7(3), 385-400.
- Harrison, G. W., & List, J. A. (2004). Field experiments. Journal of Economic Literature, 42(4), 1009-1055.
- Groves, R. M. (2006). Nonresponse rates and nonresponse bias in household surveys. Public Opinion Quarterly, 70(5), 646-675.
- Krosnick, J. A., & Alwin, D. F. (1987). An evaluation of a cognitive theory of response-order effects in survey measurement. Public Opinion Quarterly, 51(2), 201-219.
- Couch, A., & Keniston, K. (1960). Yeasayers and naysayers: Agreeing response set as a personality variable. Journal of Abnormal and Social Psychology, 60(2), 151-174.
- Galesic, M., & Bosnjak, M. (2009). Effects of questionnaire length on participation and indicators of response quality in a web survey. Public Opinion Quarterly, 73(2), 349-360.
- Bradburn, N. M., Sudman, S., & Wansink, B. (2004). Asking questions: The definitive guide to questionnaire design — for market research, political polls, and social and health questionnaires. San Francisco, CA: Jossey-Bass.
- Tourangeau, R., Rips, L. J., & Rasinski, K. (2000). The psychology of survey response. Cambridge, UK: Cambridge University Press.
- Schwarz, N., & Strack, F. (1999). Reports of subjective well-being: Judgmental processes and their methodological implications. In D. Kahneman, E. Diener, & N. Schwarz (Eds.), Well-being: The foundations of hedonic psychology (pp. 61-84). New York, NY: Russell Sage Foundation.
- Podsakoff, P. M., MacKenzie, S. B., Lee, J. Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88(5), 879-903.
- Mullinix, K. J., Leeper, T. J., Druckman, J. N., & Freese, J. (2015). The generalizability of survey experiments. Journal of Experimental Political Science, 2(2), 109-138.
- Zaller, J. (1992). The nature and origins of mass opinion. Cambridge, UK: Cambridge University Press.
- Gosling, S. D., Vazire, S., Srivastava, S., & John, O. P. (2004). Should we trust web-based studies? A comparative analysis of six preconceptions about internet questionnaires. American Psychologist, 59(2), 93-104.