When I think about the biggest challenges and greatest opportunities in market research, it always comes down to the quality and availability of data. What often goes undiscussed is that only a small portion of the research data we collect can be used as-is; nearly all of the data in market research must be transformed before we can even start extracting value from it. Data is messy, unruly, siloed, and often not easily accessible at the volume we need.
Video data is the gold standard
In data science, a significant amount of time and energy is spent on what’s called data wrangling. Data wrangling is one of many steps data scientists take to unlock true value from their datasets. Market researchers have a similar, though less formalized and less efficient, data wrangling process for video analysis. In the qualitative research world, we amass large volumes of video data and manually re-watch it, applying heuristic rules based on job experience and known business objectives to find key moments.
Video data is our most valuable asset. Video has become commoditized for good reason: it fires on many sensory modalities simultaneously, making it a highly effective storytelling and mnemonic device. It’s easier and cheaper to collect focus group or IDI videos than ever before, and the net outcome of this commoditization is that market researchers are swimming in video data.
Just put it in the data lake
On average, qualitative researchers collect 18 hours of video footage per project. That’s the equivalent of a season of your favorite TV series, and many are doing this multiple times a year. Despite the groundswell of adoption for video, we as an industry surprisingly have not advanced our video analysis process in a way that allows us to make sense of all the video we collect. The process for video analysis today is manual, time-consuming, and costly. Simply put, it is not sustainable.
1:2 ratio: For every hour of research video we collect, it takes a human 2 hours to analyze it.
$300-800/hr: The cost of analyzing that video runs from $300 to upwards of $800 an hour!
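Combining these figures with the 18 hours of footage an average project collects gives a sense of the per-project bill. A quick back-of-the-envelope calculation, using only the numbers above:

```python
# Back-of-the-envelope cost of manual video analysis, using the
# figures cited above: 18 hours of footage per project, a 1:2
# analysis ratio, and $300-800 per analysis hour.
footage_hours = 18
analysis_ratio = 2              # analysis hours per hour of footage
rate_low, rate_high = 300, 800  # dollars per analysis hour

analysis_hours = footage_hours * analysis_ratio
cost_low = analysis_hours * rate_low
cost_high = analysis_hours * rate_high

print(f"{analysis_hours} analysis hours per project")
print(f"${cost_low:,} to ${cost_high:,} per project")
# 36 analysis hours, or roughly $10,800 to $28,800 per project
```

That is a full work week of analyst time, and tens of thousands of dollars, for a single project.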
This 1:2 ratio was the same when I was a research analyst at Avon over 10 years ago! Back then my job was simple: take ethnographic video data and find the most relevant video moments – the highlights. I learned a new software program called Avid, and over time I became quite good at identifying and building stories with the ethnographic data. Yet my video editing wizardry wasn’t enough to keep pace with the new research data coming in. I had reached my limit and needed a way to augment my video analysis process.
Fast forward 10 years and the qualitative researchers I speak with are still stuck in the same manual, costly, and time-consuming process for video analysis.
Connecting disciplines and experience
My video analysis experience at Avon led me to spearhead FocusVision’s first machine learning project, aimed at automatically identifying key moments in video. Over the last year, my team has developed a new technology that reduces the time and cost of video analysis by 80%. We have built a deep learning model that reads video transcripts and automatically identifies where a moment is likely to provide insights.
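The actual Highlight Model is a proprietary deep learning system, but the overall shape of the pipeline is straightforward: score each transcript segment for "highlight-ness" and surface the segments above a threshold. The sketch below illustrates that shape with a toy keyword heuristic standing in for the model; the cue words, sample transcript, and threshold are all invented for illustration.

```python
# Toy illustration of transcript-based highlight scoring.
# NOTE: a simple keyword heuristic stands in here for the real
# deep learning model; all cue words, segments, and the threshold
# are made-up examples, not FocusVision's actual method.

HIGHLIGHT_CUES = {"love", "hate", "frustrating", "surprised", "always", "never"}

def score_segment(text: str) -> float:
    """Fraction of words matching a highlight cue (stand-in for a model score)."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in HIGHLIGHT_CUES)
    return hits / len(words)

def find_highlights(segments, threshold=0.1):
    """Return (index, score, text) for segments likely to contain a key moment."""
    return [(i, score_segment(s), s)
            for i, s in enumerate(segments)
            if score_segment(s) >= threshold]

transcript = [
    "So tell me about your morning routine.",
    "Honestly I hate the packaging, it always spills everywhere.",
    "It was fine I guess.",
]

for i, score, text in find_highlights(transcript):
    print(f"segment {i} (score {score:.2f}): {text}")
```

In the production system, a learned model replaces `score_segment`, which is where the 80% reduction in analysis time comes from: the analyst reviews a ranked shortlist of moments instead of re-watching every hour of footage.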
Our text-based model necessitated a deep dive into the language of qualitative research. The data wrangling step took months to reach a point where we could start asking questions of the data and testing hypotheses. We analyzed thousands of anonymized transcripts with one goal: quantifying highlights in research video.
We’ve learned a lot over the last year, and I consider myself extremely fortunate to have gotten a glimpse at the DNA of an industry. Perhaps one of the most humbling parts of the experience has been discovering that our perceived differences across industries and across methodologies are far fewer than one would think. When it comes to qualitative research, we have far more in common than we have differences.
Stay tuned for Part Two where I’ll share some of the most surprising learnings over the past year about the qualitative industry at large and the data science learnings gathered while building an automated video highlight extraction tool.
About the Author: Mike Kuehne is the head of the Innovation and Data Science group at FocusVision and is responsible for its research and development program. He supervises a team of data scientists who focus on applied machine learning solutions that address the most common market research pain points today. Mike led the invention of the Highlight Model, a patent-pending deep learning model that automatically identifies key moments in video.
Prior to FocusVision, Mike built, led, and participated in R&D programs in the biotech and consumer goods industries. He led the product research group for an advanced biotechnology company specializing in novel applications for taste modulation. He also employed ground-breaking research approaches to guide innovations through the new product development process for a large CPG company.