The 2010s were the decade of behavioral science. Airport spinner racks stocked bestsellers by the likes of Dan Ariely and Daniel Kahneman. Governments built “nudge units” to encourage positive behavior change without regulation. Implicit bias even became a hot topic on the campaign trail.
Meanwhile in the insight business, award-winning papers promised “a world without questions”. No longer would brands have to rely on what their customers claimed to think and do. Behavioral research would give marketers models for decision making that were more insightful and predictive than old-school surveys.
Hold up, though. As the decade ends, it’s worth asking – did that behavioral research revolution actually happen?
The evidence suggests not. Every year the Greenbook GRIT survey asks research buyers and suppliers which emerging methods – from social media analytics to gamification – they are actually using. Five years ago, 25% of respondents claimed to be using behavioral research models. Last year, that figure was 32%. Growth, to be sure, but not business transformation.
What has stopped behavioral research from scaling? And what barriers stand in the way of it fulfilling its potential?
To answer this question, we’re going to zoom in on in-the-moment behavioral research that’s also quantitative. It’s the area where the gap between claimed and actual behavior truly hits home.
People can sound off in surveys or social media about their attitudes to CPG brands – saying they’ll quit buying Gillette because of its controversial ‘toxic masculinity’ ad, for instance. But until that person chooses one way or another when they’re actually buying razors, those attitudes are just noise. Shopper insight is where behavioral research can and must prove its worth.
So let’s look at behavioral research through that shopper lens, and ask – what’s gone wrong? Why isn’t everybody doing this?
Like good researchers, we come to the question with a few hypotheses. The first is that the ideas might be wrong – that the basic claims of behavioral science don’t stand up.
The core idea behind modern behavioral science is that human beings don’t usually make considered, conscious decisions – instead they make rapid, automatic choices based on a set of mental shortcuts (heuristics and biases). Whether you call these rapid decisions “irrational” or not, there’s general agreement they happen.
Store owners, of course, have known about them for years. Everything from cartoon animals on packs, to fresh bread smells in the bakery, to promo offers at the end of the aisle, is designed to tap into that system of mental shortcuts we all carry around. The in-store environment was a temple to behavioral science before it even had a name. This stuff works, and every retailer knows it.
On to our second hypothesis for why behavioral research doesn’t scale – the methods we’ve been using are wrong. It’s all very well knowing that people make these quick choices in-store, but if you can’t meaningfully measure them, that knowledge is pointless.
This is an area where we’ve learned one very important lesson. If you want to understand what happens in a store, you have to get into that store. Surveys and lab experiments don’t cut it.
Take consideration sets, for instance. For most marketers, getting into consumers’ consideration sets is a major goal. But the sets revealed by awareness questions in surveys turn out to be far larger than those revealed by real behavior. In our study, people reported awareness of 10 multi-vitamin brands on average. But in store, the average shopper for vitamins only noticed 3. That’s a major difference, and marketers need to ask themselves which is more useful – claimed awareness or real world attention?
Why is there this gap? Behavior is only partly governed by what goes on in our heads – it’s also a response to our environment. And there is no environment more complex or more full of stimuli than the modern retailer – even an online store is full of offers, placements and information, all designed to influence behavior. Add the constraints of money and time pressure, and you have a situation no lab, virtual reality setting or survey in the world could properly simulate.
So we know what methods work to get useful behavioral shopper insights – observational research which gets inside the store or site and inside the purchase decision.
This is easier said than done – which brings us to our third hypothesis. Marketers have the right ideas, and they know what methods work. Maybe what’s held behavioral research back is the tools – the ways we implement those methods.
This is the core of why behavioral research hasn’t scaled. Existing ways of getting into the purchase decision fall into two traps. First, there are receipts and purchase data, which give you the what of behavior at scale but miss the why. You get vital information on choice outcomes, but no visibility into which of the myriad factors in-store or online have driven them.
Second, there are ethnography and mobile ethnography, which get right inside the decision environment but are expensive, hard to scale, rely on small base sizes and tend to set tasks for participants. They offer the why of behavior but not the what.
To fulfill the promise of behavioral science, you need tools that deliver both the what and the why at scale: natural shopping occasions with no set tasks, robust samples, and choices captured in the real environment. That combination gives brands the ability to truly understand consumer behavior at the point of purchase, backed by reliable data.
Behavioral research is the best route to understanding consumer decisions and making a brand’s presence count in the purchase environment. But it’s hit a plateau among buyers because while the ideas and methods are sound, the tools haven’t been good enough to deliver on its potential. The 2020s will see the next generation of tools and suppliers step up, delivering close-up insight at quantitative scale, and finally taking behavioral research into the commercial mainstream.
About the Author: Amishi Takalkar is a Co-founder and the architect of NAILBITER data products. She has a Masters in Marketing Research from the University of Texas at Arlington and extensive CPG research, data and analytics experience, including CPG Market Research at Pepsico, Technology at AOL and Entrepreneurship at Affinnova.