Is This the Party to Whom I Am Speaking?

Posted by Laurie Bredenfoerder on 15 February 2018

Allow me to ask a question that’s been nagging me for months: Given the trend toward burgeoning sample sizes – to the point where some suggest a “blurring of the lines” between Qual and Quant – how can we be sure that we’re basing our research findings on respondents who legitimately fit the project’s spec?  Or, if social-media output is to be the source of the primary research data, how certain can anyone be that bots with nefarious agendas haven’t been trolling those source sites?

With the help of new and amazing text-analytics tools, we now have the capability to sort, categorize, and analyze vast oceans of undifferentiated data quickly, at relatively low cost, and with very little human involvement.  Only after that are humans inserted into the process to analyze the output and report out against the study's business objective(s).  But if those humans are actually caught up in a case of "garbage in, garbage out," how can anyone tell?
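To make the "garbage in, garbage out" worry concrete, here is a minimal, purely illustrative sketch of the kind of automated categorization step described above. The keyword rules, sample responses, and duplicate check are invented for illustration and do not represent any particular text-analytics product; the point is simply that the categorization itself never asks who, or what, produced the text.

```python
# Minimal, hypothetical sketch of an automated text-categorization pass.
# The category keywords, sample responses, and near-duplicate check are
# invented for illustration; no real text-analytics product is modeled here.
from collections import Counter

CATEGORY_KEYWORDS = {
    "price":   {"price", "expensive", "cheap", "cost"},
    "quality": {"quality", "broke", "durable", "flimsy"},
    "service": {"service", "support", "rude", "helpful"},
}

def categorize(response: str) -> str:
    """Assign a response to whichever category shares the most keywords with it."""
    words = set(response.lower().split())
    scores = {cat: len(words & kws) for cat, kws in CATEGORY_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "uncategorized"

responses = [
    "The price is far too expensive for what you get",
    "Great quality, very durable product",
    "The price is far too expensive for what you get",  # verbatim repeat -- bot? copy-paste?
    "Support was rude and unhelpful",
]

# The pipeline happily tallies everything it is given...
print(Counter(categorize(r) for r in responses))

# ...and only a separate, explicit check even notices that some "respondents"
# may be the same source repeating itself.
duplicates = len(responses) - len(set(responses))
print(f"Verbatim duplicate responses: {duplicates}")
```

Any screening of who actually wrote those responses has to be bolted on as its own step; it is not a by-product of the analysis.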

Businesses are under mounting pressure to think and act quickly; that's hardly "new" news.  However, has that pressure caused market researchers to lose sight of the possibility that their "good enough" data may not be as "good" as they think it is?  Are American companies (as has been suggested in the context of the 2016 elections) simply not asking the "right people"?*  Or could they (per the backstory of the disastrous "Black Lives Matter" Kendall Jenner Pepsi ad,^ the 1985 rollout of "New Coke,"^^ and JC Penney's 2012 move to eliminate coupons and frequent sales from its marketing mix^^^) be relying too much on their own hunches rather than listening to their actual, verified customers?

All these questions make me undeniably nostalgic for the good old days when we used to sit down with one qualified individual (or a small group) at a time and listen to what they had to say about our client's product, service, or business issue.  The people at our tables were almost always recruited from client-provided or otherwise verified lists; had passed a screener before being recruited; and had successfully passed a re-screener in the waiting room prior to the interview.  Were all those levels of verification an overreaction?  Were they a waste of time?  Looking back, it might seem so, since we're now in a world where research can be acceptably conducted using data collected completely "in the wild."

Yes, classical qualitative market research may seem slow and is sometimes called out as expensive, but it’s also highly resistant to hacking and misrepresentation.  In a world where the “truth” is sometimes regarded as situational, wouldn’t it be wise to insist on a higher, more rigorous standard for the quality (e.g., source, relevance) of the data upon which crucial business decisions will be based?

About the Author: Laurie Bredenfoerder, MBA, PRC is a veteran independent qualitative market research consultant based in Cincinnati, Ohio.  She holds a B.S. in Journalism from Bowling Green State University, an MBA from Xavier University, and a Professional Researcher Certification from the Insights Association.  Laurie is a board member of the Insights Association’s Great Lakes Chapter, a national committee chairperson for the Qualitative Research Consultants Association, a wife, and a proud mother of a wonderful son. Reach her by email: bvalley@fuse.net.
