Interrogating Data to Find Trends with Malcolm Gladwell

By: Tom Ewing
Whether Malcolm Gladwell realises it or not, it’s hard to think of a more influential figure for the 21st century insight business. At conferences over the last fifteen years, the number of presentations enthusiastically embracing ideas from “The Tipping Point” or “Blink” has been matched only by the number angrily attempting to debunk them. So, the full house for his keynote on ‘Interrogating Data’ was no surprise.
The data Gladwell took aim at wasn’t strictly the stuff generated by market researchers. He set his sights on data that acquires a level of public credibility and a life of its own: college and hospital rankings, SAT scores, and so on. It’s precisely because this data is so uncritically accepted that it requires particular interrogation and scrutiny. As Gladwell puts it, “Numbers have an ideology. Numbers have dubious pasts.”
The reason data like the US News college rankings has a public life is that it's simple and clear enough to be useful. It prompts what the behavioural scientist Gerd Gigerenzer calls a “good enough” decision – one that satisfies our fast, intuitive System 1 minds without us overly engaging the more critical System 2. But look into the roots of those rankings and you find a host of questionable assumptions.
Gladwell highlighted the fact that the single biggest contributor to the rankings is academic reputation – which is provided by surveying the presidents of other colleges and asking them to rate their peers. Unsurprisingly, they don’t know much about all their peers, so to fill in the gaps they turn to the most readily available source of information… the US News college rankings. D’oh!
This kind of circularity in the data is matched only by the perverse incentives the ranking creates. Any metric that becomes currency risks this – if people see a system, they will game it – but Gladwell highlighted one particularly bad outcome. Over a third of the weighting in the US News algorithm is devoted to resources – how much colleges raise and spend. Basically, the richer a college is, the higher up the list it goes. The result? Spiralling college fees, which have outpaced inflation in every other sector of the economy. The combination of bad financial incentives and circular reputation metrics skews the rankings terribly, Gladwell says – apply different criteria, and different schools emerge as providing much better value in terms of outcomes.
Gladwell used LSAT scores to underpin his new law school rankings – topped by the University of Chicago and featuring such apparently unlikely candidates as the University of Alabama. But for his next trick, he pulled the rug out from under the LSAT too, focusing on the decision to impose a three-hour time limit, turning what ought to be a “power test” (focusing on ability) into a “speed test” (focusing on quick thinking). Why three hours? Because there always had been a three-hour limit, according to Gladwell.
This, for me, was the weakest section of the talk. It’s disingenuous to pretend there are no reasons for imposing time pressure on students whose fitness for a pressured career you’re assessing. They may not be good reasons, and a timed exam may not be the best way of measuring knowledge and response to pressure simultaneously, but to imply the limit is simply arbitrary is misleading.
The talk was on steadier ground in the third section, with Gladwell fired up about his “geekiest” charts yet. This was about predicting and preventing drop-out rates in STEM subjects, a topic dear to Gladwell’s heart (and a problem generally acknowledged by American tech leaders). What predicts drop-out rates? The answer turned out not to be ‘global’ ability (as measured by SAT scores), but ‘local’ ability (as measured by class ranking). Whether you’re at Harvard, MIT, or a school well outside the top 30, if you’re in the top third of your class you are far more likely to complete a STEM degree than if you’re in the bottom third. Even average students – in the middle third – find themselves “crushed” by the sense they’re well behind their higher-achieving peers, and act accordingly.
Gladwell called this the “big fish, small pond” effect – if you’re a student with a 600 SAT score, you could make it into Harvard, but you have a much higher chance of escaping the class-ranking effect and completing your degree if you choose a school where you will excel. A more obvious conclusion might be that publishing class rankings at all is a fundamental mistake, as it kills the enthusiasm of fully two-thirds of students, but the ideology of competition itself wasn’t on Gladwell’s target list, at least in this talk.
Like all of Gladwell’s work, this was a combination of great storytelling with data and conclusions that will spark plenty of controversy. How should we apply these ideas in the insight industry? As researchers, we’re constantly running up against embedded assumptions and legacy data – “purchase intent” springs to mind – that endure simply because it’s ‘always been done that way’. We also have a bunch of newer, seductive metrics which appeal because they are incredibly simple, like Net Promoter Score, but which conceal a lot of hidden assumptions. Gladwell’s advice to interrogate the ideology and history of the measures we use is important: it’s what we should already have been doing.