Does Crowdsourcing Need “Rethinking”?

Posted by Eugene Ivanov on 07 November 2017

An article in the latest issue of Harvard Business Review describes a product development study by Reto Hofstetter, Suleiman Aryobsei and Andreas Herrmann (Journal of Product Innovation Management, forthcoming). What caught my attention was the article’s title: “Rethinking Crowdsourcing.”

Why does crowdsourcing need “rethinking”?

Hofstetter and co-authors reviewed 87 crowdsourcing projects run by 18 companies on Atizo360°, a Swiss-based platform. The projects in question appear to have been typical “idea generation campaigns” that asked consumers to come up with new product development ideas. For example, in one of the campaigns, consumers were asked to propose new flavors for drinks manufactured by a Swiss soft drink company.

Each campaign analyzed by the Hofstetter team generated 358 responses on average. Because evaluating such a considerable number of ideas is time- and resource-consuming, managers at the companies took advantage of the Atizo platform’s functionality allowing participants to “like” each other’s submissions. So now, instead of sorting through all the ideas, managers could focus only on the most “likable”—at least as a first screen.

The Hofstetter team identified a serious flaw in this process: the apparent value of some ideas was inflated by reciprocal “likes” from connected contributors who propped up each other’s contributions. When the submitted proposals were assessed by independent evaluators, no correlation was found between the most “likable” ideas and those that led to successful products.
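To make the flaw concrete, here is a minimal sketch of how such reciprocal voting could be flagged. This is my own illustration, not the study’s method, and the vote log below is entirely hypothetical:

```python
# Model votes as a directed graph: (voter, author_of_liked_idea).
# A vote is suspect if the liked author also liked the voter back.
from collections import defaultdict

# Hypothetical vote log
votes = [
    ("alice", "bob"), ("bob", "alice"),    # reciprocal pair
    ("carol", "dave"), ("dave", "carol"),  # another reciprocal pair
    ("erin", "dave"),                      # a genuinely independent vote
]

liked = defaultdict(set)
for voter, author in votes:
    liked[voter].add(author)

# A vote is "reciprocal" if the liked author also liked the voter.
reciprocal = [(v, a) for v, a in votes if v in liked[a]]
independent = [(v, a) for v, a in votes if v not in liked[a]]

print("reciprocal votes:", reciprocal)    # inflate apparent idea value
print("independent votes:", independent)  # closer to a genuine signal
```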

Hofstetter and co-authors concluded that “[o]nline consumer votes are unreliable indicators of actual idea quality.” The HBR article’s own verdict was even more damning: “It can be unwise to rely on the crowd.” Ouch!

To those folks who believe it’s unwise to rely on the crowd—or, worse, that “crowds are stupid”—I have a clear message: crowdsourcing doesn’t need rethinking. What needs rethinking is the way we use it. Crowdsourcing is, first and foremost, a question that you ask a crowd, and the quality of the question is the most crucial factor determining the quality of the answer. Ask the crowd a smart question and you have a chance to get a smart answer. Ask the crowd a stupid question and the answer will almost certainly be stupid.

I have two specific comments on the HBR article.

  1. Crowdsourcing “ideas” is a bad idea.

Characteristically, all criticism of crowdsourcing—whether blaming it for the downfall of Quirky, for choosing the wrong name for a research ship or for the low-quality product ideas in the study above—is targeted at a particular “idea generation” version of it, which I call the bottom-up model of crowdsourcing. As I argued very recently, the bottom-up model has multiple flaws. One of them is that the burden of evaluating submitted ideas usually falls on business units that already have a full load of their own research projects. Faced with the need to find resources for the “newcomers,” managers begin to cut corners and push the responsibility for evaluating submitted proposals back to the crowd—exactly as described above. Having only a vague (at best) understanding of what the managers really need, the crowd chooses the most “likable”—and usually very conventional or even trivial—ideas. It is hardly surprising, therefore, that the efficiency of “idea generation” campaigns is extremely low: barely 1-2% of the submitted ideas lead to eventual implementation.

There is a plausible alternative to the bottom-up approach: the top-down model. In the top-down model of crowdsourcing, the focus is on problems. These problems are identified and formulated by managers, who then ask internal or external crowds to find solutions to them. This approach is remarkably efficient. For example, InnoCentive, a crowdsourcing platform utilizing the top-down model, boasts a success rate of up to 85% for its projects.

I’m not saying that the bottom-up model has no right to exist. In innovation-mature organizations it can be successful, and I covered such a success story in the past. But for organizations that are at the very beginning of their innovation journey—and let’s face it, we’re talking about most organizations—the top-down model must be the model of choice.

  2. Voting for “ideas” is a bad idea, too.

Many folks seem to believe that they do crowdsourcing when they join hundreds or even thousands of other folks online and start exchanging “ideas” and opinions about them. (“Crowdsourcing on Facebook” is becoming a cliché.) Unfortunately, these folks confuse crowdsourcing with another problem-solving tool: brainstorming. Adding to the confusion is the fact that almost every commercially available “idea management” software package provides functionality allowing contributors to comment on each other’s ideas and vote for them.

But crowdsourcing is different from brainstorming in one important respect: it requires independence of opinions, a feature of crowdsourcing underscored by James Surowiecki in his classic book “The Wisdom of Crowds.” When you run a crowdsourcing campaign, you should make sure that the members of your crowd, whether individuals or small teams, provide their input independently of the opinions of others. It’s this aspect of crowdsourcing that delivers highly diversified, original and even unexpected solutions to the problem, as opposed to brainstorming, which almost always ends with the group reaching a consensus. That’s why I completely agree with the Hofstetter team that the number of votes is not an indicator of the quality of ideas; moreover, I believe that voting for ideas has a net negative effect on their quality.
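To see in numbers why independence matters, here is a toy simulation, entirely my own and not drawn from the article or the study: an independent crowd’s average lands near the true value, while a crowd anchored on a loud early opinion drifts toward that opinion:

```python
# Toy wisdom-of-crowds simulation. All numbers are made up.
import random
from statistics import mean

random.seed(42)
TRUTH = 100.0   # the quantity the crowd is trying to estimate
N = 1000        # crowd size

# Independent crowd: each member errs randomly around the truth.
independent = [random.gauss(TRUTH, 20) for _ in range(N)]

# Herding crowd: each member blends a private guess with a biased
# early opinion (ANCHOR), as happens when everyone sees the votes.
ANCHOR = 140.0
herding = [0.5 * random.gauss(TRUTH, 20) + 0.5 * ANCHOR for _ in range(N)]

print(f"truth:                  {TRUTH}")
print(f"independent crowd mean: {mean(independent):.1f}")  # close to 100
print(f"herding crowd mean:     {mean(herding):.1f}")      # pulled toward 140
```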

However, as I mentioned before, managers resort to voting to relieve the pain of the evaluation process. Is there anything they can do to reduce this burden while maintaining idea quality? Yes. Managers must start at the other side of the “crowdsourcing equation”: the question. Instead of asking crowds for open-ended suggestions (“Bring us something and we’ll tell you whether we like it or not”), managers must be very precise about what kind of ideas they’re looking for. For example, they can provide a list of specific (and, if appropriate, quantitative) requirements any successful idea must meet. Even more useful would be a request to conclude every submission with a point-by-point account of how the proposed idea meets every listed requirement. This may not automatically increase the quality of the ideas, but it will undoubtedly help managers weed out low-quality “noise.”
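As a rough illustration of that first-pass screen, here is a short sketch, with hypothetical requirement names and submissions, that keeps only the entries addressing every listed requirement:

```python
# Screen submissions against an explicit requirements checklist.
# Requirement names and submissions below are hypothetical.
REQUIREMENTS = {"shelf_life_months", "unit_cost_chf", "target_segment"}

submissions = [
    {"idea": "elderflower soda",
     "shelf_life_months": 9, "unit_cost_chf": 0.40, "target_segment": "adults"},
    {"idea": "glitter cola"},  # ignores every requirement
    {"idea": "alpine herb tonic",
     "shelf_life_months": 12, "target_segment": "hikers"},  # missing cost
]

def addresses_all(sub: dict) -> bool:
    """Keep only submissions that answer every listed requirement."""
    return REQUIREMENTS <= sub.keys()

shortlist = [s for s in submissions if addresses_all(s)]
print([s["idea"] for s in shortlist])  # ['elderflower soda']
```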

Reiterating my key point: crowdsourcing doesn’t need “rethinking.” It’s an extremely powerful problem-solving tool, but like any other tool, it requires knowledge and experience to be used properly. Those who know how to use it will succeed in harnessing the proverbial wisdom of crowds. Those who don’t, won’t. It’s that simple.

p.s. To subscribe to my monthly newsletter on crowdsourcing, go to http://eepurl.com/cE40az.

Image was provided by Tatiana Ivanov

Eugene Ivanov writes about crowdsourcing, open innovation and innovation in general. He blogs at Innovation Observer and tweets @eugeneivanov101.
