During World War II, American military personnel noticed that some parts of planes seemed to be hit more often than other parts. They analysed the bullet holes in the returning planes and set up a programme to reinforce these areas, so that the planes would be better able to withstand enemy fire.
This may seem natural enough, but it contains a fundamental error: selection bias.
Assume, for the sake of the argument, that planes got hit in all sorts of places. If the areas which formed vital parts of the machine were hit (call it part A), the aeroplane was unlikely to make it back to base; it would crash. If the bullets hit the plane in parts which were not so vital (part B), the plane was much more likely to at least make it back home.
Then, military personnel would inspect the plane and conclude “darn, this plane also got hit in part B! We’d better strengthen those places…”
Of course, the military personnel were wrong. Planes got hit in part A just as often as in part B; it’s just that the first ones never made it back home. What’s worse, strengthening part B was exactly the wrong thing to do: those parts weren’t so vital; it is part A which needed strengthening!
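The logic above can be illustrated with a small simulation. The numbers here are purely hypothetical: hits land on parts A and B equally often, but a hit to vital part A is assumed to bring the plane down far more often than a hit to non-vital part B.

```python
import random

random.seed(42)

N = 10_000           # planes flown (hypothetical figure, for illustration only)
returned_hits_A = 0  # hits to vital part A seen on planes that made it back
returned_hits_B = 0  # hits to non-vital part B seen on planes that made it back

for _ in range(N):
    hit_part = random.choice(["A", "B"])  # hits land on A and B equally often
    # Assumed survival rates: a hit to vital part A is usually fatal,
    # a hit to non-vital part B usually is not.
    survives = random.random() < (0.2 if hit_part == "A" else 0.9)
    if survives:
        if hit_part == "A":
            returned_hits_A += 1
        else:
            returned_hits_B += 1

# Among the planes we actually get to inspect, part-B holes dominate,
# even though the hits were spread evenly: the part-A victims never returned.
print(returned_hits_A, returned_hits_B)
```

Inspecting only the returning planes, one would see several times more holes in part B than in part A and draw exactly the wrong conclusion about where reinforcement is needed.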
This is why we call it “selection bias”; we only see a selection of the outcomes, and therefore draw false conclusions. And the world of business is full of it.
Consider, for example, the popular notion that innovation projects require diverse, cross-functional teams. This notion exists because when we analyse some very path-breaking innovation projects, they were often staffed by such teams. However, it has been suggested* that diverse, cross-functional teams also created some of the biggest failures of all, and such failures never resulted in any products. Therefore, if we examine only the projects which actually resulted in successful innovations, the diverse, cross-functional teams seem to have done much better. Yet, on average, the homogeneous teams, although not responsible for the few really big inventions, might have done better, reliably producing a good set of results.
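This variance argument can also be sketched numerically. The outcome model below is an assumption made up for illustration: homogeneous teams score slightly higher on average, while diverse teams are far more variable, producing both the big hits and the big flops. We then look only at projects above a visibility threshold, mimicking the fact that only successes become famous.

```python
import random
import statistics

random.seed(0)

# Hypothetical outcome model (all parameters are assumptions):
# homogeneous teams have a higher mean, diverse teams a much higher variance.
diverse = [random.gauss(50, 40) for _ in range(10_000)]
homogeneous = [random.gauss(55, 10) for _ in range(10_000)]

THRESHOLD = 100  # only projects above this score become visible "breakthroughs"

big_hits_diverse = sum(x > THRESHOLD for x in diverse)
big_hits_homog = sum(x > THRESHOLD for x in homogeneous)

# On average, the homogeneous teams do better...
print(statistics.mean(homogeneous) > statistics.mean(diverse))  # True
# ...yet the diverse teams account for nearly all the visible breakthroughs.
print(big_hits_diverse > big_hits_homog)  # True
```

If we only ever study the breakthroughs, the diverse teams look unambiguously superior, even though in this toy model the homogeneous teams deliver better results on average.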
Similarly, we applaud CEOs who are bold and risk-taking, using their intuition rather than careful analysis, such as Jack Welch. However, risk, by definition, leads some to succeed but also leads quite a few to fail and slip into oblivion. Those CEOs we never consider; it is the risk-takers who happen to come out on top that we admire and aspire to emulate. Yet, if we were able to see the full picture, of all CEOs, innovation teams, and fighter planes, we just might reach a very different conclusion.
* See the work of Professor Jerker Denrell of Stanford Business School; some of the examples used in this text are partly based on his work.