Despite their best intentions, executives fall prey to cognitive and organizational biases that get in the way of good decision making. In this series, we highlight some of them and offer a few effective ways to address them.
Knowing when to kill a project
In the past six months, the product-development group in your company has generated a dozen concepts that would breathe new life into existing brands: "foaming" variations of the company's established line of bar soaps, for instance. In fact, the team is coming up with more promising ideas than there is funding to support. Each would be a small investment relative to the company's overall R&D expenditure, but together they would consume a significant share of the limited resources earmarked for product development. As the head of R&D, you're keen to encourage this enthusiasm for innovation and let a thousand flowers bloom, but how do you sort the weeds from the seeds?
Multiple studies have shown just how loath business leaders are to kill projects. One such study, by IESE Business School professor Luis Huete, found that companies and individuals with a track record of success have a harder time killing projects because they carry an ingrained belief that they can turn anything into gold, so long as everyone works hard enough.1 Managers in these circumstances give more credit than is warranted to the person making or supporting an investment proposal, and too little to the merits of the proposal itself. Compounding this belief is the sunk-cost fallacy: managers assessing projects lend more weight to the costs they have already incurred from an initiative than to the costs to come. Not wanting to see past efforts go to waste, they put their pruning shears away and let projects grow indefinitely.
One global producer of baking ingredients, oils and spreads, and other types of food designated a full-time “project killer”—someone with deep knowledge of both food technology and the business aspects of the industry—to rein in project creep.
Researchers at the food company were motivated to find the next "home run" product, but over time the number of R&D investments grew out of proportion to the value they generated. The project killer sits within the company's R&D team but loosely reports to different functions within the business. He maintains a database of all active projects, noting areas of repeated inefficiency, lack of success, or lack of opportunity. Using these data, he builds a dispassionate case for why a project should continue (under changed circumstances) or be killed. His review considers the costs and benefits of all projects in play, not just individual initiatives, and it happens on a rolling basis rather than as part of a meeting or event. As a result, there are few formal opportunities for project owners to re-pitch failing initiatives.
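The forward-looking logic behind such a review can be sketched in code. The toy below is purely illustrative and is not the food company's actual method: the `Project` fields, the sample numbers, and the greedy funding rule are all invented here. What it does capture is the key principle, that sunk costs never enter the score; only remaining cost and expected value decide which initiatives survive.

```python
from dataclasses import dataclass

@dataclass
class Project:
    name: str
    sunk_cost: float        # spent to date -- deliberately ignored below
    remaining_cost: float   # estimated cost to complete
    expected_value: float   # estimated value if completed

def review(portfolio, budget):
    """Rank projects by forward-looking return only, then fund greedily.

    Sunk costs never enter the ranking -- that is the point of the exercise.
    """
    ranked = sorted(portfolio,
                    key=lambda p: p.expected_value / p.remaining_cost,
                    reverse=True)
    keep, kill = [], []
    for p in ranked:
        if p.remaining_cost <= budget and p.expected_value > p.remaining_cost:
            keep.append(p.name)
            budget -= p.remaining_cost
        else:
            kill.append(p.name)
    return keep, kill

# Hypothetical portfolio: the heavily sunk "legacy reformulation" is killed
# despite its large prior investment, because its costs to come exceed its value.
portfolio = [
    Project("foaming soap", sunk_cost=2.0, remaining_cost=1.0, expected_value=4.0),
    Project("legacy reformulation", sunk_cost=9.0, remaining_cost=3.0, expected_value=2.0),
    Project("new scent line", sunk_cost=0.5, remaining_cost=2.0, expected_value=3.0),
]
keep, kill = review(portfolio, budget=3.0)
```

Note the design choice: because `sunk_cost` appears nowhere in `review`, a project with nine units already spent gets no more sympathy than one with half a unit spent, which is exactly the discipline the sunk-cost fallacy erodes.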
In the three years since it designated a project killer, the food company has been able to cull its portfolio—from more than 560 projects to just over 200. And the effect on profitability has been overwhelmingly positive.
The project killer role is a better fit in some scenarios than in others—useful in fast-moving consumer-goods companies, for instance, but not necessarily in the film industry, or in oil and gas companies, where production lead times are very long. Still, the theory behind this approach—mandating objectivity—is worth noting, regardless of company or sector. Companies absolutely need to invest in new ideas. They must be entrepreneurial and imaginative. But they also need to adopt mechanisms that take some of the emotion out of their resource-allocation decisions.