What do the 1942 Battle of Midway, the 1986 Space Shuttle Challenger explosion, and the 2009 Air France crash have in common?
According to MIT Sloan senior lecturer Steve Spear, each of those significant events is an example of a team’s failure to be flexible and solve problems on the fly. In his recent paper “Tripwires: When we might learn and where we do not” [PDF], Spear identifies three areas where teams can fail, and the steps to take to ensure a clear path to success.
“Perhaps teams fail not because they were all that much worse than other teams nor were they having a particularly unusual ‘off day,’” said Spear, author of The High Velocity Edge and teacher of the executive education course Creating High Velocity Organizations. “Rather, they found themselves ill-prepared for the situations because not enough was learned individually and collectively at three critical junctures leading up to the crisis: during planning, during preparation and rehearsal, and during execution of particular ‘evolutions.’”
Planning. A successful organization must learn fast, Spear said, and the only way to learn is by being stress-tested by red teams.
“The point is to identify oversights and flaws before important (and often irreversible) commitments of time and resources have been made,” Spear said.
In the case of the Japanese Navy, the admiralty ignored the results its junior officers produced during red team testing. The junior officers, playing the American side, had won, but leaders assumed they simply didn’t understand the exercise — an assumption that left the navy vulnerable during actual warfare.
In the corporate world, senior leadership must reinforce that it’s better to solve a problem or fix a flawed process before changes become overly complicated or expensive.
“The first thing is energizing people to recognize that something isn’t working perfectly, [and feel] it’s their right and responsibility to call that out,” Spear said.
Preparation. The 2009 Air France crash that killed all 228 people on board is an example of a team failing to prepare for an unexpected situation. Researchers studying the crash determined that frozen equipment disengaged the plane’s autopilot, forcing a manual override. The pilots on board were not prepared to take over for the plane’s computers and analyze data normally left to a machine, Spear said.
“Our brains are wondrous at creative thinking, problem solving, innovation, and invention,” Spear said. “The problem is that they’re wicked slow at all that stuff, and more often than not, the situations for which they have to prepare happen wicked fast.”
Along with planning for various scenarios, it’s important to prepare for them, so that if something goes wrong managers aren’t figuring things out on the fly, Spear said.
Execution. When a person or team executes an action that doesn’t go as expected, ideally the team would recognize what went wrong and reconsider its approach. More often, Spear said, the person or team develops a workaround.
In the moment, the job might get done, Spear said, but this “normalized deviance” needs to be avoided, or there will be little chance of correcting irregularities in the future.
Despite evidence that O-rings in the space shuttle rockets were likely to crack in cold weather, NASA decided to launch the Challenger on its fatal flight. Less than 20 years later, the Columbia was lost due to the same “normalized deviance,” Spear said.