A popular topic in executive education is confirmation bias, the theory that people do not fully analyze evidence that contradicts their preconceived notions of a situation. Most executive education books tend to address the same issues: recognizing the need to consult experts when making decisions, determining whether the experts’ opinions are clouded by their own biases, and deciding whether their recommendations rest on strong foundations.
According to cognitive scientists, humans think in two modes: intuitive and reflective. Most of the time, humans stay in intuitive mode, in which they rely on impressions, associations, and feelings to understand the world around them. In reflective mode, humans take a deliberate look at the decisions they make. This is the mode of thinking that needs to prevail in a business practicing ERM, because glossing over a risk as ordinary could distort management’s view of the likelihood or impact of a risk event, which in turn may lead to unexpected risks. According to the authors of this Harvard Business Review article, there is no way to eliminate intuitive thought and cognitive biases within oneself. Individuals cannot control their own biases and faulty logic, but people are very good at spotting them in others. For example, if one overlooks something, one’s significant other is probably quick to point it out. It is therefore better to work through major decisions as a team than to simply set a “safety margin” on one’s own; the best hope for solid decision making is to do it in groups.
The authors, Daniel Kahneman, Dan Lovallo, and Olivier Sibony, identify twelve questions that should be asked before making a major decision, in order to accurately assess a risk or opportunity and make the right choice about how to manage it. The decision maker should answer the first three questions. The team making the recommendation should answer questions four through nine. The final questions, ten through twelve, should be asked when analyzing the final proposal for a major decision. This line of questioning is not recommended for a business’s ordinary decisions because of the time investment required to ask these questions and obtain the answers.
- Is there any reason to suspect motivated errors, or errors driven by the self-interest of the recommender? This question should not be put to the recommending team directly, but it should be thought through anyway. Would the results of the decision disproportionately benefit the people making the recommendation? Decision makers should be especially alert to this when only one “realistic” option is presented.
- Have the people making the recommendation fallen in love with it? When one likes something, one exaggerates its benefits; when one hates something, one exaggerates its costs. Be aware of strong emotional ties to a decision. This may be especially relevant when analyzing strategic options: the recommender may be biased toward understating the associated risks.
- Were there any dissenting opinions within the recommending team? Knowing this allows decision makers to judge whether groupthink was at work. Complex problems admit many possible paths, and it is unlikely that a room full of people, especially people from different “silos” of risk thinking, will independently reach the same conclusion. Any strong dissenting views, especially about the likelihood and impact of a potential risk, should be analyzed and considered.
- Could the diagnosis of the situation be overly influenced by salient analogies? Are past successes with “similar” problems clouding the judgment of the recommending team? Is this problem really similar to ones handled in the past? This is especially important as organizations assess the viability of risk responses used in the past; caution is warranted to avoid overconfidence in historical risk treatment options.
- Have credible alternatives been considered? When making a major decision, all plausible alternatives should be evaluated if time allows. A good way to ensure this happens is to have team members submit their second and third choices for handling the situation.
- If you had to make this decision again in a year, what information would you want, and can you predict what will happen as a result of this decision? Looking at the long-term results of a decision, rather than only the short-term, gives a fuller picture of its magnitude and far-reaching implications.
- Do you know where the numbers came from? Returning to intuitive thinking and confirmation bias: the first numbers a team sees are usually the ones it believes. Did the team as a whole go back and verify the validity of those numbers? Does it know the source of the figures the decision is based on?
- Can you see a halo effect? Appearances are hard to shake. If a team’s first line of attack appears strong, the team will probably stick with it, attributing more merit to that option than the evidence supports.
- Are the people making the recommendation overly attached to past decisions? Is this decision based too heavily on what has happened before? If a risky decision has burned someone, that person is often unwilling to take a leap of faith and risk getting burned again. The reverse can be true for those with a strong track record, who may be overconfident.
- Is the base case overly optimistic? Forecasts are very prone to excessive optimism. Oftentimes decision-making teams focus only on what happens inside the company’s walls, ignoring similar decisions made by other companies and their results, as well as the state of the overall economy.
- Is the worst case bad enough? The worst-case scenarios drawn up by strategy teams are often not bad enough. One way to counter this is a “pre-mortem” analysis, in which the team imagines that its worst-case scenario has actually occurred and then thinks of what could make the situation even worse.
- Is the recommending team overly cautious? This is the opposite of question ten. Is the team assuming the worst will definitely happen while giving no consideration to success? The team may be trying to avoid any loss and a bad image with management. Reminding them of the economic concept of “opportunity cost” can help if this becomes a problem.
You can purchase the full article at Harvard Business Review.