Learning Objectives
- Understand the systematic biases that affect our judgment and decision making.
- Develop strategies for making better decisions.
- Experience some of the biases through sample decisions.
Every day you have the opportunity to make countless decisions: should you eat dessert, cheat on a test, or attend a sports event with your friends? If you reflect on your own history of choices, you will realize that they vary in quality; some are rational and some are not.
In his Nobel Prize–winning work, psychologist Herbert Simon (1957; March & Simon, 1958) argued that our decisions are bounded in their rationality. According to the bounded rationality framework, human beings try to make rational decisions (such as weighing the costs and benefits of a choice) but our cognitive limitations prevent us from being fully rational. Time and cost constraints limit the quantity and quality of the information that is available to us. Moreover, we only retain a relatively small amount of information in our usable memory. And limitations on intelligence and perceptions constrain the ability of even very bright decision makers to accurately make the best choice based on the information that is available.
About 15 years after the publication of Simon’s seminal work, Tversky and Kahneman (1973, 1974; Kahneman & Tversky, 1979) produced their own Nobel Prize–winning research, which provided critical information about specific systematic and predictable biases, or mistakes, that influence judgment (Kahneman received the prize after Tversky’s death). The work of Simon, Tversky, and Kahneman paved the way to our modern understanding of judgment and decision making. And their two Nobel prizes signaled the broad acceptance of the field of behavioral decision research as a mature area of intellectual study.
What Would a Rational Decision Look Like?
Imagine that during your senior year in college, you apply to a number of doctoral programs, law schools, or business schools (or another set of programs in whatever field most interests you). The good news is that you receive many acceptance letters. So, how should you decide where to go? Bazerman and Moore (2013) outline the following six steps that you should take to make a rational decision:
(1) define the problem (i.e., selecting the right graduate program)
(2) identify the criteria necessary to judge the multiple options (location, prestige, faculty, etc.)
(3) weight the criteria (rank them in terms of importance to you)
(4) generate alternatives (the schools that admitted you)
(5) rate each alternative on each criterion (rate each school on each criterion that you identified)
(6) compute the optimal decision
Acting rationally would require that you follow these six steps in a fully rational manner; a sketch of what that final computation might look like appears below.
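One common way to operationalize steps 2 through 6 is a simple weighted-score model. The Python sketch below illustrates the idea; the schools, criteria, weights, and ratings are hypothetical placeholders rather than values from the chapter, and the weighted sum is just one way to carry out step 6.

```python
# A minimal sketch of the six steps as a weighted-score calculation.
# All names and numbers are made up for illustration.

# Steps 2-3: criteria and how much each matters to you (weights sum to 1)
weights = {"location": 0.2, "prestige": 0.5, "faculty": 0.3}

# Steps 4-5: the alternatives (schools that admitted you), each rated
# on every criterion, here on a 1-10 scale
ratings = {
    "School A": {"location": 8, "prestige": 6, "faculty": 7},
    "School B": {"location": 5, "prestige": 9, "faculty": 8},
    "School C": {"location": 9, "prestige": 5, "faculty": 6},
}

# Step 6: compute each school's weighted score and pick the highest
scores = {
    school: sum(weights[criterion] * rating for criterion, rating in rs.items())
    for school, rs in ratings.items()
}
best = max(scores, key=scores.get)

for school, score in sorted(scores.items(), key=lambda item: -item[1]):
    print(f"{school}: {score:.2f}")
print(f"Optimal choice under this model: {best}")
```

Note that the outcome depends entirely on the weights chosen in step 3; shifting weight from prestige to location can change which school comes out on top, which is why the weighting step deserves honest reflection.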
I strongly advise people to think through important decisions such as this one using a process like the one just described. Unfortunately, we often don't. Many of us rely on our intuitions far more than we should. And when we do try to think systematically, the way we enter data into such formal decision-making processes is often biased.
Fortunately, psychologists have learned a great deal about the biases that affect our thinking. This knowledge about the systematic and predictable mistakes that even the best and the brightest make can help you identify flaws in your thought processes and reach better decisions.
Biases in Our Decision Process
Simon’s concept of bounded rationality taught us that judgment deviates from rationality, but it did not tell us how judgment is biased. Tversky and Kahneman’s (1974) research helped to diagnose the specific systematic, directional biases that affect human judgment. These biases are created by the tendency to short-circuit a rational decision process by relying on a number of simplifying strategies, or rules of thumb, known as heuristics. Heuristics allow us to cope with the complex environment surrounding our decisions. Unfortunately, they also lead to systematic and predictable biases.
To highlight some of these biases, please answer the following three quiz items:
Problem 1 (adapted from Alpert & Raiffa, 1969):
Listed below are 10 uncertain quantities. Do not look up any information on these items. For each, write down your best estimate of the quantity. Next, put a lower and upper bound around your estimate, such that you are 98 percent confident that your range surrounds the actual quantity. Respond to each of these items even if you admit to knowing very little about these quantities.
- The first year the Nobel Peace Prize was awarded
- The date the French celebrate “Bastille Day”
- The distance from the Earth to the Moon
- The height of the Leaning Tower of Pisa
- Number of students attending Oxford University (as of 2014)
- Number of people who have traveled to space (as of 2013)
- 2012-2013 annual budget for the University of Pennsylvania
- Average life expectancy in Bangladesh (as of 2012)
- World record for pull-ups in a 24-hour period
- Number of colleges and universities in the Boston metropolitan area
Problem 2 (adapted from Joyce & Biddle, 1981):
We know that executive fraud occurs and that it has been associated with many recent financial scandals. And, we know that many cases of management fraud go undetected even when annual audits are performed. Do you think that the incidence of significant executive-level management fraud is more than 10 in 1,000 firms (that is, 1 percent) audited by Big Four accounting firms?
- Yes, more than 10 in 1,000 Big Four clients have significant executive-level management fraud.
- No, fewer than 10 in 1,000 Big Four clients have significant executive-level management fraud.
What is your estimate of the number of Big Four clients per 1,000 that have significant executive-level management fraud? (Fill in the blank below with the appropriate number.)
________ in 1,000 Big Four clients have significant executive-level management fraud.
Problem 3 (adapted from Tversky & Kahneman, 1981):
Imagine that the United States is preparing for the outbreak of an unusual avian disease that is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. Assume that the exact scientific estimates of the consequences of the programs are as follows.
- Program A: If Program A is adopted, 200 people will be saved.
- Program B: If Program B is adopted, there is a one-third probability that 600 people will be saved and a two-thirds probability that no people will be saved.
Which of the two programs would you favor?
On the first problem, if you set your ranges so that you were justifiably 98 percent confident, you should expect that approximately 9.8, or nine to 10, of your ranges would include the actual value. So, let’s look at the correct answers:
- The first year the Nobel Peace Prize was awarded: 1901
- The date the French celebrate Bastille Day: 14th of July
- The distance from the Earth to the Moon: 384,403 km (238,857 mi)
- The height of the Leaning Tower of Pisa: 56.67 m (183 ft)
- Number of students attending Oxford University: 22,384 (as of 2014)
- Number of people who have traveled to space: 536 (as of 2013)
- 2012-2013 annual budget for the University of Pennsylvania: $6.007 billion
- Average life expectancy in Bangladesh: 70.3 years (as of 2012)
Count the number of your 98% ranges that actually surrounded the true quantities. If you surrounded nine to 10, you were appropriately confident in your judgments. But most readers surround only between three (30%) and seven (70%) of the correct answers, despite claiming 98% confidence that each range would surround the true value. As this problem shows, humans tend to be overconfident in their judgments.
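If you would like to score your own calibration, the counting logic is simple, as the short Python sketch below illustrates; the two items and the ranges shown are invented examples, not answers from the chapter.

```python
# Count how many of a respondent's 98% confidence ranges contain the
# true value, and compare that count with what good calibration implies.
# Only two items are shown, with made-up ranges, for brevity.

true_values = {"moon_distance_km": 384_403, "pisa_height_m": 56.67}

# hypothetical (low, high) 98% confidence ranges from one respondent
my_ranges = {
    "moon_distance_km": (300_000, 400_000),  # contains the true value
    "pisa_height_m": (70, 120),              # misses the true value
}

hits = sum(low <= true_values[item] <= high for item, (low, high) in my_ranges.items())
print(f"Ranges containing the true value: {hits} of {len(my_ranges)}")
print(f"Expected if well calibrated: {0.98 * len(my_ranges):.2f}")
```

Applied to all 10 items, the same arithmetic gives the 9.8 expected hits mentioned above.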
Regarding the second problem, people vary a great deal in their final assessment of the level of executive-level management fraud, but most think that 10 out of 1,000 is too low. When I run this exercise in class, half of the students respond to the question that I asked you to answer. The other half receive a similar problem, but instead are asked whether the correct answer is higher or lower than 200 rather than 10. Most people think that 200 is too high. But, again, most people claim that this “anchor” does not affect their final estimate. Yet, on average, people who are presented with the question that focuses on the number 10 (out of 1,000) give answers that are about one-half the size of the estimates of those facing questions that use an anchor of 200. When we are making decisions, any initial anchor that we face is likely to influence our judgments, even if the anchor is arbitrary. That is, we insufficiently adjust our judgments away from the anchor.
Turning to Problem 3, most people choose Program A, which saves 200 lives for sure, over Program B. But, again, if I were in front of a classroom, only half of my students would receive this problem. The other half would receive the same set-up, but with the following two options:
- Program C: If Program C is adopted, 400 people will die.
- Program D: If Program D is adopted, there is a one-third probability that no one will die and a two-thirds probability that 600 people will die.
Which of the two programs would you favor?
Careful review of the two versions of this problem clarifies that they are objectively the same. Saving 200 people (Program A) means losing 400 people (Program C), and Programs B and D are also objectively identical. Yet, in one of the most famous problems in judgment and decision making, most individuals choose Program A in the first set and Program D in the second set (Tversky & Kahneman, 1981). People respond very differently to saving versus losing lives—even when the difference is based just on the “framing” of the choices.
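A quick expected-value check, sketched below in Python, makes the equivalence concrete: restated as expected lives saved out of the 600 at risk, all four programs come out to 200.

```python
# Expected lives saved (out of the 600 at risk) under each framing
at_risk = 600

program_a = 200                      # Program A: 200 saved for sure
program_b = 600 / 3 + 0 * 2 / 3      # Program B: one-third chance of saving all 600

program_c = at_risk - 400            # Program C: 400 die for sure, so 200 saved
program_d = (at_risk - 0) / 3 + (at_risk - 600) * 2 / 3  # Program D, restated as lives saved

print(program_a, program_b, program_c, program_d)  # each equals 200 expected lives saved
```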
The problem that I asked you to respond to was framed in terms of saving lives, and the implied reference point was the worst outcome of 600 deaths. Most of us, when we make decisions that concern gains, are risk averse; as a consequence, we lock in the possibility of saving 200 lives for sure. In the alternative version, the problem is framed in terms of losses. Now the implicit reference point is the best outcome of no deaths due to the avian disease. And in this case, most people are risk seeking when making decisions regarding losses.
These are just three of the many biases that affect even the smartest among us. Other research shows that we are biased in favor of information that is easy for our minds to retrieve, are insensitive to the importance of base rates and sample sizes when we are making inferences, assume that random events will always look random, search for information that confirms our expectations even when disconfirming information would be more informative, claim a priori knowledge that didn’t exist due to the hindsight bias, and are subject to a host of other effects that continue to be developed in the literature (Bazerman & Moore, 2013).
My colleagues and I have recently added two other important bounds to the list. First, Chugh, Banaji, and Bazerman (2005) and Banaji and Bhaskar (2000) introduced the concept of bounded ethicality, which refers to the notion that our ethics are limited in ways of which we are not even aware. Second, Chugh and Bazerman (2007) developed the concept of bounded awareness to refer to the broad array of focusing failures that affect our judgment, specifically the many ways in which we fail to notice obvious and important information that is available to us.
A final development is the application of judgment and decision-making research to the areas of behavioral economics, behavioral finance, and behavioral marketing, among others. In each case, these fields have been transformed by applying and extending research from the judgment and decision-making literature.
Fixing Our Decisions
Ample evidence documents that even smart people are routinely impaired by biases. Early research demonstrated, unfortunately, that awareness of these problems does little to reduce bias (Fischhoff, 1982). The good news is that more recent research documents interventions that do help us overcome our faulty thinking (Bazerman & Moore, 2013).
One critical path to fixing our biases is provided in Stanovich and West’s (2000) distinction between System 1 and System 2 decision making. System 1 processing is our intuitive system, which is typically fast, automatic, effortless, implicit, and emotional. System 2 refers to decision making that is slower, conscious, effortful, explicit, and logical. The six logical steps of decision making outlined earlier describe a System 2 process.
Clearly, a complete System 2 process is not required for every decision we make. In most situations, our System 1 thinking is quite sufficient; it would be impractical, for example, to logically reason through every choice we make while shopping for groceries. But, preferably, System 2 logic should influence our most important decisions. Nonetheless, we use System 1 thinking for most decisions in life, relying on it even when making important decisions.
The key to reducing the effects of bias and improving our decisions is to transition from trusting our intuitive System 1 thinking toward engaging more in deliberative System 2 thought. Unfortunately, the busier and more rushed people are, and the more they have on their minds, the more likely they are to rely on System 1 thinking; the frantic pace of professional life suggests that executives often do just that (Chugh, 2004).
Fortunately, it is possible to identify conditions where we rely on intuition at our peril and substitute more deliberative thought. One fascinating example of this substitution comes from journalist Michael Lewis’ (2003) account of how Billy Beane, the general manager of the Oakland Athletics, improved the outcomes of the failing baseball team after recognizing that the intuition of baseball executives was limited and systematically biased and that their intuitions had been incorporated into important decisions in ways that created enormous mistakes. Lewis (2003) documents that baseball professionals tend to overgeneralize from their personal experiences, be overly influenced by players’ very recent performances, and overweigh what they see with their own eyes, despite the fact that players’ multiyear records provide far better data. By substituting valid predictors of future performance (System 2 thinking), the Athletics were able to outperform expectations given their very limited payroll.
Another important direction for improving decisions comes from Thaler and Sunstein’s (2008) book Nudge: Improving Decisions about Health, Wealth, and Happiness. Rather than setting out to debias human judgment, Thaler and Sunstein outline a strategy for how “decision architects” can change environments in ways that account for human bias and trigger better decisions as a result. For example, Beshears, Choi, Laibson, and Madrian (2008) have shown that simple changes to defaults can dramatically improve people’s decisions. They tackle the failure of many people to save for retirement and show that a simple change can significantly influence enrollment in 401(k) programs. In most companies, when you start your job, you need to proactively sign up to join the company’s retirement savings plan. Many people take years before getting around to doing so. When, instead, companies automatically enroll their employees in 401(k) programs and give them the opportunity to “opt out,” the net enrollment rate rises significantly. By changing defaults, we can counteract the human tendency to live with the status quo.
Similarly, Johnson and Goldstein’s (2003) cross-European organ donation study reveals that countries with opt-in organ donation policies, where the default is not to harvest people’s organs without their prior consent, sacrifice thousands of lives relative to countries with opt-out policies, where the default is to harvest organs. The United States and too many other countries require that citizens opt in to organ donation through a proactive effort; as a consequence, consent rates range from 4.25% to 44% across these countries. Changing the decision architecture to an opt-out policy raises consent rates to between 85.9% and 99.98%. Designing the donation system with knowledge of the power of defaults can dramatically change donation rates without changing the options available to citizens. In contrast, a more intuitive strategy, such as the one in place in the United States, relies on defaults that result in many unnecessary deaths.
Discussion Questions
- Are the biases in this module a problem in the real world?
- How would you use this module to be a better decision maker?
- Can you see any biases in today’s newspaper?
Vocabulary
- Anchoring
- The bias to be affected by an initial anchor, even if the anchor is arbitrary, and to insufficiently adjust our judgments away from that anchor.
- Biases
- The systematic and predictable mistakes that influence the judgment of even very talented human beings.
- Bounded awareness
- The systematic ways in which we fail to notice obvious and important information that is available to us.
- Bounded ethicality
- The systematic ways in which our ethics are limited without our being aware of it.
- Bounded rationality
- Model of human behavior that suggests that humans try to make rational decisions but are bounded due to cognitive limitations.
- Bounded self-interest
- The systematic and predictable ways in which we care about the outcomes of others.
- Bounded willpower
- The tendency to place greater weight on present concerns rather than future concerns.
- Framing
- The bias to be systematically affected by the way in which information is presented, while holding the objective information constant.
- Heuristics
- Cognitive (or thinking) strategies that simplify decision making by using mental shortcuts.
- Overconfidence
- The bias to have greater confidence in your judgment than is warranted based on a rational assessment.
- System 1
- Our intuitive decision-making system, which is typically fast, automatic, effortless, implicit, and emotional.
- System 2
- Our more deliberative decision-making system, which is slower, conscious, effortful, explicit, and logical.
Outside Resources
- Book: Bazerman, M. H., & Moore, D. (2013). Judgment in managerial decision making (8th ed.). John Wiley & Sons Inc.
- Book: Kahneman, D. (2011). Thinking, fast and slow. New York, NY: Farrar, Straus and Giroux.
- Book: Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving decisions about health, wealth, and happiness. New Haven, CT: Yale University Press.
References
- Alpert, M., & Raiffa, H. (1969). A progress report on the training of probability assessors. Unpublished report.
- Banaji, M. R., & Bhaskar, R. (2000). Implicit stereotypes and memory: The bounded rationality of social beliefs. In D. L. Schacter & E. Scarry (Eds.), Memory, brain, and belief (pp. 139–175). Cambridge, MA: Harvard University Press.
- Bazerman, M. H., & Moore, D. (2013). Judgment in managerial decision making (8th ed.). John Wiley & Sons Inc.
- Beshears, J., Choi, J. J., Laibson, D., & Madrian, B. C. (2008). The importance of default options for retirement saving outcomes: Evidence from the United States. In S. J. Kay & T. Sinha (Eds.), Lessons from pension reform in the Americas (pp. 59–87). Oxford: Oxford University Press.
- Chugh, D. (2004). Societal and managerial implications of implicit social cognition: Why milliseconds matter. Social Justice Research, 17(2), 203–222.
- Chugh, D., & Bazerman, M. H. (2007). Bounded awareness: What you fail to see can hurt you. Mind & Society, 6(1), 1–18.
- Chugh, D., Banaji, M. R., & Bazerman, M. H. (2005). Bounded ethicality as a psychological barrier to recognizing conflicts of interest. In D. Moore, D. M. Cain, G. Loewenstein, & M. H. Bazerman (Eds.), Conflicts of Interest (pp. 74–95). New York, NY: Cambridge University Press.
- Fischhoff, B. (1982). Debiasing. In D. Kahneman, P. Slovic, & A. Tversky (Eds.), Judgment under uncertainty: Heuristics and biases (pp. 422–444). New York, NY: Cambridge University Press.
- Johnson, E. J., & Goldstein, D. (2003). Do defaults save lives? Science, 302(5649), 1338–1339.
- Joyce, E. J., & Biddle, G. C. (1981). Are auditors’ judgments sufficiently regressive? Journal of Accounting Research, 19(2), 323–349.
- Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47(2), 263–292.
- Lewis, M. (2003). Moneyball: The art of winning an unfair game. New York, NY: W.W. Norton & Company Ltd.
- March, J. G., & Simon, H. A. (1958). Organizations. Oxford: Wiley.
- Simon, H. A. (1957). Models of man, social and rational: Mathematical essays on rational human behavior in a social setting. New York, NY: John Wiley & Sons.
- Stanovich, K. E., & West, R. F. (2000). Individual differences in reasoning: Implications for the rationality debate? Behavioral and Brain Sciences, 23, 645–726.
- Thaler, R. H. (2000). From homo economicus to homo sapiens. Journal of Economic Perspectives, 14, 133–141.
- Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving decisions about health, wealth, and happiness. New Haven, CT: Yale University Press.
- Tversky, A., & Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science, New Series, 211(4481), 453–458.
- Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, New Series, 185(4157), 1124–1131.
- Tversky, A., & Kahneman, D. (1973). Availability: A heuristic for judging frequency and probability. Cognitive Psychology, 5(2), 207–232.
Max H. Bazerman
Max H. Bazerman is the Jesse Isidor Straus Professor at the Harvard Business School and the co-director of the Center for Public Leadership at the Harvard Kennedy School of Government. Max’s awards include a 2006 honorary doctorate from the University of London (London Business School), the Lifetime Achievement Award from the Aspen Institute, and being named as one of Ethisphere’s 100 Most Influential in Business Ethics. Details at www.people.hbs.edu/mbazerman.