Learning Outcomes
By the end of this topic you will be able to:
- Describe problem solving strategies
- Define algorithm and heuristic
- Explain some common roadblocks to effective problem solving
- Understand the systematic biases that affect our judgment and decision making
- Develop strategies for making better decisions
- Experience some of the biases through sample decisions
PROBLEM SOLVING
People face problems every day—usually, multiple problems throughout the day. Sometimes these problems are straightforward: To double a recipe for pizza dough, for example, all that is required is that each ingredient in the recipe be doubled. Sometimes, however, the problems we encounter are more complex. For example, say you have a work deadline, and you must mail a printed copy of a report to your supervisor by the end of the business day. The report is time-sensitive and must be sent overnight. You finished the report last night, but your printer will not work today. What should you do? First, you need to identify the type of problem and then apply a strategy for solving the problem.
Types of Problems
A problem that has a clear starting point and a correct solution is referred to as a well-defined problem. On the other hand, a problem that does not have a single, correct answer is referred to as an ill-defined problem. Consider the difference between these two examples:
A. The day after the day after tomorrow is four days before Tuesday. What day is it today?
B. Avi had his heart set on becoming an architect but he failed the entrance exam. What should he do?
Even well-defined problems can be distinguished between those that might be solved by methodically working through a series of steps (as with example A above) and those that require a realization or insight before they can be solved. Consider the following two examples:
C. Amy is standing behind Tina, and Tina is standing behind Amy. How is this possible?
D. A murderer is condemned to death. He has to choose between three rooms. The first is full of raging fires, the second is full of assassins with loaded guns, and the third is full of lions that haven’t eaten in 3 years. Which room is safest for him?
Other types of well-defined problems include problems of inducing structure, problems of arrangement, and problems of transformation. With a problem of inducing structure, the solution lies in recognizing the relationship between the parts of the problem:
E. Begin is to End as Born is to: birth – die – enough – complete
F. In the following sequence, what is X? 9, 12, 18, X, 43
With a problem of arrangement the task is to rearrange the components of the problem:
G. Which one of the five choices makes the best comparison? ANIMAL is to LAMINA as 46251 is to: 25641 – 26451 – 12654 – 51462 – 15264
H. If the words in the following questions were arranged in alphabetical order, then which word would come third: Jargon – Jaguar – Jacuzzi – Jail – Jazz
Finally, a problem of transformation requires performing a series of steps that allow one to move from an initial state to a final goal state. However, the specific transformations and their order need to be determined:
I. Imagine that you have been given a 5 gallon jug and a 3 gallon jug and have been instructed to obtain precisely 4 gallons. You have an unlimited supply of water and you may pour water into or out of either jug. How should you proceed?
J. A famous example of a problem of transformation involves the well-known Tower of Hanoi problem. Click on the following link to attempt to solve this problem using just 3 discs. If you increase the number of discs to 4 or 5 you will notice that the problem becomes significantly more challenging (and requires significantly more moves to solve).
Tower of Hanoi: https://www.mathsisfun.com/games/towerofhanoi.html
Note: Once you have attempted all 10 sample problems, scroll to the bottom of this page to check the solutions.
Problem-solving Strategies
When you are presented with a problem, whether it is a complex mathematical problem or a broken printer, how do you solve it? Before finding a solution to the problem, the problem must first be clearly identified. After that, one of many problem-solving strategies can be applied, hopefully resulting in a solution.
A problem-solving strategy is a plan of action used to find a solution. Different strategies have different action plans associated with them (see Table 1). For example, a well-known strategy is trial and error. The old adage, “If at first you don’t succeed, try, try again” describes trial and error. In terms of your broken printer, you could try checking the ink levels, and if that doesn’t work, you could check to make sure the paper tray isn’t jammed. Or maybe the printer isn’t actually connected to your laptop. When using trial and error, you would continue to try different solutions until you solved your problem. Although trial and error is not typically one of the most time-efficient strategies, it is a commonly used one.
Table 1: Problem solving strategies
| Method | Description | Example |
| --- | --- | --- |
| Trial and error | Continue trying different solutions until problem is solved | Restarting phone, turning off WiFi, turning off Bluetooth in order to determine why your phone is malfunctioning |
| Algorithm | Step-by-step problem-solving formula | Instruction booklet for assembling your new desk from Ikea |
| Heuristic | General problem-solving framework | Working backwards; breaking a task into steps |
Another type of strategy is an algorithm. An algorithm is a problem-solving formula that provides you with step-by-step instructions used to achieve a desired outcome (Kahneman, 2011). You can think of an algorithm as a recipe with highly detailed instructions that produce the same result every time they are performed. Algorithms are used frequently in our everyday lives, especially in computer science. When you run a search on the Internet, search engines like Google use algorithms to decide which entries will appear first in your list of results. Facebook also uses algorithms to decide which posts to display on your newsfeed. Can you identify other situations in which algorithms are used?
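To make the idea concrete, here is a minimal sketch in Python (an illustration, not part of the original text) of a recipe-scaling algorithm like the doubled pizza dough mentioned earlier; the ingredient names and quantities are invented for the example. Because every step is fully specified, the procedure produces the same result every time it is run.

```python
def scale_recipe(ingredients, factor=2):
    """Apply the same fixed steps to every ingredient:
    multiply each quantity by the scaling factor."""
    return {name: quantity * factor for name, quantity in ingredients.items()}

# Hypothetical pizza-dough recipe (quantities in grams or millilitres).
pizza_dough = {"flour_g": 500, "water_ml": 325, "yeast_g": 7, "salt_g": 10}

print(scale_recipe(pizza_dough, factor=2))
# {'flour_g': 1000, 'water_ml': 650, 'yeast_g': 14, 'salt_g': 20}
```

That guarantee of a correct result, provided the steps are followed exactly, is what distinguishes an algorithm from the heuristics described next.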
A heuristic is another type of problem solving strategy. While an algorithm must be followed exactly to produce a correct result, a heuristic is a general problem-solving framework (Tversky & Kahneman, 1974). You can think of these as mental shortcuts that are used to solve problems. A “rule of thumb” is an example of a heuristic. Such a rule saves the person time and energy when making a decision, but despite its time-saving characteristics, it is not always the best method for making a rational decision. Different types of heuristics are used in different types of situations, but the impulse to use a heuristic occurs when one of five conditions is met (Pratkanis, 1989):
- When one is faced with too much information
- When the time to make a decision is limited
- When the decision to be made is unimportant
- When there is access to very little information to use in making the decision
- When an appropriate heuristic happens to come to mind in the same moment
Working backwards is a useful heuristic in which you begin solving the problem by focusing on the end result. Consider this example: You live in Vancouver and have been invited to a wedding at 5 PM on Friday in Abbotsford. Knowing that Highway 1 tends to back up during rush hour, you need to plan your route and time your departure accordingly. If you want to be at the wedding service by 4:45 PM, and it takes 1 hour to get to Abbotsford without traffic, what time should you leave your house? You use the working backwards heuristic to plan the events of your day on a regular basis, probably without even thinking about it.
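As an illustration only, the working-backwards calculation in the wedding example can be written out explicitly. The one-hour drive and the 4:45 PM arrival target come from the text; the 30-minute rush-hour buffer and the specific date are assumptions added for the sketch.

```python
from datetime import datetime, timedelta

# Start from the goal state (seated at the wedding by 4:45 PM) and work backwards.
arrival_target = datetime(2024, 6, 14, 16, 45)  # a hypothetical Friday
driving_time = timedelta(hours=1)               # Vancouver to Abbotsford without traffic
traffic_buffer = timedelta(minutes=30)          # assumed allowance for Highway 1 rush hour

departure = arrival_target - driving_time - traffic_buffer
print(departure.strftime("%I:%M %p"))           # 03:15 PM
```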
Another useful heuristic is the practice of accomplishing a large goal or task by breaking it into a series of smaller steps. Students often use this common method to complete a large research project or long essay for school. For example, students typically brainstorm, develop a thesis or main topic, research the chosen topic, organize their information into an outline, write a rough draft, revise and edit the rough draft, develop a final draft, organize the references list, and proofread their work before turning in the project. The large task becomes less overwhelming when it is broken down into a series of small steps.
Additional Problem-solving Strategies
- Abstraction: solving the problem in a model of the system before applying it to the real system.
- Analogy: using a solution for a similar problem.
- Brainstorming: suggesting a large number of solutions and developing them until the best is found.
- Divide and conquer: breaking down a large, complex problem into smaller, solvable problems.
- Hypothesis testing: assuming a possible explanation to the problem and trying to prove (or, in some contexts, disprove) the assumption.
- Means-ends analysis: choosing an action at each step to move closer to the goal.
- Proof: trying to prove that the problem cannot be solved; the point where the proof fails becomes the starting point for solving it.
- Reduction: transforming the problem into another problem for which solutions exist.
- Root-cause analysis: identifying the cause of a problem.
Pitfalls to Problem Solving
Not all problems are successfully solved, however. What challenges stop us from successfully solving a problem? Albert Einstein once said, “Insanity is doing the same thing over and over again and expecting a different result.” Imagine a person in a room that has four doorways. One doorway that has always been open in the past is now locked. The person, accustomed to exiting the room by that particular doorway, keeps trying to get out through the same doorway even though the other three doorways are open. The person is stuck—but she just needs to go to another doorway, instead of trying to get out through the locked doorway. A mental set occurs when you persist in approaching a problem in a way that has worked in the past but is clearly not working now.
Functional fixedness is a type of mental set in which you cannot perceive an object as being used for something other than what it was designed for.
ACTIVITY
Take a look at this classic example of a functional fixedness problem: https://youtu.be/gaI7N6J3rAc
During the Apollo 13 mission to the moon, NASA engineers at Mission Control had to overcome functional fixedness to save the lives of the astronauts aboard the spacecraft. An explosion in a module of the spacecraft damaged multiple systems. The astronauts were in danger of being poisoned by rising levels of carbon dioxide because of problems with the carbon dioxide filters. The engineers found a way for the astronauts to use spare plastic bags, tape, and air hoses to create a makeshift air filter, which saved the lives of the astronauts.
ACTIVITY
Check out this Apollo 13 scene where the group of NASA engineers are given the task of overcoming functional fixedness: https://youtu.be/Z3csfLkMJT4
Researchers have investigated whether functional fixedness is affected by culture. In one experiment, individuals from the Shuar group in Ecuador were asked to use an object for a purpose other than that for which the object was originally intended. For example, the participants were told a story about a bear and a rabbit that were separated by a river and asked to select among various objects, including a spoon, a cup, erasers, and so on, to help the animals. The spoon was the only object long enough to span the imaginary river, but if the spoon was presented in a way that reflected its normal usage, it took participants longer to choose the spoon to solve the problem (German & Barrett, 2005). The researchers wanted to know if exposure to highly specialized tools, as occurs with individuals in industrialized nations, affects their ability to transcend functional fixedness. It was determined that functional fixedness is experienced in both industrialized and non-industrialized cultures (German & Barrett, 2005).
DECISION MAKING
Every day you have the opportunity to make countless decisions: Should you eat dessert, cheat on a test, or attend a sports event with your friends? If you reflect on your own history of choices, you will realize that they vary in quality; some are rational and some are not.
In his Nobel Prize–winning work, psychologist Herbert Simon (1957; March & Simon, 1958) argued that our decisions are bounded in their rationality. According to the bounded rationality framework, human beings try to make rational decisions (such as weighing the costs and benefits of a choice) but our cognitive limitations prevent us from being fully rational. Time and cost constraints limit the quantity and quality of the information that is available to us. Moreover, we only retain a relatively small amount of information in our usable memory. And limitations on intelligence and perceptions constrain the ability of even very bright decision makers to accurately make the best choice based on the information that is available.
About 15 years after the publication of Simon’s seminal work, Tversky and Kahneman (1973, 1974; Kahneman & Tversky, 1979) produced their own Nobel Prize–winning research, which provided critical information about specific systematic and predictable biases, or mistakes, that influence judgment (Kahneman received the prize after Tversky’s death). The work of Simon, Tversky, and Kahneman paved the way to our modern understanding of judgment and decision making. And their two Nobel prizes signaled the broad acceptance of the field of behavioural decision research as a mature area of intellectual study.
ACTIVITY
Before you read any further, watch the following video clip: https://youtu.be/MYwqFIxh5bc
Rational Decision Making
Imagine that during your final year at university, you apply to a number of doctoral programs, law schools, or business schools (or another set of programs in whatever field most interests you). The good news is that you receive many acceptance letters. So, how should you decide where to go? Bazerman and Moore (2013) outline the following six steps that you should take to make a rational decision:
- Define the problem (i.e., selecting the right graduate program)
- Identify the criteria necessary to judge the multiple options (location, prestige, faculty, etc.)
- Weight the criteria (rank them in terms of importance to you)
- Generate alternatives (the schools that admitted you)
- Rate each alternative on each criterion (rate each school on each criterion that you identified)
- Compute the optimal decision
Acting rationally would require that you follow these six steps in a fully rational manner. Unfortunately, we often don’t do so, even for important decisions. Many of us rely on our intuitions far more than we should. And when we do try to think systematically, the way we enter data into such formal decision-making processes is often biased.
Fortunately, psychologists have learned a great deal about the biases that affect our thinking. This knowledge about the systematic and predictable mistakes that even the best and the brightest make can help you identify flaws in your thought processes and reach better decisions.
Biases in our Decision Process
Simon’s concept of bounded rationality taught us that judgment deviates from rationality, but it did not tell us how judgment is biased. Tversky and Kahneman’s (1974) research helped to diagnose the specific systematic, directional biases that affect human judgment. These biases are created by the tendency to short-circuit a rational decision process by relying on a number of simplifying strategies, or rules of thumb, known as heuristics. Heuristics allow us to cope with the complex environment surrounding our decisions. Unfortunately, they also lead to systematic and predictable biases.
ACTIVITY
To highlight some of these biases please answer the following three quiz items:
Problem 1 (adapted from Alpert & Raiffa, 1969):
Listed below are 10 uncertain quantities. Do not look up any information on these items. For each, write down your best estimate of the quantity. Next, put a lower and upper bound around your estimate, such that you are 98 percent confident that your range surrounds the actual quantity. Respond to each of these items even if you admit to knowing very little about these quantities.
- The first year the Nobel Peace Prize was awarded
- The date the French celebrate “Bastille Day”
- The distance from the Earth to the Moon
- The height of the Leaning Tower of Pisa
- Number of students attending Oxford University (as of 2014)
- Number of people who have traveled to space (as of 2013)
- 2012-2013 annual budget for the University of Pennsylvania
- Average life expectancy in Bangladesh (as of 2012)
- World record for pull-ups in a 24-hour period
- Number of colleges and universities in the Boston metropolitan area
Problem 2 (adapted from Joyce & Biddle, 1981):
We know that executive fraud occurs and that it has been associated with many recent financial scandals. And, we know that many cases of management fraud go undetected even when annual audits are performed. Do you think that the incidence of significant executive-level management fraud is more than 10 in 1,000 firms (that is, 1 percent) audited by major accounting firms?
- Yes, more than 10 in 1,000 major accounting clients have significant executive-level management fraud.
- No, fewer than 10 in 1,000 major accounting clients have significant executive-level management fraud.
What is your estimate of the number of major accounting clients per 1,000 that have significant executive-level management fraud? (Write down your best estimate)
Problem 3 (adapted from Tversky & Kahneman, 1981):
Imagine that Canada is preparing for the outbreak of an unusual avian disease that is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. Assume that the exact scientific estimates of the consequences of the programs are as follows.
- Program A: If Program A is adopted, 200 people will be saved.
- Program B: If Program B is adopted, there is a one-third probability that 600 people will be saved and a two-thirds probability that no people will be saved.
Which of the two programs would you favour?
Overconfidence
On the first problem, if you set your ranges so that you were justifiably 98 percent confident, you should expect that approximately 9.8, or nine to 10, of your ranges would include the actual value. So, let’s look at the correct answers:
- 1901
- 14th of July
- 384,403 km (238,857 mi)
- 56.67 m (183 ft)
- 22,384 (as of 2014)
- 536 people (as of 2013)
- $6.007 billion
- 70.3 years (as of 2012)
- 4,321
- 52
Count the number of your 98% ranges that actually surrounded the true quantities. If you surrounded nine to 10, you were appropriately confident in your judgments. But most readers surround only between three (30%) and seven (70%) of the correct answers, despite claiming 98% confidence that each range would surround the true value. As this problem shows, humans tend to be overconfident in their judgments.
Anchoring
Regarding the second problem, people vary a great deal in their final assessment of the level of executive-level management fraud, but most think that 10 out of 1,000 is too low. When I run this exercise in class, half of the students respond to the question that I asked you to answer. The other half receive a similar problem, but instead are asked whether the correct answer is higher or lower than 200 rather than 10. Most people think that 200 is high. But, again, most people claim that this “anchor” does not affect their final estimate. Yet, on average, people who are presented with the question that focuses on the number 10 (out of 1,000) give answers that are about one-half the size of the estimates of those facing questions that use an anchor of 200. When we are making decisions, any initial anchor that we face is likely to influence our judgments, even if the anchor is arbitrary. That is, we insufficiently adjust our judgments away from the anchor.
Framing
Turning to Problem 3, most people choose Program A, which saves 200 lives for sure, over Program B. But, again, if I were in front of a classroom, only half of my students would receive this problem. The other half would have received the same set-up, but with the following two options:
- Program C: If Program C is adopted, 400 people will die.
- Program D: If Program D is adopted, there is a one-third probability that no one will die and a two-thirds probability that 600 people will die.
Which of the two programs would you favour?
Careful review of the two versions of this problem clarifies that they are objectively the same. Saving 200 people (Program A) means losing 400 people (Program C), and Programs B and D are also objectively identical. Yet, in one of the most famous problems in judgment and decision making, most individuals choose Program A in the first set and Program D in the second set (Tversky & Kahneman, 1981). People respond very differently to saving versus losing lives—even when the difference is based just on the “framing” of the choices.
The problem that I asked you to respond to was framed in terms of saving lives, and the implied reference point was the worst outcome of 600 deaths. Most of us, when we make decisions that concern gains, are risk averse; as a consequence, we lock in the possibility of saving 200 lives for sure. In the alternative version, the problem is framed in terms of losses. Now the implicit reference point is the best outcome of no deaths due to the avian disease. And in this case, most people are risk seeking when making decisions regarding losses.
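A quick expected-value check, using only the numbers given in the problem, shows why the two framings are objectively equivalent even though people respond to them differently.

```python
total_at_risk = 600

# "Lives saved" framing
program_a_saved = 200                             # saved for certain
program_b_saved = (1 / 3) * 600 + (2 / 3) * 0     # expected number saved

# "Lives lost" framing
program_c_lost = 400                              # die for certain
program_d_lost = (1 / 3) * 0 + (2 / 3) * 600      # expected number of deaths

print(program_a_saved, program_b_saved)                                # 200 200.0
print(total_at_risk - program_c_lost, total_at_risk - program_d_lost)  # 200 200.0
```

In expectation, all four programs save 200 of the 600 people; only the frame changes.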
These are just three of the many biases that affect even the smartest among us. Other research shows that we are biased in favour of information that is easy for our minds to retrieve, are insensitive to the importance of base rates and sample sizes when we are making inferences, assume that random events will always look random, search for information that confirms our expectations even when disconfirming information would be more informative, claim a priori knowledge that didn’t exist due to the hindsight bias, and are subject to a host of other effects that continue to be developed in the literature (Bazerman & Moore, 2013).
ACTIVITY
Take a look at these two video clips, which provide an overview of two other common heuristics, the availability heuristic and the representativeness heuristic:
Availability: https://youtu.be/rzCwS7Ea65k
Representativeness: https://youtu.be/u6dnJkbdx2M
Contemporary Developments
Bounded rationality served as the integrating concept of the field of behavioural decision research for 40 years. Then, in 2000, Thaler (2000) suggested that decision making is bounded in two ways not precisely captured by the concept of bounded rationality. First, he argued that our willpower is bounded and that, as a consequence, we give greater weight to present concerns than to future concerns. Our immediate motivations are often inconsistent with our long-term interests in a variety of ways, such as the common failure to save adequately for retirement or the difficulty many people have staying on a diet. Second, Thaler suggested that our self-interest is bounded such that we care about the outcomes of others. Sometimes we positively value the outcomes of others—giving them more of a commodity than is necessary out of a desire to be fair, for example. And, in unfortunate contexts, we sometimes are willing to forgo our own benefits out of a desire to harm others.
My colleagues and I have recently added two other important bounds to the list. Chugh, Banaji, and Bazerman (2005) and Banaji and Bhaskar (2000) introduced the concept of bounded ethicality, which refers to the notion that our ethics are limited in ways we are not even aware of ourselves. Second, Chugh and Bazerman (2007) developed the concept of bounded awareness to refer to the broad array of focusing failures that affect our judgment, specifically the many ways in which we fail to notice obvious and important information that is available to us.
A final development is the application of judgment and decision-making research to the areas of behavioural economics, behavioural finance, and behavioural marketing, among others. In each case, these fields have been transformed by applying and extending research from the judgment and decision-making literature.
Fixing Our Decisions
Ample evidence documents that even smart people are routinely impaired by biases. Early research demonstrated, unfortunately, that awareness of these problems does little to reduce bias (Fischhoff, 1982). The good news is that more recent research documents interventions that do help us overcome our faulty thinking (Bazerman & Moore, 2013).
One critical path to fixing our biases is provided in Stanovich and West’s (2000) distinction between System 1 and System 2 decision making. System 1 processing is our intuitive system, which is typically fast, automatic, effortless, implicit, and emotional. System 2 refers to decision making that is slower, conscious, effortful, explicit, and logical. The six logical steps of decision making outlined earlier describe a System 2 process.
Clearly, a complete System 2 process is not required for every decision we make. In most situations, our System 1 thinking is quite sufficient; it would be impractical, for example, to logically reason through every choice we make while shopping for groceries. But, preferably, System 2 logic should influence our most important decisions. Nonetheless, we use our System 1 processes for most decisions in life, relying on them even when making important decisions.
The key to reducing the effects of bias and improving our decisions is to transition from trusting our intuitive System 1 thinking toward engaging more in deliberative System 2 thought. Unfortunately, the busier and more rushed people are, the more they have on their minds, and the more likely they are to rely on System 1 thinking (Chugh, 2004). The frantic pace of professional life suggests that executives often rely on System 1 thinking (Chugh, 2004).
ACTIVITY
Take a look at this two-part interview with Daniel Kahneman, who sheds more light on the distinction between System 1 and System 2:
Part 1: https://youtu.be/Yb5kh6KqHfE
Part 2: https://youtu.be/TyFWoohX0bo
Fortunately, it is possible to identify conditions where we rely on intuition at our peril and substitute more deliberative thought. One fascinating example of this substitution comes from journalist Michael Lewis’ (2003) account of how Billy Beane, the general manager of the Oakland Athletics, improved the outcomes of the failing baseball team after recognizing that the intuition of baseball executives was limited and systematically biased and that their intuitions had been incorporated into important decisions in ways that created enormous mistakes. Lewis (2003) documents that baseball professionals tend to overgeneralize from their personal experiences, be overly influenced by players’ very recent performances, and overweigh what they see with their own eyes, despite the fact that players’ multiyear records provide far better data. By substituting valid predictors of future performance (System 2 thinking), the Athletics were able to outperform expectations given their very limited payroll.
ACTIVITY
Take a look at this video clip from the film Moneyball, which foreshadows the change in Beane’s strategy: https://youtu.be/TpBcwGOvO80
Another important direction for improving decisions comes from Thaler and Sunstein’s (2008) book Nudge: Improving Decisions about Health, Wealth, and Happiness. Rather than setting out to de-bias human judgment, Thaler and Sunstein outline a strategy for how “decision architects” can change environments in ways that account for human bias and trigger better decisions as a result. For example, Beshears, Choi, Laibson, and Madrian (2008) have shown that simple changes to defaults can dramatically improve people’s decisions. They tackle the failure of many people to save for retirement and show that a simple change can significantly influence enrollment in retirement savings programs. In many companies, when you start your job, you need to proactively sign up to join the company’s retirement savings plan. Many people take years before getting around to doing so. When, instead, companies automatically enroll their employees in savings programs and give them the opportunity to “opt out,” the net enrollment rate rises significantly. By changing defaults, we can counteract the human tendency to live with the status quo.
Similarly, Johnson and Goldstein’s (2003) cross-European organ donation study reveals that countries that have opt-in organ donation policies, where the default is not to harvest people’s organs without their prior consent, sacrifice thousands of lives in comparison to opt-out policies, where the default is to harvest organs. Canada and too many other countries require that citizens opt in to organ donation through a proactive effort; as a consequence, consent rates range from 4.25% to 44% across these countries. In contrast, changing the decision architecture to an opt-out policy raises consent rates to between 85.9% and 99.98%. Designing the donation system with knowledge of the power of defaults can dramatically change donation rates without changing the options available to citizens. In contrast, a more intuitive strategy, such as the one in place in Canada, inspires defaults that result in many unnecessary deaths.
Concluding Thoughts
Our days are filled with decisions ranging from the small (what should I wear today?) to the important (should we get married?). Many have real-world consequences for our health, finances, and relationships. Simon, Kahneman, and Tversky created a field that highlights the surprising and predictable deficiencies of the human mind when making decisions. As we understand more about our own biases and shortcomings in our thinking, we can begin to take them into account or to avoid them. Only now have we reached the frontier of using this knowledge to help people make better decisions.
ACTIVITY
Click on the following link to take a 20-item practice quiz covering many of the concepts found on this page: https://cerego.com/sets/749108/learn. To begin the quiz, click the “Learn 20” button. You can return to this quiz anytime to refresh your knowledge.
Solutions to Sample Problems
A. Tuesday
B. By definition, there is no single answer to an ill-defined problem.
C. Amy and Tina are standing back to back.
D. The room with lions is the safest (if the lions haven’t eaten in three years they will be dead).
E. Die
F. X = 28. Each term adds the next run of consecutive integers (1, then 1 + 2, then 1 + 2 + 3, and so on) to the previous term (a short code check of this pattern follows):
8 + 1 = 9
9 + 1 + 2 = 12
12 + 1 + 2 + 3 = 18
18 + 1 + 2 + 3 + 4 = 28 = X
28 + 1 + 2 + 3 + 4 + 5 = 43
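For readers who like to verify the pattern, a few lines of Python (purely illustrative) regenerate the sequence by adding 1, then 1 + 2, then 1 + 2 + 3, and so on, to a starting value of 8.

```python
term, sequence = 8, []
for n in range(1, 6):
    term += sum(range(1, n + 1))   # add 1, then 1 + 2, then 1 + 2 + 3, ...
    sequence.append(term)
print(sequence)                    # [9, 12, 18, 28, 43]
```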
G. LAMINA is ANIMAL spelled backwards. Hence, the correct answer for 46251 is 15264.
H. Jail
I. Follow these steps (a short code check of this sequence appears after the list):
- Fill the 5-gallon jug; the 3-gallon jug is empty.
- Pour from the 5-gallon jug into the 3-gallon jug until the 3-gallon jug is full. Two gallons remain in the 5-gallon jug.
- Empty the 3-gallon jug.
- Pour the remaining two gallons from the 5-gallon jug into the 3-gallon jug.
- Refill the 5-gallon jug and pour from it into the 3-gallon jug until the 3-gallon jug is full; this takes one gallon.
- Four gallons now remain in the 5-gallon jug.
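The following sketch (illustrative only; the pour helper is not from the text) traces the listed steps with jug capacities of 5 and 3 gallons and confirms that the sequence ends with exactly 4 gallons in the larger jug.

```python
def pour(source, target, target_capacity):
    """Pour from source into target until the source is empty or the target is full."""
    amount = min(source, target_capacity - target)
    return source - amount, target + amount

five, three = 0, 0
five = 5                            # fill the 5-gallon jug
five, three = pour(five, three, 3)  # pour into the 3-gallon jug; 2 gallons remain
three = 0                           # empty the 3-gallon jug
five, three = pour(five, three, 3)  # move the 2 gallons into the 3-gallon jug
five = 5                            # refill the 5-gallon jug
five, three = pour(five, three, 3)  # top up the 3-gallon jug; this takes 1 gallon
print(five, three)                  # 4 3
```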
J. There are several ways to solve this problem; however, a perfect solution with 3 discs requires just 7 moves.
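For anyone curious why adding discs makes the Tower of Hanoi so much harder, here is a standard recursive solver, offered as a sketch rather than as part of the original text. With n discs it produces 2^n − 1 moves, which is 7 moves for 3 discs and 31 for 5.

```python
def hanoi(n, source="A", target="C", spare="B", moves=None):
    """Move n discs from the source peg to the target peg, using spare as the auxiliary peg."""
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi(n - 1, source, spare, target, moves)   # clear the n-1 smaller discs out of the way
    moves.append((source, target))               # move the largest remaining disc
    hanoi(n - 1, spare, target, source, moves)   # restack the smaller discs on top
    return moves

print(len(hanoi(3)))   # 7 moves
print(len(hanoi(5)))   # 31 moves
```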
References
- Alpert, M., & Raiffa, H. (1969). A progress report on the training of probability assessors. Unpublished Report.
- Aronson, E. (Ed.). (1995). Social cognition. In The social animal (p. 151). New York: W.H. Freeman and Company.
- Banaji, M. R., & Bhaskar, R. (2000). Implicit stereotypes and memory: The bounded rationality of social beliefs. In D. L. Schacter & E. Scarry (Eds.), Memory, brain, and belief (pp. 139–175). Cambridge, MA: Harvard University Press.
- Bazerman, M. H., & Moore, D. (2013). Judgment in managerial decision making (8th ed.). John Wiley & Sons Inc.
- Beshears, J., Choi, J. J., Laibson, D., & Madrian, B. C. (2008). The importance of default options for retirement saving outcomes: Evidence from the United States. In S. J. Kay & T. Sinha (Eds.), Lessons from pension reform in the Americas (pp. 59–87). Oxford: Oxford University Press.
- Chugh, D. (2004). Societal and managerial implications of implicit social cognition: Why milliseconds matter. Social Justice Research, 17(2), 203–222.
- Chugh, D., & Bazerman, M. H. (2007). Bounded awareness: What you fail to see can hurt you. Mind & Society, 6(1), 1–18.
- Chugh, D., Banaji, M. R., & Bazerman, M. H. (2005). Bounded ethicality as a psychological barrier to recognizing conflicts of interest. In D. Moore, D. M. Cain, G. Loewenstein, & M. H. Bazerman (Eds.), Conflicts of interest (pp. 74–95). New York, NY: Cambridge University Press.
- Fischhoff, B. (1982). Debiasing. In D. Kahneman, P. Slovic, & A. Tversky (Eds.), Judgment under uncertainty: Heuristics and biases (pp. 422–444). New York, NY: Cambridge University Press.
- German, T. P., & Barrett, H. C. (2005). Functional fixedness in a technologically sparse culture. Psychological Science, 16, 1–5.
- Johnson, E. J., & Goldstein, D. (2003). Do defaults save lives? Science 302(5649), 1338–1339.
- Joyce, E. J., & Biddle, G. C. (1981). Are auditors’ judgments sufficiently regressive? Journal of Accounting Research, 19(2), 323–349.
- Kahneman, D. (2011). Thinking, fast and slow. New York: Farrar, Straus, and Giroux.
- Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47(2), 263–292.
- Lewis, M. (2003). Moneyball: The art of winning an unfair game. New York, NY: W.W. Norton & Company Ltd.
- March, J. G., & Simon, H. A. (1958). Organizations. Oxford: Wiley.
- Pratkanis, A. (1989). The cognitive representation of attitudes. In A. R. Pratkanis, S. J. Breckler, & A. G. Greenwald (Eds.), Attitude structure and function (pp. 71–98). Hillsdale, NJ: Erlbaum.
- Simon, H. A. (1957). Models of man, social and rational: Mathematical essays on rational human behaviour in a social setting. New York, NY: John Wiley & Sons.
- Stanovich, K. E., & West, R. F. (2000). Individual differences in reasoning: Implications for the rationality debate? Behavioral and Brain Sciences, 23, 645–726.
- Thaler, R. H. (2000). From homo economicus to homo sapiens. Journal of Economic Perspectives, 14, 133–141.
- Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving decisions about health, wealth, and happiness. New Haven, CT: Yale University Press.
- Tversky, A., & Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science, New Series, 211(4481), 453–458.
- Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, New Series, 185(4157), 1124–1131.
- Tversky, A., & Kahneman, D. (1973). Availability: A heuristic for judging frequency and probability. Cognitive Psychology, 5(2), 207–232.
Attributions
- Problem Solving by OpenStax is licensed under a Creative Commons Attribution License 4.0 license. Download for free at http://cnx.org/contents/4abf04bf-93a0-45c3-9cbc-2cefd46e68cc@5.46. Revised by Rajiv Jhangiani
- Judgment and Decision Making by Max H. Bazerman is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License Revised by Rajiv Jhangiani
- Problem Solving by: Boundless. Boundless Psychology. Boundless, 26 May. 2016. Retrieved 19 Jun. 2016 from https://www.boundless.com/psychology/textbooks/boundless-psychology-textbook/cognition-9/problem-solving-490/problem-solving-497-16755/
- Challenge Your Creativity: 77 Problem Solving Exercises. Located at: http://dudye.com/challenge-your-creativity-77-problem-solving-exercises
- The Die Hard 3 Problem. Located at: http://www.math.tamu.edu/~dallen/hollywood/diehard/diehard.htm
- Tower of Hanoi. Authored by Mathisfun.com. Located at: https://www.mathsisfun.com/games/towerofhanoi.html. All Rights Reserved.
- Let’s build a filter. Authored by: Benjamin Ragheb. Located at: https://youtu.be/Z3csfLkMJT4. License: All Rights Reserved. License Terms: Standard YouTube license
- Episode 4 − Intuition and Rationality: Some photos. Authored by: Think101. Located at: https://youtu.be/MYwqFIxh5bc. License: All Rights Reserved. License Terms: Standard YouTube license
- Episode 4 − Intuition and Rationality: Availability. Authored by: Think101. Located at: https://youtu.be/rzCwS7Ea65k. License: All Rights Reserved. License Terms: Creative Commons Attribution license (reuse allowed)
- Episode 4 − Intuition and Rationality: Linda the feminist bank teller. Authored by: Think101. Located at: https://youtu.be/u6dnJkbdx2M. License: All Rights Reserved. License Terms: Creative Commons Attribution license (reuse allowed)
- Episode 4 − Intuition and Rationality: Conversation with Daniel Kahneman (Part 1). Authored by: Think101. Located at: https://youtu.be/Yb5kh6KqHfE. License: All Rights Reserved. License Terms: Creative Commons Attribution license (reuse allowed)
- Episode 4 − Intuition and Rationality: Conversation with Daniel Kahneman (Part 2). Authored by: Think101. Located at: https://youtu.be/TyFWoohX0bo. License: All Rights Reserved. License Terms: Creative Commons Attribution license (reuse allowed)
- Moneyball 2011, First pivotal scene – Peter Grant elaborates on baseball’s medieval thinking. Authored by: Jie Wang. Located at: https://youtu.be/TpBcwGOvO80. License: All Rights Reserved. License Terms: Standard YouTube license
- Noba Psychology – Judgment and Decision-Making (Cerego Quiz). Authored by Peter Lindberg. Located at: https://cerego.com/sets/749108/learn