CHAPTER 6

Fallacies

A fallacy is simply a mistake in reasoning. Some fallacies are formal, and some are informal. We will discuss formal fallacies in the first section of this chapter and informal fallacies in the next. This is probably the most practical chapter in the course. Learning how to recognize fallacies can be an extremely valuable “shield” against bad arguments.

6.1 Formal Fallacies

Watch and Learn

To learn more, watch The Critical Thinking Academy’s video, What is a Fallacy?

 

In previous chapters, we saw that we can determine whether an argument is valid or invalid without even having to know or understand what the argument is about. We saw that we could define certain valid rules of inference, such as modus ponens and modus tollens. These inference patterns are valid in virtue of their form, not their content. That is, any argument that has the same form as modus ponens or modus tollens will automatically be valid.

This helps us understand what philosophers mean when they talk about formal fallacies. A formal fallacy is an argument whose form is invalid. Thus, any argument that has an invalid form will automatically be invalid, regardless of the meaning of the sentences. Two formal fallacies that are similar to, but should never be confused with, modus ponens and modus tollens are denying the antecedent and affirming the consequent. Here are the forms of those invalid inferences:

Denying the antecedent

p ⊃ q

~p

∴ ~q

Affirming the consequent

p ⊃ q

q

∴ p

Any argument that has either of these forms is an invalid argument. For example:

If Kant was a deontologist, then he was a non-consequentialist.

Kant was not a deontologist.

Therefore, Kant was not a non-consequentialist.

The form of this argument is:

D ⊃ C

~D

∴ ~C

As you can see, this argument has the form of the fallacy denying the antecedent. Thus, we know that this argument is invalid even if we do not know what “Kant” or “deontologist” or “non-consequentialist” means. (Kant was a famous German philosopher of the late 1700s, whereas “deontology” and “non-consequentialist” are terms that come from ethical theory.) It is a mark of a formal fallacy that we can identify it even if we do not really understand the meanings of the sentences in the argument. Recall our Jabberwocky argument from chapter 2. Here is an argument that uses silly, made-up words from Lewis Carroll’s “Jabberwocky.” See if you can determine whether the argument’s form is valid or invalid:

If toves are brillig then toves are slithy.

Toves are slithy.

Therefore, toves are brillig.

You should be able to see that this argument has the form of affirming the consequent:

B ⊃ S

S

∴ B

As such, we know that the argument is invalid, even though we haven’t got a clue what “toves” are or what “slithy” or “brillig” means. The point is that we can identify formal fallacies without having to know what the sentences in them mean.
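Because formal fallacies are a matter of form alone, the check can even be done mechanically. For readers who know a little programming, here is a minimal Python sketch (purely illustrative; the helper names are invented for this example) that brute-forces every assignment of truth values to p and q and looks for a counterexample row—one where all the premises are true and the conclusion is false:

    from itertools import product

    def implies(p, q):
        # Material conditional: "p ⊃ q" is false only when p is true and q is false.
        return (not p) or q

    def is_valid(premises, conclusion):
        """Return True if no assignment of truth values to (p, q) makes
        every premise true while the conclusion is false."""
        for p, q in product([True, False], repeat=2):
            if all(premise(p, q) for premise in premises) and not conclusion(p, q):
                return False  # found a counterexample row
        return True

    # Modus ponens: p ⊃ q, p, therefore q -- valid
    print(is_valid([implies, lambda p, q: p], lambda p, q: q))          # True

    # Denying the antecedent: p ⊃ q, ~p, therefore ~q -- invalid
    print(is_valid([implies, lambda p, q: not p], lambda p, q: not q))  # False

    # Affirming the consequent: p ⊃ q, q, therefore p -- invalid
    print(is_valid([implies, lambda p, q: q], lambda p, q: p))          # False

The False outputs for the last two forms simply mean that a counterexample row exists, which is exactly the sense in which those forms are invalid.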


6.2 Informal Fallacies

In contrast to formal fallacies, which concern the structure of an argument, informal fallacies are those that cannot be identified without understanding the concepts involved in the argument. In this section, we will discuss some of the most important informal fallacies to be on guard against!

Watch and Learn

To learn more about Formal and Informal Fallacies, watch A Crash Course in Formal Logic Pt 4a: Formal and Informal Fallacies.

Composition Fallacy

A paradigm example of an informal fallacy is the fallacy of composition. Consider the following argument:

Each member on the gymnastics team weighs less than 110 lbs. Therefore, the whole gymnastics team weighs less than 110 lbs.

This argument commits the composition fallacy. In the composition fallacy one argues that since each part of the whole has a certain feature, it follows that the whole has that same feature. However, you cannot generally identify any argument that moves from statements about parts to statements about wholes as committing the composition fallacy because whether or not there is a fallacy depends on what feature we are attributing to the parts and wholes. Here is an example of an argument that moves from claims about the parts possessing a feature to a claim about the whole possessing that same feature, but does not commit the composition fallacy:

Every part of the car is made of plastic. Therefore, the whole car is made of plastic.

This conclusion does follow from the premise; there is no fallacy here. The difference between this argument and the preceding argument (about the gymnastics team) is not their form. In fact, both arguments have the same form:

Every part of X has the feature f. Therefore, the whole X has the feature f.

And yet one of the arguments is clearly fallacious, while the other is not. The difference between the two arguments is not their form, but their content. That is, the difference is what feature is being attributed to the parts and wholes. Some features (like weighing a certain amount) are such that if they belong to each part, then it does not follow that they belong to the whole. Other features (such as being made of plastic) are such that if they belong to each part, it follows that they belong to the whole.

Here is another example:

Every member of the team has been to Paris. Therefore, the team has been to Paris.

The conclusion of this argument does not follow. Just because each member of the team has been to Paris, it does not follow that the whole team has been to Paris, since it may not have been the case that each individual was there at the same time and was there in their capacity as a member of the team. Thus, even though it is plausible to say that the team is composed of every member of the team, it does not follow that since every member of the team has been to Paris, the whole team has been to Paris. Contrast that example with this one:

Every member of the team was on the plane. Therefore, the whole team was on the plane.

This argument, in contrast to the last one, contains no fallacy. It is true that if every member is on the plane, then the whole team is on the plane. And yet these two arguments have almost exactly the same form. The only difference is that the first argument is talking about the property, having been to Paris, whereas the second argument is talking about the property, being on the plane. The only reason we are able to identify the first argument as committing the composition fallacy and the second argument as not committing a fallacy is that we understand the relationship between the concepts involved. In the first case, we understand that it is possible that every member could have been to Paris without the team ever having been; in the second case we understand that as long as every member of the team is on the plane, it has to be true that the whole team is on the plane. The take home point here is that in order to identify whether an argument has committed the composition fallacy, one must understand the concepts involved in the argument. This is the mark of an informal fallacy: we have to rely on our understanding of the meanings of the words or concepts involved, rather than simply being able to identify the fallacy from its form.

Division Fallacy

The division fallacy is like the composition fallacy and they are easy to confuse. The difference is that the division fallacy argues that since the whole has some feature, each part must also have that feature. The composition fallacy, as we have just seen, goes in the opposite direction: since each part has some feature, the whole must have that same feature. Here is an example of a division fallacy:

The house costs 1 million dollars. Therefore, each part of the house costs 1 million dollars.

This is clearly a fallacy. Just because the whole house costs 1 million dollars, it does not follow that each part of the house costs 1 million dollars. However, here is an argument that has the same form, but that does not commit the division fallacy:

The whole team died in the plane crash. Therefore, each individual on the team died in the plane crash.

In this example, since we seem to be referring to one plane crash in which all the members of the team died (“the” plane crash), it follows that if the whole team died in the crash, then every individual on the team died in the crash. So, this argument does not commit the division fallacy. In contrast, the following argument has exactly the same form, but does commit the division fallacy:

The team played its worst game ever tonight. Therefore, each individual on the team played their worst game ever tonight.

It can be true that the whole team played its worst game ever even if it is true that no individual on the team played their worst game ever. Thus, this argument does commit the fallacy of division even though it has the same form as the previous argument, which does not commit the fallacy of division. This shows (again) that in order to identify informal fallacies (like composition and division), we must rely on our understanding of the concepts involved in the argument. Some concepts (like “team” and “dying in a plane crash”) are such that if they apply to the whole, they also apply to all the parts. Other concepts (like “team” and “worst game played”) are such that they can apply to the whole even if they do not apply to all the parts.

Begging the Question Fallacy

Consider the following argument:

Capital punishment is justified for crimes such as rape and murder because it is quite legitimate and appropriate for the state to put to death someone who has committed such heinous and inhuman acts.

The premise indicator “because” denotes the premise and (derivatively) the conclusion of this argument. In standard form, the argument is this:

It is legitimate and appropriate for the state to put to death someone who commits rape or murder.

Therefore, capital punishment is justified for crimes such as rape and murder.

You should notice something peculiar about this argument: the premise is essentially the same claim as the conclusion. The only difference is that the premise spells out what capital punishment means (the state putting criminals to death) whereas the conclusion just refers to capital punishment by name, and the premise uses terms like “legitimate” and “appropriate” whereas the conclusion uses the related term, “justified.” But these differences do not add up to any real differences in meaning. Thus, the premise is essentially saying the same thing as the conclusion. This is a problem: we want our premise to provide a reason for accepting the conclusion. But if the premise is the same claim as the conclusion, then it cannot possibly provide a reason for accepting the conclusion! Begging the question occurs when one (either explicitly or implicitly) assumes the truth of the conclusion in one or more of the premises. Begging the question is thus a kind of circular reasoning.

One interesting feature of this fallacy is that formally there is nothing wrong with arguments of this form. Here is what I mean. Consider an argument that explicitly commits the fallacy of begging the question. For example,

Capital punishment is morally permissible

Therefore, capital punishment is morally permissible

Now, apply any method of assessing validity to this argument and you will see that it is valid by any method. If we use the informal test (by trying to imagine that the premises are true while the conclusion is false), then the argument passes the test, since any time the premise is true, the conclusion will have to be true as well (since it is the exact same statement). Likewise, the argument is valid by our formal test of validity, truth tables. But while this argument is technically valid, it is still a really bad argument. Why? Because the point of giving an argument in the first place is to provide some reason for thinking the conclusion is true for those who do not already accept the conclusion. But if one does not already accept the conclusion, then simply restating the conclusion in a different way is not going to convince them. Rather, a good argument will provide some reason for accepting the conclusion that is sufficiently independent of that conclusion itself. Begging the question utterly fails to do this and this is why it counts as an informal fallacy. What is interesting about begging the question is that there is absolutely nothing wrong with the argument formally.

Whether or not an argument begs the question is not always an easy matter to sort out. As with all informal fallacies, detecting it requires a careful understanding of the meaning of the statements involved in the argument. Here is an example of an argument where it is not as clear whether there is a fallacy of begging the question:

Christian belief is warranted because according to Christianity there exists a being called “the Holy Spirit” which reliably guides Christians towards the truth regarding the central claims of Christianity.

One might think that there is a kind of circularity (or begging the question) involved in this argument since the argument appears to assume the truth of Christianity in justifying the claim that Christianity is true. But whether or not this argument really does beg the question is something on which there is much debate within the sub-field of philosophy called epistemology (“study of knowledge”). The philosopher Alvin Plantinga argues persuasively that the argument does not beg the question, but being able to assess that argument takes years of patient study in the field of epistemology (not to mention a careful engagement with Plantinga’s work). As this example illustrates, the issue of whether an argument begs the question requires us to draw on our general knowledge of the world. This is the mark of an informal, rather than formal, fallacy.


False Dichotomy Fallacy

Suppose I argued as follows:

Raising taxes on the wealthy will either hurt the economy or it will help it. But it will not help the economy. Therefore, it will hurt the economy.

The standard form of this argument is:

Either raising taxes on the wealthy will hurt the economy or it will help it.

Raising taxes on the wealthy will not help the economy.

Therefore, raising taxes on the wealthy will hurt the economy.

This argument contains a fallacy called a “false dichotomy.” A false dichotomy is simply a disjunction that does not exhaust all of the possible options. In this case, the problematic disjunction is the first premise: either raising the taxes on the wealthy will hurt the economy or it will help it. But these are not the only options. Another option is that raising taxes on the wealthy will have no effect on the economy. Notice that the argument above has the form of a disjunctive syllogism:

A v B

~A

∴ B

However, since the first premise presents two options as if they were the only two options, when in fact they are not, the first premise is false, and the argument fails. Notice that the form of the argument is perfectly good—the argument is valid. The problem is that this argument is not sound because the first premise of the argument commits the false dichotomy fallacy. False dichotomies are commonly encountered in the context of a disjunctive syllogism or constructive dilemma (see chapter 2).
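The same kind of brute-force truth-table check sketched in the previous section confirms this: the disjunctive syllogism form has no counterexample. A minimal, self-contained Python check (purely illustrative):

    from itertools import product

    # Disjunctive syllogism: A v B, ~A, therefore B.
    # Search for a counterexample: an assignment where both premises are true and the conclusion is false.
    counterexamples = [(a, b) for a, b in product([True, False], repeat=2)
                       if (a or b) and (not a) and (not b)]
    print(counterexamples)  # [] -- no counterexample, so the form is valid

Since the form is valid, any failure of the argument must come from a false premise—here, the false dichotomy in the first premise.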

In a speech made on April 5, 2004, President Bush made the following remarks about the causes of the Iraq war:

Saddam Hussein once again defied the demands of the world. And so I had a choice: Do I take the word of a madman, do I trust a person who had used weapons of mass destruction on his own people, plus people in the neighborhood, or do I take the steps necessary to defend the country? Given that choice, I will defend America every time.

The false dichotomy here is the claim that:

Either I trust the word of a madman or I defend America (by going to war against Saddam Hussein’s regime).

The problem is that these are not the only options. Other options include ongoing diplomacy and economic sanctions. Thus, even if it is true that Bush should not have trusted the word of Hussein, it does not follow that the only other option is going to war against Hussein’s regime. (Furthermore, it is not clear in what sense this was needed to defend America.) That is a false dichotomy.

As with all the previous informal fallacies we have considered, identifying a false dichotomy fallacy requires an understanding of the concepts involved. Thus, we have to use our understanding of the world in order to assess whether a false dichotomy fallacy is being committed or not.

Equivocation Fallacy

Consider the following argument:

Children are a headache. Aspirin will make headaches go away. Therefore, aspirin will make children go away.

This is a silly argument, but it illustrates the fallacy of equivocation. The problem is that the word “headache” is used equivocally—that is, in two different senses. In the first premise, “headache” is used figuratively, whereas in the second premise “headache” is used literally. The argument is only successful if the meaning of “headache” is the same in both premises. But it is not and this is what makes this argument an instance of the fallacy of equivocation.

Here is another example:

Taking a logic class helps you learn how to argue. But there is already too much hostility in the world today, and the fewer arguments the better. Therefore, you should not take a logic class.

In this example, the words “argue” and “argument” are used equivocally. Hopefully, at this point in the text, you recognize the difference. (If not, go back and reread section 1.1.)

The fallacy of equivocation is not always so easy to spot. Here is a trickier example:

The existence of laws depends on the existence of intelligent beings like humans who create the laws. However, some laws existed before there were any humans (e.g., laws of physics). Therefore, there must be some non-human, intelligent being that created these laws of nature.

The term “law” is used equivocally here. In the first premise it is used to refer to societal laws, such as criminal law; in the second premise it is used to refer to laws of nature. Although we use the term “law” to apply to both cases, they are importantly different. Societal laws, such as the criminal law of a society, are enforced by people and there are punishments for breaking the laws. Natural laws, such as laws of physics, cannot be broken and thus there are no punishments for breaking them. (Does it make sense to scold the electron for not doing what the law says it will do?)

As with every informal fallacy we have examined in this section, equivocation can only be identified by understanding the meanings of the words involved. In fact, the definition of the fallacy of equivocation refers to this very fact: the same word is being used in two different senses (i.e., with two different meanings). So, unlike formal fallacies, identifying the fallacy of equivocation requires that we draw on our understanding of the meaning of words and of our understanding of the world, generally.

Slippery Slope Fallacies

Slippery slope fallacies depend on the concept of vagueness. When a concept or claim is vague, it means that we do not know precisely what claim is being made, or what the boundaries of the concept are. The classic example used to illustrate vagueness is the “sorites paradox.” The term “sorites” is the Greek term for “heap” and the paradox comes from ancient Greek philosophy. Here is the paradox. I will give you two claims that each sound very plausible, but in fact lead to a paradox. Here are the two claims:

1. One grain of sand is not a heap of sand.

2. If I start with something that is not a heap of sand, then adding one grain of sand to that will not create a heap of sand.

For example, one grain of sand is not a heap (by the first claim), so by the second claim two grains of sand are not a heap; and since two grains are not a heap, neither are three grains of sand.

But since three grains of sand is not a heap then (by the second claim again) neither is four grains of sand. You can probably see where this is going. By continuing to add one grain of sand over and over, I will eventually end up with something that is clearly a heap of sand, but that will not be counted as a heap of sand if we accept both claims 1 and 2 above.

Philosophers continue to argue and debate about how to resolve the sorites paradox, but the point for us is just to illustrate the concept of vagueness. The concept “heap” is a vague concept in this example. But so are many other concepts, such as color concepts (red, yellow, green, etc.), moral concepts (right, wrong, good, bad), and just about any other concept you can think of. The one domain that seems to be unaffected by vagueness is that of mathematical and logical concepts. There are two fallacies related to vagueness: the causal slippery slope and the conceptual slippery slope. We will cover the conceptual slippery slope first since it relates most closely to the concept of vagueness I have explained above.

It may be true that there is no essential difference between 499 grains of sand and 500 grains of sand. But even if that is so, it does not follow that there is no difference between 1 grain of sand and 5 billion grains of sand. In general, just because we cannot draw a distinction between A and B, and we cannot draw a distinction between B and C, it does not mean we cannot draw a distinction between A and C. Here is an example of a conceptual slippery slope fallacy.

It is illegal for anyone under 21 to drink alcohol. But there is no difference between someone who is 21 and someone who is 20 years 11 months old. So, there is nothing wrong with someone who is 20 years and 11 months old drinking. But since there is no real distinction between being one month older and one month younger, there should not be anything wrong with drinking at any age. Therefore, there is nothing wrong with allowing a 10-year-old to drink alcohol.

Imagine the life of an individual in stages of 1-month intervals. Even if it is true that there is no distinction in kind between adjacent stages, it does not follow that there is not a distinction to be drawn at the extremes of either end. Clearly there is a difference between a 5-year-old and a 25-year-old—a distinction in kind that is relevant to whether they should be allowed to drink alcohol. The conceptual slippery slope fallacy assumes that because we cannot draw a distinction between adjacent stages, we cannot draw a distinction at all between any stages. One clear way of illustrating this is with color. Think of a color spectrum from purple to red to orange to yellow to green to blue. Each color grades into the next without there being any distinguishable boundaries between the colors—a continuous spectrum. Even if it is true that for any two adjacent hues on the spectrum we cannot distinguish between the two, it does not follow from this that there is no distinction to be drawn between any two portions of the spectrum, because then we would be committed to saying that there is no distinguishable difference between purple and yellow! The example of the color spectrum illustrates the general point that just because the boundaries between very similar things on a spectrum are vague, it does not follow that there are no differences between any two things on that spectrum.

Whether or not one will identify an argument as committing a conceptual slippery slope fallacy depends on the other things one believes about the world. Thus, whether or not a conceptual slippery slope fallacy has been committed will often be a matter of some debate. It will itself be vague. Here is a good example that illustrates this point.

People are found not guilty by reason of insanity when they cannot avoid breaking the law. But people who are brought up in certain deprived social circumstances are not much more able than the legally insane to avoid breaking the law. So, we should not find such individuals guilty any more than those who are legally insane.

Whether there is a conceptual slippery slope fallacy here depends on what you think about a host of other things, including individual responsibility, free will, and the psychological and social effects of deprived social circumstances such as poverty, lack of opportunity, abuse, etc. Some people may think that there are big differences between those who are legally insane and those who grow up in deprived social circumstances. Others may not think the differences are so great. The issues here are subtle, sensitive, and complex, which is why it is difficult to determine whether there is any fallacy here or not. If the differences between those who are insane and those who are the product of deprived social circumstances turn out to be like the differences between one shade of yellow and an adjacent shade of yellow, then there is no fallacy here. But if the differences turn out to be analogous to those between yellow and green (i.e., with many distinguishable stages of difference between them), then there would indeed be a conceptual slippery slope fallacy here. The difficulty of distinguishing instances of the conceptual slippery slope fallacy, and the fact that distinguishing it requires us to draw on our knowledge about the world, shows that the conceptual slippery slope fallacy is an informal fallacy.

The causal slippery slope fallacy is committed when one event is said to lead to some other (usually disastrous) event via a chain of intermediary events. If you have ever seen DIRECTV’s “get rid of cable” commercials, you will know exactly what I am talking about. (If you do not know what I am talking about, you should Google it right now and find out. They are quite funny.) Here is an example of a causal slippery slope fallacy (it is adapted from one of the DIRECTV commercials):

If you use cable, your cable will probably go on the fritz. If your cable is on the fritz, you will probably get frustrated. When you get frustrated you will probably hit the table. When you hit the table, your young daughter will probably imitate you. When your daughter imitates you, she will probably get thrown out of school. When she gets thrown out of school, she will probably meet undesirables. When she meets undesirables, she will probably marry undesirables. When she marries undesirables, you will probably have a grandson with a dog collar. Therefore, if you use cable, you will probably have a grandson with dog collar.

This example is silly and absurd, yes. But it illustrates the causal slippery slope fallacy. Causal slippery slope arguments are always made up of a series of probabilistic conditional statements that link the first event to the last event. The fallacy is committed when one assumes that just because each individual conditional statement is probable, the conditional that links the first event to the last event is also probable. Even if we grant that each “link” in the chain is individually probable, it does not follow that the whole chain (or the conditional that links the first event to the last event) is probable. Suppose, for the sake of the argument, we assign probabilities to each “link” or conditional statement, like this. (I have assigned high conditional probabilities to each conditional for the sake of the argument; I do not actually think these things are as probable as I have assumed here.)

If you use cable, then your cable will probably go on the fritz (.9)

If your cable is on the fritz, then you will probably get angry (.9)

If you get angry, then you will probably hit the table (.9)

If you hit the table, your daughter will probably imitate you (.8)

If your daughter imitates you, she will probably be kicked out of school (.8)

If she is kicked out of school, she will probably meet undesirables (.9)

If she meets undesirables, she will probably marry undesirables (.8)

If she marries undesirables, you will probably have a grandson with a dog collar (.8)

However, even if we grant that the probability of each link in the chain is high (80–90%), the conclusion does not even reach a probability higher than chance. Recall that in order to figure the probability of a conjunction, we must multiply the probabilities of the conjuncts:

(.9) × (.9) × (.9) × (.8) × (.8) × (.9) × (.8) × (.8) = .27

That means the probability of the conclusion (i.e., that if you use cable, you will have a grandson with a dog collar) is only 27%, despite the fact that each conditional has a relatively high probability! The causal slippery slope fallacy is actually a formal probabilistic fallacy and so could have been discussed in chapter 3 with the other formal probabilistic fallacies. What makes it a formal rather than informal fallacy is that we can identify it without even having to know what the sentences of the argument mean. I could just as easily have written out a nonsense argument composed of a series of probabilistic conditional statements. But I would still have been able to identify the causal slippery slope fallacy because I would have seen that there was a series of probabilistic conditional statements leading to a claim that the conclusion of the series was also probable. That is enough to tell me that there is a causal slippery slope fallacy, even if I do not really understand the meanings of the conditional statements.
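The arithmetic behind this point is nothing more than repeated multiplication of the link probabilities. A minimal Python sketch, using the made-up figures assigned above:

    # Made-up conditional probabilities for each "link" in the cable example above.
    links = [0.9, 0.9, 0.9, 0.8, 0.8, 0.9, 0.8, 0.8]

    chain = 1.0
    for p in links:
        chain *= p  # the probability of the whole chain is the product of the links

    print(round(chain, 2))  # 0.27 -- each link is probable, yet the chain as a whole is not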

It is helpful to contrast the causal slippery slope fallacy with the valid form of inference, hypothetical syllogism. Recall that a hypothetical syllogism has the following kind of form:

A ⊃ B

B ⊃ C

C ⊃ D

D ⊃ E

∴ A ⊃ E

The only difference between this and the causal slippery slope fallacy is that whereas in the hypothetical syllogism the link between each component is certain, in a causal slippery slope fallacy the link between each event is probabilistic. It is the fact that each link is probabilistic that accounts for the fallacy. One way of putting this point is that probability is not transitive. Just because A makes B probable and B makes C probable and C makes D probable, it does not follow that A makes D probable. In contrast, when the links are certain rather than probable, if A always leads to B and B always leads to C and C always leads to D, then it has to be the case that A always leads to D.


Fallacies of Relevance

What all fallacies of relevance have in common is that they offer considerations or responses that are irrelevant to the argument at issue. Fallacies of relevance can be psychologically compelling, but it is important to distinguish between rhetorical techniques that are psychologically compelling, on the one hand, and rationally compelling arguments, on the other. What makes something a fallacy is that it fails to be rationally compelling once we have carefully considered it. That said, arguments that fail to be rationally compelling may still be psychologically or emotionally compelling. The first fallacy of relevance that we will consider, the ad hominem fallacy, is an excellent example of a fallacy that can be psychologically compelling.

Ad hominem

“Ad hominem” is a Latin phrase that can be translated into English as “against the man.” In an ad hominem fallacy, instead of responding to (or attacking) the argument a person has made, one attacks the person him or herself. In short, one attacks the person making the argument rather than the argument itself. Here is an anecdote that reveals an ad hominem fallacy (one that actually occurred in my ethics class).

A philosopher named Peter Singer had made an argument that it is morally wrong to spend money on luxuries for oneself rather than giving away all of the money one does not strictly need to charity. The argument is actually an argument from analogy (whose details I discussed in section 3.3), but its essence is that every day in this world children die preventable deaths, and there are charities that could save the lives of these children if they were funded by individuals from wealthy countries like our own. Since there are things that we all regularly buy that we do not need (e.g., Starbucks lattes, beer, movie tickets, or extra clothes or shoes), if we continue to purchase those things rather than using that money to save the lives of children, then we are essentially contributing to the deaths of those children. In response to Singer’s argument, one student in the class asked: “Does Peter Singer give his money to charity? Does he do what he says we are all morally required to do?”

The implication of this student’s question (which I confirmed by following up with her) was that if Peter Singer himself does not donate all his extra money to charities, then his argument is not any good and can be dismissed. But that would be to commit an ad hominem fallacy. Instead of responding to the argument that Singer had made, this student attacked Singer himself. That is, she wanted to know how Singer lived and whether he was a hypocrite or not. Was he the kind of person who would tell us all that we had to live a certain way but fail to live that way himself? But all of this is irrelevant to assessing Singer’s argument. Suppose that Singer did not donate his excess money to charity and instead spent it on luxurious things for himself. Still, the argument that Singer has given can be assessed on its own merits. Even if it were true that Peter Singer was a total hypocrite, his argument may nevertheless be rationally compelling. And it is the quality of the argument that we are interested in, not Peter Singer’s personal life and whether or not he is hypocritical. Whether Singer is or is not a hypocrite is irrelevant to whether the argument he has put forward is strong or weak, valid or invalid. The argument stands on its own and it is that argument rather than Peter Singer himself that we need to assess.

Nonetheless, there is something psychologically compelling about the question: Does Peter Singer practice what he preaches? I think what makes this question seem compelling is that humans are very interested in finding “cheaters” or hypocrites—those who say one thing and then do another. Evolutionarily, our concern with cheaters makes sense because cheaters cannot be trusted, and it is essential for us (as a group) to be able to pick out those who cannot be trusted. That said, whether or not a person giving an argument is a hypocrite is irrelevant to whether that person’s argument is good or bad. So, there may be psychological reasons why humans are prone to find certain kinds of ad hominem fallacies psychologically compelling, even though ad hominem fallacies are not rationally compelling.

Not every instance in which someone attacks a person’s character is an ad hominem fallacy. Suppose a witness is on the stand testifying against a defendant in a court of law. When the witness is cross-examined by the defense lawyer, the defense lawyer tries to undermine the witness’s credibility, perhaps by digging up things about the witness’s past. For example, the defense lawyer may find out that the witness cheated on her taxes five years ago or that the witness failed to pay her parking tickets. The reason this is not an ad hominem fallacy is that in this case the lawyer is trying to establish whether what the witness is saying is true or false, and in order to determine that we have to know whether the witness is trustworthy. These facts about the witness’s past may be relevant to determining whether we can trust the witness’s word. In this case, the witness is making claims that are either true or false rather than giving an argument. In contrast, when we are assessing someone’s argument, the argument stands on its own in a way the witness’s testimony does not. In assessing an argument, we want to know whether the argument is strong or weak and we can evaluate the argument using the logical techniques surveyed in this text. In contrast, when a witness is giving testimony, they are not trying to argue anything. Rather, they are simply making a claim about what did or did not happen. So, although it may seem that a lawyer is committing an ad hominem fallacy in bringing up things about the witness’s past, these things are actually relevant to establishing the witness’s credibility. In contrast, when considering an argument that has been given, we do not have to establish the arguer’s credibility because we can assess the argument they have given on its own merits. The arguer’s personal life is irrelevant.

Straw Man

Suppose that my opponent has argued for a position, call it position A, and in response to his argument, I give a rationally compelling argument against position B, which is related to position A, but is much less plausible (and thus much easier to refute). What I have just done is attacked a straw man—a position that “looks like” the target position but is actually not that position. When one attacks a straw man, one commits the straw man fallacy. The straw man fallacy misrepresents one’s opponent’s argument and is thus a kind of irrelevance. Here is an example.

Two candidates for political office in Colorado, Tom and Fred, are having an exchange in a debate in which Tom has laid out his plan for putting more money into health care and education and Fred has laid out his plan which includes earmarking more state money for building more prisons which will create more jobs and, thus, strengthen Colorado’s economy. Fred responds to Tom’s argument that we need to increase funding to health care and education as follows: “I am surprised, Tom, that you are willing to put our state’s economic future at risk by sinking money into these programs that do not help to create jobs. You see, folks, Tom’s plan will risk sending our economy into a tailspin, risking harm to thousands of Coloradans. On the other hand, my plan supports a healthy and strong Colorado and would never bet our state’s economic security on idealistic notions that simply do not work when the rubber meets the road.”

Fred has committed the straw man fallacy. Just because Tom wants to increase funding to health care and education does not mean he does not want to help the economy. Furthermore, increasing funding to health care and education does not entail that fewer jobs will be created. Fred has attacked a position that is not the position that Tom holds, but is in fact a much less plausible, easier-to-refute position. Of course, it would be silly for any political candidate to run on a platform that included “harming the economy”; presumably, no political candidate would run on such a platform. Nonetheless, this exact kind of straw man is ubiquitous in political discourse in our country.

Here is another example.

Nancy has just argued that we should provide middle schoolers with sex education classes, including how to use contraceptives so that they can practice safe sex should they end up in the situation where they are having sex. Fran responds: “Proponents of sex education try to encourage our children to adopt a sex-with-no-strings-attached mentality, which is harmful to our children and to our society.”

Fran has committed the straw man (or straw woman) fallacy by misrepresenting Nancy’s position. Nancy’s position is not that we should encourage children to have sex, but that we should make sure that they are fully informed about sex so that if they do have sex, they go into it at least a little less blindly and are able to make better decisions regarding sex.

As with other fallacies of relevance, straw man fallacies can be compelling on some level, even though they are irrelevant. It may be that part of the reason we are taken in by straw man fallacies is that humans are prone to “demonize” the “other”—including those who hold a moral or political position different from our own. It is easy to think bad things about those with whom we do not regularly interact. And it is easy to forget that people who are different than us are still people just like us in all the important respects. Many years ago, atheists were commonly thought of as highly immoral people and stories about the horrible things that atheists did in secret circulated widely. People believed that these strange “others” were capable of the most horrible savagery. After all, they may have reasoned, if you do not believe there is a God holding us accountable, why be moral? The Jewish philosopher, Baruch Spinoza, was an atheist who lived in the Netherlands in the 17th century. He was accused of all sorts of things that were commonly believed about atheists. But he was in fact as upstanding and moral as any person you could imagine. The people who knew Spinoza knew better, but how could so many people be so wrong about Spinoza? I suspect that part of the reason is that since at that time there were very few atheists (or at least very few people actually admitted to it), very few people ever knowingly encountered an atheist. Because of this, the stories about atheists could proliferate without being put in check by the facts. I suspect the same kind of phenomenon explains why certain kinds of straw man fallacies proliferate. If you are a conservative and mostly only interact with other conservatives, you might be prone to holding lots of false beliefs about liberals. And so maybe you are less prone to notice straw man fallacies targeted at liberals because the false beliefs you hold about them incline you to see the straw man fallacies as true.

Tu quoque (Appeal to Hypocrisy)

“Tu quoque” is a Latin phrase that can be translated into English as “you too” or “you, also.” The tu quoque fallacy is a way of avoiding answering a criticism by bringing up a criticism of your opponent rather than answering the criticism. For example, suppose that two political candidates, A and B, are discussing their policies and A brings up a criticism of B’s policy. In response, B brings up her own criticism of A’s policy rather than responding to A’s criticism of her policy. B has here committed the tu quoque fallacy. The fallacy is best understood as a way of avoiding having to answer a tough criticism that one may not have a good answer to. This kind of thing happens all the time in political discourse. Tu quoque, as I have presented it, is fallacious when the criticism one raises is simply in order to avoid having to answer a difficult objection to one’s argument or view. However, there are circumstances in which a tu quoque kind of response is not fallacious. If the criticism that A brings toward B is a criticism that equally applies not only to A’s position but to any position, then B is right to point this fact out. For example, suppose that A criticizes B for taking money from special interest groups. In this case, B would be totally right (and there would be no tu quoque fallacy committed) to respond that not only does A take money from special interest groups, but every political candidate running for office does. That is just a fact of life in American politics today. So, A really has no criticism at all of B since everyone does what B is doing and it is in many ways unavoidable. Thus, B could (and should) respond with a “you too” rebuttal and in this case that rebuttal is not a tu quoque fallacy.

Genetic Fallacy

The genetic fallacy occurs when one argues (or, more commonly, implies) that the origin of something (e.g., a theory, idea, policy, etc.) is a reason for rejecting (or accepting) it. For example, suppose that Jack is arguing that we should allow physician-assisted suicide and Jill responds that that idea was first used in Nazi Germany. Jill has just committed a genetic fallacy because she is implying that because the idea is associated with Nazi Germany, there must be something wrong with the idea itself. What she should have done instead is explain what, exactly, is wrong with the idea rather than simply assuming that there must be something wrong with it since it has a negative origin. The origin of an idea has nothing inherently to do with its truth or plausibility. Suppose that Hitler constructed a mathematical proof in his early adulthood (he did not, but just suppose). The validity of that mathematical proof stands on its own; the fact that Hitler was a horrible person has nothing to do with whether the proof is good. Likewise with any other idea: ideas must be assessed on their own merits, and the origin of an idea is neither a merit nor a demerit of the idea.

Although genetic fallacies are most often committed when one associates an idea with a negative origin, it can also go the other way: one can imply that because the idea has a positive origin, the idea must be true or more plausible. For example, suppose that Jill argues that the Golden Rule is a good way to live one’s life because the Golden Rule originated with Jesus in the Sermon on the Mount (it did not, actually, even though Jesus does state a version of the Golden Rule). Jill has committed the genetic fallacy in assuming that the (presumed) fact that Jesus is the origin of the Golden Rule has anything to do with whether the Golden Rule is a good idea.

Appeal to Consequences

The appeal to consequences fallacy is like the reverse of the genetic fallacy: whereas the genetic fallacy consists in the mistake of trying to assess the truth or reasonableness of an idea based on the origin of the idea, the appeal to consequences fallacy consists in the mistake of trying to assess the truth or reasonableness of an idea based on the (typically negative) consequences of accepting that idea. For example, suppose that the results of a study revealed that there are IQ differences between different races (this is a fictitious example; there is no such study that I know of). In debating the results of this study, one researcher claims that if we were to accept these results, it would lead to increased racism in our society, which is not tolerable. Therefore, these results must not be right since if they were accepted, it would lead to increased racism. The researcher who responded in this way has committed the appeal to consequences fallacy. Again, we must assess the study on its own merits. If there is something wrong with the study, some flaw in its design, for example, then that would be a relevant criticism of the study. However, the fact that the results of the study, if widely circulated, would have a negative effect on society is not a reason for rejecting these results as false. The consequences of some idea (good or bad) are irrelevant to the truth or reasonableness of that idea. Notice that the researchers, being convinced of the negative consequences of the study on society, might rationally choose not to publish the study (for fear of the negative consequences). This is totally fine and is not a fallacy. The fallacy consists not in choosing not to publish something that could have adverse consequences, but in claiming that the results themselves are undermined by the negative consequences they could have. The fact is, sometimes truth can have negative consequences and falsehoods can have positive consequences. This just goes to show that the consequences of an idea are irrelevant to the truth or reasonableness of an idea.


Appeal to Authority (Ad Verecundiam)

In a society like ours, we have to rely on authorities to get on in life. For example, the things I believe about electrons are not things that I have ever verified for myself. Rather, I have to rely on the testimony and authority of physicists to tell me what electrons are like. Likewise, when there is something wrong with my car, I have to rely on a mechanic (since I lack that expertise) to tell me what is wrong with it. Such is modern life. So there is nothing wrong with needing to rely on authority figures in certain fields (people with the relevant expertise in that field)—it is inescapable. The problem comes when we invoke someone whose expertise is not relevant to the issue for which we are invoking it. For example, suppose that a group of doctors sign a petition to prohibit abortions, claiming that abortions are morally wrong. If Bob cites the fact that these doctors are against abortion as evidence that abortion must be morally wrong, then Bob has committed the appeal to authority fallacy. The problem is that doctors are not authorities on what is morally right or wrong. Even if they are authorities on how the body works and how to perform certain procedures (such as abortion), it does not follow that they are authorities on whether or not these procedures should be performed—the ethical status of these procedures. It would be just as much an appeal to authority fallacy if Melissa were to argue that since some other group of doctors supported abortion, that shows that it must be morally acceptable. In either case, since doctors are not authorities on moral issues, their opinions on a moral issue like abortion are irrelevant. In general, an appeal to authority fallacy occurs when someone takes what an individual says as evidence for some claim, when that individual has no particular expertise in the relevant domain (even if they do have expertise in some other, unrelated, domain).


6.3 Formal Fallacies of Probability

In this and the remaining sections of this chapter, we will consider some formal fallacies of probability. These fallacies are easy to understand once they are pointed out, but they can be difficult to detect in the moment because of the way our minds mislead us—analogous to the way our minds can be misled when watching a magic trick. In addition to introducing the fallacies, I will suggest some psychological explanations for why these fallacies are so common, despite how easy they are to see once they have been pointed out.

The conjunction fallacy is best introduced with an example.

Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.

Given this information about Linda, which of the following is more probable?

a. Linda is a bank teller.

b. Linda is a bank teller and is active in the feminist movement.

If you are like most people who answer this question, you will answer “b.” But that cannot be correct because it violates the basic rules of probability. In particular, notice that option b contains option a (i.e., Linda is a bank teller). But option b also contains more information—that Linda is also active in the feminist movement. The problem is that a conjunction can never be more probable than either one of its conjuncts. Suppose we say it is not very probable that Linda is a bank teller (how boring, given the description of Linda, which makes her sound interesting!). Let us set that probability low, say .4. Then what is the probability of her being active in the feminist movement? Let us set that high, say .9. However, the probability that she is both a bank teller and active in the feminist movement must be computed as the probability of a conjunction, like this:

.4 × .9 = .36

So, given these probability assignments (which I have just made up but which seem fairly plausible), the probability of Linda being both a bank teller and active in the feminist movement is .36. But .36 is a lower probability than .4, which was the probability that she is a bank teller. So, option b cannot be more probable than option a. Notice that even if we say it is absolutely certain that Linda is active in the feminist movement (i.e., we set the probability of her being active in the feminist movement at 1), the probability of option b is still only equal to that of option a, since (.4)(1) = .4.
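The general rule—that a conjunction can never be more probable than either of its conjuncts—is easy to confirm with these made-up numbers. A quick Python check, treating the two claims as independent (as the calculation above does):

    # Made-up probabilities from the Linda example above.
    p_teller = 0.4    # probability that Linda is a bank teller
    p_feminist = 0.9  # probability that Linda is active in the feminist movement

    # Treating the two claims as independent for illustration,
    # the probability of the conjunction is the product of the two.
    p_both = p_teller * p_feminist
    print(p_both)                # 0.36
    print(p_both <= p_teller)    # True -- the conjunction never exceeds a conjunct
    print(p_both <= p_feminist)  # True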

Sometimes it is easy to spot conjunction fallacies. Here is an example that illustrates that we can in fact easily see that a conjunction is not more probable than either of its conjuncts.

Mark is drawing cards from a shuffled deck of cards. Which is more probable?

a. Mark draws a spade.

b. Mark draws a spade that is a 7.

In this case, it is clear which of the options is more probable. Clearly a is more probable since it requires less to be true. Option a would be true if option b were true, but option a could also be true if option b were false (i.e., Mark could have drawn any other card from the spades suit). The chances of drawing a spade of any rank are ¼ (or .25), whereas the chances of drawing a 7 of spades are computed using the probability of the conjunction:

P(drawing a spade) = .25

P(drawing a 7) = 4/52 (since there are four 7s in the deck of 52) = .077

Thus, the probability of drawing a card that is both a spade and a 7 = (.25)(.077) = .019 (which checks out, since there is exactly one 7 of spades among the 52 cards, and 1/52 ≈ .019)

Since .25 > .019, option a is more probable (not that you had to do all the calculations to see this).

Thus, there are cases where we can easily avoid committing the conjunction fallacy. So, what is the difference between this case and the Linda case? The Nobel Prize-winning psychologist, Daniel Kahneman (and his long-time collaborator, Amos Tversky), has for many years suggested a psychological explanation for this difference. The explanation is complex, but I can give you the gist of it quite simply. Kahneman suggests that our minds are wired to find patterns and many of these patterns we find are based on what he calls “representativeness.” In the Linda case, the idea of Linda being active in the feminist movement fits better with the description of Linda as a philosophy major, as being active in social justice movements, and, perhaps, as being single. We build up a picture of Linda and then we try to match the descriptions to her. “Bank teller” does not really match anything in the description of Linda. That is, the description of Linda is not representative of a bank teller. However, for many people, it is representative of a feminist. Thus, our minds more or less automatically see the match between representativeness of the description of Linda and option b, which mentions she is a feminist. Kahneman thinks that in cases like these, our minds substitute a question of representativeness for the question of probability, thus answering the probability question incorrectly. We are distracted from the probability question by seeking representativeness, which our minds more automatically look for and think about than probability. For Kahneman, the psychological explanation is needed to explain why even trained mathematicians and those who deal regularly with probability still commit the conjunction fallacy. The psychological explanation that our brains are wired to look for representativeness, and that we unwittingly substitute the question of representativeness for the question of probability, explains why even experts make these kinds of mistakes.

The Base Rate Fallacy

Consider the following scenario. You go in for some testing for some health problems you have been having and after a number of tests, you test positive for colon cancer. What are the chances that you really do have colon cancer? Let us suppose that the test is not perfect, but it is 95% accurate. That is, in the case of those who really do have colon cancer, the test will detect the cancer 95% of the time (and thus miss it 5% of the time). (The test will also misdiagnose those who do not actually have colon cancer 5% of the time.) Many people would be inclined to say that, given the test and its accuracy, there is a 95% chance that you have colon cancer. However, if you are like most people and are inclined to answer this way, you are wrong. In fact, you have committed the fallacy of ignoring the base rate (i.e., the base rate fallacy).

The base rate in this example is the rate of colon cancer in the population. Only a very small percentage of the population actually has colon cancer (let us suppose it is .005, or .5%), so the probability that you have it must take into account the very low probability that you are one of the few who have it. That is, prior to the test (and not taking into account any other details about you), there was a very low probability that you have it—a half of one percent chance (.5%). The test is 95% accurate, but given the very low prior probability that you have colon cancer, we cannot simply say that there is a 95% chance that you have it. Rather, we must temper that figure with the very low base rate. Here is how we do it. Let us suppose that our population is 100,000 people. If we were to apply the test to that whole population, it would deliver roughly 5000 false positives. A false positive occurs when a test registers that some feature is present when the feature is not really present. In this case, a false positive is when the test for colon cancer (which gives false positives in 5% of cases) says that someone has it when they really do not. The number of people who actually have colon cancer (based on the stated base rate) is 500, and the test will accurately identify 95 percent of those (or 475 people). So, what you need to know is the probability that you are one of those who tested positive and actually have colon cancer, rather than one of the false positives. And what is the probability of that? It is the number of people who test positive and actually have colon cancer (475) divided by the total number the test would identify as having colon cancer. This latter number includes those the test would misidentify (5000) as well as those it would accurately identify (475)—thus the total number the test would identify as having colon cancer would be 5475. So, the probability that you have colon cancer, given the positive test, is 475/5475 ≈ .087, or roughly 9%. Thus, contrary to our initial reasoning that there was a 95% chance that you have colon cancer, the chance is less than a tenth of that—it is less than 10%! In thinking that the probability that you have cancer is closer to 95%, you would be ignoring the base rate of the disease in the first place (which, as we have seen, is quite low). This is the signature of any base rate fallacy. Before closing this section, let us look at one more example of a base rate fallacy.
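The counting above can be packaged into a short calculation. Here is an illustrative Python sketch (the function is a made-up helper; it applies the 5% error rate only to the 99,500 people who do not have cancer, so it counts 4,975 false positives rather than the rounded 5,000 used above, but the answer comes out essentially the same):

    def prob_condition_given_positive(population, base_rate, accuracy):
        # Counting version of Bayes' rule for a test that is `accuracy` reliable
        # both at detecting the condition and at clearing those who lack it.
        have = population * base_rate            # people who actually have the condition
        lack = population - have                 # people who do not
        true_positives = have * accuracy         # correctly flagged
        false_positives = lack * (1 - accuracy)  # wrongly flagged
        return true_positives / (true_positives + false_positives)

    # Colon cancer example: 100,000 people, 0.5% base rate, 95% accurate test.
    print(prob_condition_given_positive(100_000, 0.005, 0.95))  # ~0.087, i.e., roughly 9%, not 95%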

Suppose that the government has developed a machine that is able to detect terrorist intent with an accuracy of 90%. During a joint meeting of Congress, a highly trustworthy source says that there is a terrorist in the building. (Let us suppose, for the sake of simplifying this example, that there is in fact a terrorist in the building.) In order to determine who the terrorist is, building security seals all the exits, rounds up all 3000 people in the building, and uses the machine to test each person. The first 30 people pass without triggering a positive identification from the machine, but the very next person triggers a positive identification of terrorist intent. The question is: what are the chances that the person who set off the machine really is a terrorist? Consider the following three possibilities:

  • a) 90%, b) 10%, or c) .3%.

If you answered 90%, then you committed the base rate fallacy again. The actual answer is “c”—less than 1%! Here is the relevant reasoning. The base rate here is that it is exceedingly unlikely that any given individual is a terrorist, since there is only one terrorist in the building and there are 3000 people in the building. That means the probability of any one person being a terrorist, before any results of the test, is exceedingly low: 1/3000. Since the test is 90% accurate, it will misidentify roughly 10% of the 2,999 innocent people as terrorists, which comes to about 300 false positives. Assuming the machine correctly flags the one actual terrorist, it will identify a total of about 301 individuals as “possessing terrorist intent.” The probability that any one of them actually possesses terrorist intent is 1/301 = .3%. So, the probability is drastically lower than 90%. It is not even close. This is another good illustration of how far off probabilities can be when the base rate is ignored.
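The same calculation can be sketched in Python. Again, the figures are just the ones assumed in the example (one terrorist, 3000 people in the building, a machine that is right 90% of the time):

people_in_building = 3000
terrorists = 1
accuracy = 0.90                                      # the machine is right 90% of the time

innocents = people_in_building - terrorists          # 2,999 people
false_positives = round(innocents * (1 - accuracy))  # about 300 innocents wrongly flagged
flagged = false_positives + terrorists               # assume the one real terrorist is also flagged

p_terrorist_given_alarm = terrorists / flagged
print(round(p_terrorist_given_alarm, 4))             # 0.0033, i.e., about 0.3%, not 90%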

The small numbers fallacy

Suppose a study showed that of the 3,141 counties of the United States, the incidence of kidney cancer was lowest in those counties which are mostly rural, sparsely populated, and located in traditionally Republican states. In fact, this is true. What accounts for this interesting finding? Most people would be tempted to look for a causal explanation—to look for features of the rural environment that account for the lower incidence of cancer. However, they would be wrong (in this case) to do so. It is easy to see why once we consider the counties that have the highest incidence of kidney cancer: they are counties that are mostly rural, sparsely populated, and located in traditionally Republican states! So whatever it was you thought might account for the lower cancer rates in rural counties cannot be the right explanation, since these counties also have the highest rates of cancer. It is important to understand that it is not the same counties that have the highest and lowest rates—for example, county X does not have both a high and a low cancer rate (relative to other U.S. counties). That would be a contradiction (and so cannot possibly be true). Rather, what is the case is that counties that have the highest kidney cancer rates are “mostly rural, sparsely populated, and located in traditionally Republican states” but also counties that have the lowest kidney cancer rates are “mostly rural, sparsely populated, and located in traditionally Republican states.” How could this be? Before giving you the explanation, I’ll give you a simpler example and see if you can figure it out from that example.

Suppose that a jar contains equal amounts of red and white marbles. Jack and Jill are taking turns drawing marbles from the jar. However, they draw marbles at different rates. Jill draws 5 marbles at a time while Jack draws 2 marbles at a time. Who is more likely to draw either all red or all white marbles more often: Jack or Jill?

The answer here should be obvious: Jack is more likely to draw marbles of all the same color more often since Jack is only drawing 2 marbles at a time. Since Jill is drawing 5 marbles at a time, it will be less likely that her draws will yield marbles of all the same color. This is simply a fact of sampling and is related to the sampling errors discussed in section 3.1. A sample that is too small will tend not to be representative of the population. In the marbles case, if we view Jack’s draws as samples, then his samples, when they yield marbles of all the same color, will be far from representative of the ratio of marbles in the jar, since the ratio is 50/50 white to red and his draws sometimes yield 100% red or 100% white. Jill, on the other hand, will tend not to get as unrepresentative a sample. Since Jill is drawing a larger number of marbles, it is less likely that her samples would be drastically off in the way Jack’s could be. The general point to be taken from this example is that smaller samples tend to the extremes—both in terms of overrepresenting some feature and in underrepresenting that same feature.
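If you want to check this for yourself, here is a rough Python simulation of the marble draws. It treats the jar as large enough that each marble drawn is effectively an independent 50/50 pick, which is an idealization of the example:

import random

def all_same_color_rate(n_marbles, trials=100_000):
    """Estimate how often a draw of n_marbles from a 50/50 jar comes out all one color."""
    same = 0
    for _ in range(trials):
        draw = [random.choice("RW") for _ in range(n_marbles)]
        if len(set(draw)) == 1:          # all red or all white
            same += 1
    return same / trials

print("Jack (2 at a time):", all_same_color_rate(2))   # about 0.50
print("Jill (5 at a time):", all_same_color_rate(5))   # about 0.06

Jack’s small draws come out all one color roughly half the time; Jill’s larger draws do so far less often, just as the text says.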

Can you see how this might apply to the case of kidney cancer rates in rural, sparsely populated counties? There is a national kidney cancer rate which is an average of all the kidney cancer rates of the 3,141 counties in the U.S. Imagine ranking each county in terms of the cancer rates from highest to lowest. The finding is that there is a relatively larger proportion of the sparsely populated counties at the top of this list, but also a relatively larger proportion of the sparsely populated counties at the bottom of the list. But why would it be that the more sparsely populated counties would be overrepresented at both ends of the list? The reason is that these counties have smaller populations, so they will tend to have more extreme results (of either the higher or lower rates). Just as Jack is more likely to get either all white marbles or all red marbles (an extreme result), the less populated counties will tend to have cancer rates that are at the extreme, relative to the national average. And this is a purely statistical fact; it has nothing to do with features of those environments causing the cancer rate to be higher or lower. Just as Jack’s extreme draws have nothing to do with the way he is drawing (but are simply the result of statistical, mathematical facts), the extremes of the smaller counties have nothing to do with features of those counties, but only with the fact that they are smaller and so will tend to have more extreme results (i.e., cancer rates that are either higher or lower than the national average).

The first take home lesson here is that smaller groups will tend towards the extremes in terms of their possession of some feature, relative to larger groups. We can call this the law of small numbers. The second take home message is that our brains are wired to look for causal explanations rather than mathematical explanations, and because of this we are prone to ignore the law of small numbers and look for a causal explanation of phenomena instead. The small numbers fallacy is our tendency to seek a causal explanation for some phenomenon when only the law of small numbers is needed to explain that phenomenon.

We will end this section with a somewhat humorous and incredible example of a small numbers bias that, presumably, wasted over a billion dollars. Some time ago, the Gates Foundation (the charitable foundation of Microsoft founder Bill Gates) donated $1.7 billion to research a curious finding: smaller schools tend to be more successful than larger schools. That is, if you consider a rank ordering of the most successful schools, the smaller schools will tend to be overrepresented near the top (i.e., there is a higher proportion of them near the top of the list compared to the proportion of larger schools at the top of the list). This is the finding that the Gates Foundation invested $1.7 billion to help understand. In order to do so, they created smaller schools, sometimes splitting larger schools in half. However, none of this was necessary. Had the Gates Foundation (or those advising them) looked at the characteristics of the worst schools, they would have found that those schools also tended to be smaller! The “finding” is merely a result of the law of small numbers: smaller groups tend towards the extremes (on both ends of a spectrum) more so than larger groups. In this case, the fact that smaller schools tend to be both more successful and less successful is explained in the same way as we explain why Jack tends to get either all red or all white marbles more often than Jill.

Regression to the mean fallacy

Humans are prone to see causes even when no such cause is present. For example, if I have just committed some wrong and thunder cracks immediately afterward, I may think that my wrong action caused the thunder (e.g., because the gods were angry with me). The term “snake oil” refers to a product that promises certain (e.g., health) benefits but is actually fraudulent and has no benefits whatsoever. For example, consider a product that is supposed to help you recover from a common cold. You take the medicine and then within a few days, you are all better! No cold! It must have been the medicine. Or maybe you just regressed to the mean. Regression to the mean describes the tendency of things to go back to normal or to return to something close to the relevant statistical average. When you have a cold, you are outside of the average in terms of health, but you will naturally return to a state of health, with or without the “medicine.” If anyone were to try to convince you to buy such a medicine, you should decline, because the fact that you got better from your cold more likely has to do with natural regression to the mean (returning to normal) than with the special medicine.

Another example. Suppose you live in Lansing and it has been over 100 degrees for two weeks straight. Someone says that if you pay tribute and do a special dance to Baal, the temperature will drop. Suppose you do this and the temperature does drop. Was it Baal or just regression to the mean? Probably regression to the mean, unless we have some special reason for thinking it is Baal. The point is, extreme situations tend to regress towards less extreme, more average situations. Since it is very rare for it to ever be over 100 degrees in Lansing, the fact that the temperature drops is to be expected, regardless of one’s prayers to Baal.

Suppose that a professional golfer has been on a hot streak. She has been winning every tournament she enters by ten strokes—she is beating the competition as if they were middle school golfers. She is just playing so much better than them. Then something happens. The golfer all of a sudden begins to play like an average player. What explains her fall from greatness? The sports commentators speculate: could it be that she switched her caddy, or that it is warmer now than it was when she was on her streak, or perhaps it was fame that went to her head once she had started winning all those tournaments? Chances are, none of these is the right explanation, because no such explanation is needed. Most likely she just regressed to the mean and is now playing like everyone else—still like a pro, just not like a golfer who is out of this world good. Even those who are skilled can get lucky (or unlucky), and when they do, we should expect that eventually that luck will end and they will regress to the mean.
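To see that no causal story is needed, here is a small Python sketch in which each simulated round is a fixed skill level plus random luck. The skill level of 70 and the size of the luck term are made-up numbers for illustration; the point is only that rounds following unusually good rounds tend to land back near the golfer’s own average:

import random

random.seed(1)
skill = 70.0                            # the golfer's "true" average score (assumed)

def one_round():
    return skill + random.gauss(0, 3)   # score = constant skill + random luck

rounds = [one_round() for _ in range(1000)]

# pick out the unusually good rounds (in golf, lower scores are better)
hot = [i for i, score in enumerate(rounds[:-1]) if score < skill - 4]

avg_hot = sum(rounds[i] for i in hot) / len(hot)
avg_next = sum(rounds[i + 1] for i in hot) / len(hot)
print("average hot round: ", round(avg_hot, 1))    # well below 70
print("average next round:", round(avg_next, 1))   # back near 70

Nothing about the golfer changes between the hot round and the next one; the drop-off is built into the statistics of skill plus luck.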

As these examples illustrate, one commits the regression to the mean fallacy when one tries to give a causal explanation of a phenomenon that is merely statistical or probabilistic in nature. The best way to rule out regression to the mean as the explanation is to do a study in which one compares two groups. For example, suppose we could get our snake oil salesman to agree to a study in which one group of people who had colds took the medicine (the experimental group) and another group of people did not take the medicine or took a placebo (the control group). In this situation, if we found that the experimental group got better and the control group did not, or that the experimental group got better more quickly than the control group, then perhaps we would have to say that there is something to this snake oil medicine. But without a control group for comparison, even if lots of people took the snake oil medicine and got better from their colds, it would not prove anything about the efficacy of the medicine.

Gambler’s fallacy

The gambler’s fallacy occurs when one thinks that independent, random events can be influenced by each other. For example, suppose I have a fair coin and I have just flipped 4 heads in a row. Erik, on the other hand, has a fair coin that he has flipped 4 times and gotten tails each time. We are each taking bets that our next flip will be heads. With whom should you place your bet? If you are inclined to say that you should place the bet with Erik, reasoning that he has been flipping all tails and, since the coin is fair, the flips must even out soon, then you have committed the gambler’s fallacy. The fact is, each flip is independent of the others, so the fact that I have just flipped 4 heads in a row does not increase or decrease my chances of flipping a head. Likewise for Erik. It is true that, as long as the coin is fair, over a large number of flips we should expect the proportion of heads to tails to be about 50/50. But there is no reason to expect that a particular flip will be more likely to be one or the other. Since the coin is fair, each flip has the same probability of being heads and the same probability of being tails—50%.
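A quick Python simulation makes the point: among sequences of five fair flips, the chance that the fifth flip is heads is about 50% whether the first four were all heads or all tails. The number of trials here is arbitrary:

import random

random.seed(0)
after_four_heads, after_four_tails = [], []

for _ in range(200_000):
    flips = [random.random() < 0.5 for _ in range(5)]   # True means heads
    if all(flips[:4]):                   # the first four flips were all heads
        after_four_heads.append(flips[4])
    elif not any(flips[:4]):             # the first four flips were all tails
        after_four_tails.append(flips[4])

print("P(heads | four heads so far):", sum(after_four_heads) / len(after_four_heads))  # about 0.5
print("P(heads | four tails so far):", sum(after_four_tails) / len(after_four_tails))  # about 0.5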

 

Chapter Review

Review what you have learned about fallacies in this chapter by watching the following short videos produced by the Critical Thinker’s Academy:

Fallacies that violate Rules of Rational Argumentation

Affirming the Consequent

The Ad Hominem Fallacy

Ad Hominem (Guilt by Association)

Fallacies: Appeal to Hypocrisy

Fallacies: Appeal to Popular Belief

Fallacies: Appeal to Authority

Fallacies: False Dilemma

The Straw Man Fallacy

The “Red Herring” Fallacy

Fallacies: Slippery Slope

Fallacies: Begging the Question (narrow sense)

Fallacies: Begging the Question (broad sense)


1 The information in the next three sections is adapted from Fundamental Methods of Logic (Knachel), licensed under a Creative Commons Attribution 4.0 International License.
2 Kahneman gives this explanation in numerous places, including, most exhaustively (and for a general audience), in his 2011 book, Thinking, Fast and Slow. New York, NY: Farrar, Straus and Giroux.

License

An Introduction to Reason and Argument Copyright © by John Mack. All Rights Reserved.
