Humans are a storytelling species. We explain concepts using analogous stories; we automatically assign meaning and purpose to patterns; and we find anecdotes particularly moving. Our heuristics, the rules of thumb we use in our reasoning, can create and maintain wonderful stories at the cost of nuance and accuracy: we assign more blame to those we dislike and more merit to those we like, creating a clear separation between the good guys and the bad guys; we feel that rhymes imply truth; we assess ambiguous information as confirming our preconceived narratives; and we evaluate arguments as if they were stories, judging them by their believability rather than by their logical structure and evidential basis. Overcoming these tendencies is a crucial step on the path towards intellectual progress, since reality has shown, time and time again, that it is under no obligation to conform to the stories we tell about it.
Our storytelling tendencies are particularly misleading when it comes to probabilistic reasoning. Nevertheless, if we recognize our storytelling tendencies, and try to replace them with something more similar to the “stories” used by scientists, we might improve our probabilistic reasoning with relative ease. Framing claims in terms of hypotheses, which incorporate a sense of their intrinsic likelihood and implications, might help us overcome some of our misleading storytelling tendencies.
Understanding Our Hypotheses: The Conjunction Fallacy
The first step in hypothesis-framing is to construct a claim as a hypothesis with an intrinsic probability, rather than as a story. This helps us avoid the conjunction fallacy, which occurs when we judge the co-occurrence of two events to be more believable, and thus more intrinsically likely, than the occurrence of one of those events on its own. This tendency was illustrated by a famous study in which participants were presented with the following question:
Linda is 31 years old, single, outspoken and very bright. She majored in philosophy. As a student, she was deeply concerned about issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.
Which is more probable?
- Linda is a bank teller.
- Linda is a bank teller and is active in the feminist movement.
Most of the participants chose option 2, although it cannot be more probable than option 1, since it conjoins an extra event with the event described in option 1. However, option 2 fits the story better and therefore seemed more believable to many. It is as if participants treated Linda’s occupation as a bank teller as a given, and then reasoned that Linda was more likely to be an active feminist than not. Explicitly defining the two options as hypotheses should clarify that Linda’s being a bank teller is part of each hypothesis, not a given, whereas her being a feminist activist is an addition made by one hypothesis, but not excluded by the other. If we judge the claims as hypotheses, rather than stories, we should easily notice that an extra detail cannot make the second hypothesis more intrinsically likely, even if it makes the overall story seem more believable.
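To see the arithmetic behind this, here is a minimal sketch in Python. The probabilities are hypothetical, chosen purely for illustration; the point is that, however plausible the added detail, the probability of the conjunction can never exceed the probability of the single event.

```python
import random

# Hypothetical probabilities, chosen only for illustration.
P_BANK_TELLER = 0.05            # P(Linda is a bank teller)
P_FEMINIST_GIVEN_TELLER = 0.90  # P(active feminist | bank teller)

def simulate(n_trials=1_000_000, seed=0):
    """Estimate P(teller) and P(teller AND feminist) by simulation."""
    rng = random.Random(seed)
    teller = 0
    teller_and_feminist = 0
    for _ in range(n_trials):
        if rng.random() < P_BANK_TELLER:
            teller += 1
            if rng.random() < P_FEMINIST_GIVEN_TELLER:
                teller_and_feminist += 1
    return teller / n_trials, teller_and_feminist / n_trials

p_teller, p_both = simulate()
print(f"P(bank teller)              ~ {p_teller:.4f}")
print(f"P(bank teller AND feminist) ~ {p_both:.4f}")
# The conjunction is always the rarer event, no matter how plausible the
# added detail: P(A and B) = P(A) * P(B | A) <= P(A).
```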
We may notice a more organic manifestation of the conjunction fallacy when an outlandish claim is supplemented by a less outlandish claim, resulting in a better story that is nevertheless less likely to be true. For example, Joseph Smith, the founder of Mormonism, claimed to have uncovered golden plates with ancient Egyptian inscriptions which had been buried in a hill in Manchester, New York. He elaborated on this outlandish claim by adding that an angel appeared to him, and guided him to the plates. Claims of supernatural appearances were not so unusual in 1820s New York, which was going through the religious fervor of the Second Great Awakening, so the mention of angelic guidance probably made Joseph Smith’s story more convincing—especially since we often remember information as true or false, rather than uncertain, meaning that the less outlandish claim could be treated as a given, despite its lack of independent substantiation. However, if we frame Joseph Smith’s story as a hypothesis, we should easily realize that receiving angelic guidance and finding golden plates is even less intrinsically likely than just finding golden plates. If his word alone is not enough to convince us that he found the golden plates, then it should be even further from enough to convince us that he received angelic guidance and found the plates.
The Raven Paradox and the Nature of Evidence
After clearly defining our hypotheses, we need to consider their implications—that is, their predictions. The results of this process, if thoroughly considered, may conflict with our mistaken intuitions about evidence. Take the paradox of the raven. It starts with the proposition that all ravens are black. This implies that if we see a raven, it should be black. Therefore, if we see a black raven, the raven’s blackness provides evidence in favor of the proposition. Furthermore, if all ravens are black, it follows that every non-black thing we encounter should not be a raven. This leads to the seemingly paradoxical conclusion that every non-black non-raven thing around you provides evidence that all ravens are black. If your couch is not black, that is evidence that all ravens are black. Can you feel those storytelling tendencies kicking in?
While the idea that a non-black couch supports the proposition all ravens are black is deeply counterintuitive, thinking about the matter in terms of hypotheses should help us resolve the apparent paradox. First, let’s frame the proposition and its negation as competing hypotheses: H1) all ravens are black; and H2) not all ravens are black. Next, we compare the predictions given by each hypothesis: 1) a raven is more likely to be black under hypothesis 1 than hypothesis 2; and 2) a non-black thing is more likely to be a non-raven under hypothesis 1 than hypothesis 2. As such, both black ravens and non-black non-ravens support the proposition all ravens are black, because both are better predicted by the proposition than by its negation.
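To make the comparison concrete, here is a small sketch of a toy world. The object counts and the assumption that, under H2, half of all ravens are non-black are made up purely for illustration; what matters is the direction and the size of the likelihood ratios.

```python
# A toy world with made-up numbers, chosen only for illustration.
N_RAVENS = 1_000                    # ravens in the world
NONBLACK_NONRAVEN_COUNT = 900_000   # non-black objects that are not ravens
NONBLACK_RAVEN_RATE_H2 = 0.5        # under H2, assume half of all ravens are non-black

def predictions(nonblack_raven_rate):
    """Return the two predictions compared in the text:
    P(black | we inspect a raven) and P(non-raven | we inspect a non-black thing)."""
    nonblack_ravens = N_RAVENS * nonblack_raven_rate
    p_black_given_raven = 1 - nonblack_raven_rate
    p_nonraven_given_nonblack = NONBLACK_NONRAVEN_COUNT / (
        NONBLACK_NONRAVEN_COUNT + nonblack_ravens)
    return p_black_given_raven, p_nonraven_given_nonblack

h1 = predictions(0.0)                     # H1: all ravens are black
h2 = predictions(NONBLACK_RAVEN_RATE_H2)  # H2: not all ravens are black

print("Black raven:         likelihood ratio H1/H2 =", h1[0] / h2[0])
print("Non-black non-raven: likelihood ratio H1/H2 =", h1[1] / h2[1])
# Both ratios exceed 1, so both observations favour H1, but the second is
# only a hair above 1: a non-black couch really is evidence that all ravens
# are black, just astonishingly weak evidence.
```

This also suggests why the paradox feels so wrong: the couch is evidence, but of a strength so negligible that our intuition rounds it down to nothing.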
I suspect that the raven paradox seems paradoxical because we often judge propositions by how much sense they make as stories about our observations, instead of treating them as hypotheses that should be judged by their predictive power. This is evidenced by our tendency to fall for unfalsifiable explanations, which can create believable narratives after the fact, yet make no predictions in advance. One of Karl Popper’s primary examples of unfalsifiable explanations was Alfred Adler’s inferiority complex concept, which Adler could use to convincingly explain almost any behavior. For example, if a man endangers his life in a hopeless attempt to save a drowning child, Adler could explain that the man’s inferiority complex gave him a need to demonstrate superiority through bravery, and thus motivated his action. If, however, the man did not act to save the child, Adler could explain that the man’s inferiority complex gave him a need to demonstrate superiority through rational restraint, and thus motivated his inaction. The concept of the inferiority complex can be used to construct compelling narratives after the fact, yet it has no predictive power, and is therefore functionally equivalent to admitting that we know nothing about the man concerned. Because the concept predicts both responses equally well, the man’s response provides no evidence for or against it, and, by the same token, is not genuinely explained by it.
Using Evidence: The Monty Hall Problem
No other statistical puzzle comes so close to fooling all the people all the time … even Nobel physicists systematically give the wrong answer, and … they insist on it, and they are ready to berate in print those who propose the right answer.—Massimo Piattelli-Palmarini, The Power of Logical Thinking.
Now that we have seen how to specify hypotheses, and derive predictions from them, let’s consider how to use evidence to examine the Monty Hall problem. The problem became widely known when Marilyn vos Savant discussed it in her 1990 Parade column. She received thousands of letters, many of them written by individuals with PhDs, who strongly disagreed with vos Savant’s correct solution. The Monty Hall problem was later studied using research participants, almost all of whom answered it incorrectly. In one study, when experimenters repeatedly simulated the problem, pigeons needed fewer repetitions than undergraduate students to learn the correct solution. The problem goes as follows:
Suppose you’re on a game show, and you’re given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what’s behind the doors, opens another door, say No. 3, which has a goat. He then says to you, ‘Do you want to pick door No. 2?’ Is it to your advantage to switch your choice?
This problem becomes a paradox when one considers that the answer depends on whether or not the host knows what is hidden behind each door. While the Monty Hall problem has fooled an impressive number of people, there is no reason to be fooled if we frame it in terms of hypotheses, predictions and corresponding evidence. Let’s start with the standard problem, in which you pick door 1 and the host, who knows which door hides the car, opens door 3. Under the hypothesis that you picked the right door (door 1), which door would you expect the host to open? Either door 2 or door 3: the chances are 50:50. However, under the hypothesis that you picked the wrong door and the car is behind door 2, you would fully expect (100%) the host to open door 3, since he cannot reveal the car prematurely. The opening of door 3 is therefore better predicted by the hypothesis that the car is behind door 2, and that hypothesis is indeed twice as likely to be correct. However, if the host does not know where the car is hidden, he simply opens one of the two remaining doors at random, and his revealing a goat behind door 3 is equally likely whether the car is behind door 1 or door 2. The evidence therefore supports neither hypothesis over the other: if the host does not know where the car is hidden, the two remaining doors are equally likely to conceal the car. By framing our options as hypotheses, and considering the evidence in light of their predictions, we can easily solve the Monty Hall problem.
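If you would rather check this by brute force than by argument, the following simulation is a minimal sketch of both variants of the game (the door labels and trial counts are arbitrary choices):

```python
import random

def play(host_knows, switch, rng):
    """Play one round; return True if the player wins the car, or None if the
    ignorant host accidentally reveals the car (the round is then discarded)."""
    doors = [0, 1, 2]
    car = rng.choice(doors)
    pick = 0  # the player always starts with door 1 (index 0)

    if host_knows:
        # A knowing host opens a door that is neither the pick nor the car.
        openable = [d for d in doors if d != pick and d != car]
    else:
        # An ignorant host opens either unpicked door, possibly revealing the car.
        openable = [d for d in doors if d != pick]
    opened = rng.choice(openable)
    if opened == car:
        return None

    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

def win_rate(host_knows, switch, n=200_000, seed=1):
    rng = random.Random(seed)
    results = [play(host_knows, switch, rng) for _ in range(n)]
    valid = [r for r in results if r is not None]
    return sum(valid) / len(valid)

for host_knows in (True, False):
    for switch in (True, False):
        print(f"host knows: {host_knows!s:5}  switch: {switch!s:5}  "
              f"win rate ~ {win_rate(host_knows, switch):.3f}")
# With a knowing host, switching wins about 2/3 of the time; with an ignorant
# host (counting only rounds in which a goat happened to be revealed),
# switching and staying both win about 1/2 of the time.
```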
Application: Extraordinary Claims Require Extraordinary Evidence
Having covered the essentials of hypothesis-framing, we can now use them to make sense of Carl Sagan’s oft-misunderstood adage: “extraordinary claims require extraordinary evidence.” While this saying contains an important kernel of truth, I doubt that many people can clearly explain what makes a claim extraordinary, or what makes evidence extraordinary. However, these two elements can be easily understood using a hypothesis framework.
From the perspective of a hypothesis framework, an extraordinary claim is a hypothesis that is extremely intrinsically unlikely given our prior knowledge, whereas extraordinary evidence is 1) extremely unlikely if the extraordinary hypothesis is untrue and 2) much better predicted by the hypothesis than by its negation. For example, the claim that I am the wealthiest person in the world is extraordinarily improbable, since only one of the world’s nearly eight billion individuals is the wealthiest. To support my claim, I could roll a die 20 times, and point to the extremely unlikely sequence as evidence for my claim. However, while it is true that this evidence (the sequence of die rolls) is extremely unlikely even if my claim is untrue (the hypothesis is negated), it is no better predicted if my claim is true, and therefore does not provide any evidence whatsoever with regard to my claim. Next, I could take a $100 bill out of my wallet, and offer it as evidence of my vast wealth. Now we have actual evidence, since my immediate access to a $100 bill is better predicted by my claim to wealth than by its negation. However, this evidence is not extremely unlikely if I am not the world’s richest person, and is therefore not strong enough to overcome my claim’s intrinsic improbability. Next, we could go to Forbes’ trusted annual list of billionaires, and check my ranking there. If I am the world’s wealthiest person, the list should state this. If I am not the world’s wealthiest person, the list is incredibly unlikely to state this. Therefore, my appearance at the top of Forbes’ list would count as extraordinary evidence, as it is very likely if my claim is true, while it is extremely unlikely under its negation (i.e. if my claim is untrue).
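The whole argument can be written down as a Bayesian update in odds form. In the sketch below, the prior and all of the likelihoods are made-up numbers, assumed purely to illustrate the structure; only the Forbes listing carries a likelihood ratio large enough to overcome the claim’s tiny prior probability.

```python
# Hypothetical numbers, chosen only to illustrate the structure of the argument.
PRIOR_ODDS = 1 / 8_000_000_000  # roughly one person in eight billion is the wealthiest

def update(prior_odds, p_evidence_if_true, p_evidence_if_false):
    """Bayesian update in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * (p_evidence_if_true / p_evidence_if_false)

# 1) A specific sequence of 20 die rolls: equally (im)probable either way.
odds_dice = update(PRIOR_ODDS, (1 / 6) ** 20, (1 / 6) ** 20)

# 2) Producing a $100 bill: assume the wealthiest person is twice as likely
#    as a random person to have one at hand (made-up likelihoods).
odds_bill = update(PRIOR_ODDS, 0.9, 0.45)

# 3) Appearing atop Forbes' list: very likely if the claim is true,
#    vanishingly unlikely if it is false (again, made-up likelihoods).
odds_forbes = update(PRIOR_ODDS, 0.99, 1e-10)

for label, odds in [("die rolls", odds_dice),
                    ("$100 bill", odds_bill),
                    ("Forbes listing", odds_forbes)]:
    probability = odds / (1 + odds)
    print(f"{label:15} posterior probability ~ {probability:.3g}")
# Only evidence that is far more likely under the claim than under its
# negation can lift an extraordinarily improbable claim to plausibility.
```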
The End of the Beginning
While hypothesis framing can help us avoid many of the pitfalls we face as a storytelling species, there are other important issues that damage our reasoning capacity and that will not be solved by hypothesis framing alone. We tend to misrepresent disfavored hypotheses as weaker versions of themselves, which we then easily disprove. We put baseless limits on the variety of possible hypotheses, which often results in false dichotomies. We use the same evidence to both adjust and justify our hypotheses, which often turns the evidence into mere decoration for our favored hypotheses. And we tend to search for and overestimate the strength of evidence in support of our favored hypotheses, while ignoring or underestimating the strength of counter-evidence. Adopting hypothesis frames can improve our reasoning, yet it cannot take away the need to honestly engage with competing hypotheses and the full breadth of evidence. These are skills that we may never fully master. Yet if we wish to live in a more reasonable world, it is up to us to reason as best as we can, and encourage others to do the same.