Peter Boghossian is an assistant professor of philosophy at Portland State University. He is the author of A Manual for Creating Atheists (2013) and, with James Lindsay, of the forthcoming How to Have Impossible Conversations: A Very Practical Guide. Maarten Boudry teaches at the Department of Philosophy and Moral Sciences at Ghent University. He is the co-editor, with Massimo Pigliucci, of Philosophy of Pseudoscience: Reconsidering the Demarcation Problem (2013) and Science Unlimited?: The Challenges of Scientism (2018) and has authored/co-authored two books in Dutch.
The two philosophers exchanged correspondence on doxastic voluntarism, the idea that you can make conscious choices about what you believe. The consensus view, summed up by Boghossian, is that “one can only believe what one thinks is true.” However, Boghossian argues that, under certain circumstances, you have the freedom to fully believe something you suspect or even know to be false.
The conversation between Boudry and Boghossian took place on the platform Letter. Letter provides a space for public, shareable, one-on-one conversations, free from the gladiatorial ferocity of Twitter and the distractions of other social media. I’ve written more about our aims at Letter here. You can read the full correspondence on the Letter site.
What Does It Mean to Hold a Belief?
So what do we mean when we say we believe something—as opposed to hoping, fearing, suspecting, imagining or entertaining the notion as a fantasy? One common definition is that belief involves commitment. Carl Ginet writes that, when you believe a proposition p, you “count on its being the case that p,” you are willing to stake something on that assumption and are “dismissive or complacent … toward the possibility of … its turning out that not-p.” He provides the example of someone who, halfway into a long car journey, suddenly wonders whether he locked his front door before leaving. He believes he did and therefore decides not to turn back—risking the possibility that the door is open and he will be robbed. We can assess a person’s beliefs, then, by asking what her actions imply that she assumes to be true. (This is why John Hickenlooper famously drank a glass of fracking fluid to prove its safety.)
Pamela Hieronymi describes this commitment in terms of arguments, rather than actions. It involves a willingness to be “answerable to certain questions and criticisms,” able, as Jon Rosen puts it, “to confidently defend my view with a set of arguments.” These arguments must make sense to you personally, i.e., as Gray Maddrey describes it here, you must be prepared to answer the question “what reason do you have for choosing?”—even if only to yourself.
Richard Dawkins’ analogy of memes may help here. Beliefs are memes that “aim” (figuratively) to gain traction within our minds, so that we can propagate them, by influencing others. To believe something, we must be convinced it meets certain requirements. We cannot believe at random. Faced with a potential belief, we are like employers sizing up candidates. Perhaps we would love to work with a particular applicant but, unfortunately, we cannot hire her because—for example—she is not a resident of our country and we are not permitted to employ non-residents. The equivalent of residency would be the belief’s compatibility with our internal, previously established criteria for determining whether or not something is true.
How Can You Make Yourself Believe Something?
To think that we can choose our beliefs implies a fundamental assumption that we possess meaningful free will. I am a non-believer in free will—as I have discussed elsewhere. For the sake of argument here, though, I’ll assume that some of our actions are under our complete control. But is forming a belief about something one of those actions?
Philosophers distinguish between direct and indirect voluntary control. We have direct control over actions that we can perform instantly, competently, at will. For example, if you are able-bodied, you can choose to stand on your tiptoes right now. But if you attempt a double pirouette en passé, you will probably wobble and fall. You could, however, begin daily ballet classes tomorrow and, eventually, you will probably be able to execute pirouettes with ease: this is indirect voluntary control. You choose to take an action now (go to ballet class) in order to produce an effect (pirouetting) in the future.
Direct doxastic voluntarism—direct control over your beliefs—implies that you can consciously choose to believe something right now. Indirect doxastic voluntarism means that you can begin to research, investigate and weigh up the evidence, which will probably lead you to change your beliefs in the future. You might, for example, wish to discard negative beliefs about yourself: that you are worthless, a failure, unlovable, etc. You probably cannot do this directly and instantly: but you can sign up for cognitive behavioural therapy (CBT), which is designed to demonstrate that those beliefs about your worth and potential for happiness are erroneous.
The problem is not just that results are not guaranteed—in either case—but that even this indirect form of control is only likely to work in very specific circumstances. CBT deals with subjective evaluations, which are more closely allied to feelings and attitudes than to beliefs. I can choose to feel pride that I am able to dance a pirouette; or ashamed that it is not that elegant because I don’t practise enough. There is wiggle room there because we are notoriously bad at judging ourselves—and, crucially, we are aware of this. Both the Dunning–Kruger effect and its opposite, impostor syndrome, are well-known phenomena. Facts are less malleable. I cannot choose to believe I am the prima ballerina at the Bolshoi.
In addition, while there is a strong, direct correlation between practising and gaining skill in ballet, there is none between researching a topic and forming a belief. This is why scholars who have dedicated their lives to complex subjects often still disagree fiercely among themselves. Research may cause you to believe something that appeared counterintuitive at first glance, if it presents you with watertight proof—but only if you encounter such evidence and it convinces you. A deep dive into, for example, 9/11 truther theories or homeopathy—or, indeed, the notion of doxastic voluntarism itself—may even leave you less convinced than before. You may never be able to persuade yourself that the world is flat or that leprechauns exist by sheer dint of effort.
Philosophers disagree, however, on whether any belief can be consciously chosen. As Bernard Williams explains: “With regard to no belief could I know—or … in full consciousness, even suspect—that I had acquired it at will.” If I am cognisant that the only reason I believe x is because I arbitrarily decided to, then I also know that I have no strong reasons for believing x. I therefore don’t believe x: I just think it sounds like a nice idea. For example, I hope Homo sapiens is not the only higher order sentient species in the universe. But, in the absence of convincing proof either way, I cannot form a belief that aliens exist, no matter how much I long for First Contact.
Jonathan Bennett suggests a solution to this seeming impossibility in a thought experiment involving a society of amnesiac voluntary believers:
Credam is a community each of whose members can be immediately induced to acquire beliefs. It doesn’t happen often, because they don’t often think: ‘I don’t believe that p, but it would be good if I did.’ Still, such thoughts come to them occasionally, and on some of those occasions the person succumbs to temptation and wills himself to have the desired belief. … When a Credamite gets a belief in this way, he forgets that this is how he came by it. The belief is always one that he has entertained and has thought to have some evidence in its favour; though in the past he has rated the counter-evidence more highly … [now] he wills himself to find the other side more probable. After succeeding, he forgets that he willed himself to do it.
Credam, clearly, is planet Earth. We are all susceptible to confirmation bias and wishful thinking. The examples that support the conclusion we wish to draw spring more readily to mind, the evidence that confirms it seems more weighty. We overestimate our own rationality whenever our emotions are involved. In The Happiness Hypothesis, Jonathan Haidt describes reason as the rider perched atop the elephant of emotion:
I [am] a rider on the back of an elephant. I’m holding the reins in my hands, and by pulling one way or the other I can tell the elephant to turn, to stop, or to go. I can direct things, but only when the elephant doesn’t have desires of his own. When the elephant really wants to do something, I’m no match for him.
However, this does not mean that we can decide what we believe. In Haidt’s vision, we don’t deliberately choose our beliefs to suit our feelings: we are swayed towards specific beliefs by those feelings without our knowledge—and even against our will (when trying to discover what is objectively true, we cannot help being influenced by our emotions). Often, I think I am guiding the elephant—but in fact I am an incompetent mahout, astride a beast who is lumbering off in whatever direction he pleases. I’m lucky he hasn’t unseated me altogether.
At other times, however, I would love to simply be carried along for the ride, but I can feel reason taking hold of the reins against my wishes. This is the phenomenon we know as cognitive dissonance, when we cling to beliefs that are convenient or flattering, even as it begins to dawn on us that they are false. We attempt to deny the truth, we protest too much, we want to be lulled by a comforting delusion but it usually proves impossible. This can lead, at best, to pesky niggling doubts. If those doubts shake our fundamental worldview, they can cause existential despair. “Humankind,” as T. S. Eliot warns, “cannot bear too much reality.” Gerard Manley Hopkins vividly describes his own crisis of faith:
No worst, there is none. Pitched past pitch of grief,
More pangs will, schooled at forepangs, wilder wring.
Comforter, where, where is your comforting?
Mary, mother of us, where is your relief?
My cries heave, herds-long; huddle in a main, a chief
Woe, world-sorrow; on an age-old anvil wince and sing
—Then lull, then leave off. Fury had shrieked “No ling-
ering! Let me be fell: force I must be brief.”
O the mind, mind has mountains; cliffs of fall
Frightful, sheer, no-man-fathomed. Hold them cheap
May who ne’er hung there. Nor does long our small
Durance deal with that steep or deep. Here! creep,
Wretch, under a comfort serves in a whirlwind: all
Life death does end and each day dies with sleep.
If we had a choice as to what to believe, who would choose that?
Boghossian and Boudry both agree that some propositions cannot be believed at will. The two philosophers have never met in real life and it is impossible for Boghossian to decide to believe that they had a few beers together in Dresden last year. However, Boghossian argues that, under certain circumstances, voluntary beliefs are possible.
Religious Belief
Boghossian first began to question whether belief is always involuntary when a devout Christian student told him “that he chose to not believe in the Neanderthals” because their existence contradicts the Genesis creation story. Boudry argues—and I agree—that the student was simply weighing up two mutually incompatible sources of evidence and deciding to credit one over the other. If you believe the Bible is literally true, it is rational to regard any evidence that seems to contradict it as fabricated. Belief in the Bible itself is not voluntary, as Boudry points out: “People cannot simply choose to believe that some book is the Word of God. Even with a gun put against your head, you couldn’t bring yourself to believe something like that, through sheer willpower.” This highlights an ethical problem at the core of most religions: they demand genuine belief in a god or gods (not just lip service and ritual observance, which, on its own, may even be considered hypocritical). If belief is not a choice, this is an impossible demand.
To many lifelong atheists, the phenomenon of religious (here, Christian) belief is difficult to fathom. Belief in miracles is especially hard to explain. As David Hume recognised, such belief is based on arguments from authority: priests report it as true and ask us to take it on trust; this trust is derived in turn from the testimony of a very old book (Genesis), whose author is unknown and which contains assertions that contradict the laws of nature. Hume concludes that it should be impossible to believe an account of this kind: “no testimony is sufficient to establish a miracle, unless the testimony be of such a kind, that its falsehood would be more miraculous, than the fact, which it endeavours to establish.” This leaves us with a puzzle: how do people believe in talking snakes, seven-day creations etc.? Our inability to answer this need not mean, however, that believers simply opt for belief voluntarily. Those who have had religious epiphanies generally do not describe them in terms of willpower: they see themselves as the lucky passive recipients of revelation and as having entered into a relationship with something or someone else that exists out there, independent of them.
Wishful Thinking
Boghossian’s second example of voluntary belief is drawn from Aesop’s fable of the fox who, frustrated by grapes that are just out of his reach, “walked away with his nose in the air, saying: ‘I am sure they are sour.’” In cases of unrequited love, Boghossian suggests, “the same principle is sometimes at play” when the jilted person asserts “I’m too good for her anyway.” This may be simply posturing and bravado. But if it is a case of a real change of mind, is it a type of deliberate wishful thinking?
Boudry asks, “is it true that, at least some of the time, people can simply choose to believe something because it’s comforting to them, alleviating their guilt, boosting their self-esteem?” Under two circumstances, he suggests, we have some partial control: if the belief concerns something patently subjective, or if there is a “certain ring of plausibility” to our interpretation. The protestations of both the fox and the rejected lover are assessments of how they feel about their failures to achieve their goals, masquerading as judgements about the objects of their desire (grapes and girl). And, as Boudry explains, “internal psychological states are somewhat ambiguous and open to self-manipulation.” How we feel is ever changing and hard to pin down: we can sometimes convince ourselves to feel differently both because it is so often impossible to define our current emotions and because, given several plausible possibilities, our feelings are influenced by which we choose to concentrate on. As Jon Rosen explains, “one can also exercise one’s epistemic freedom insofar as one can suspend judgment on a given matter and then decide where or how to focus one’s attention on items of evidence.”
Some of Boghossian’s other examples illustrate how we can deliberately adopt a specific focus. Seasoned MMA fighters “often convince themselves that they will win—even if, or maybe even especially if, they’re fighting against odds.” A torture victim trapped in a dungeon can, Boghossian argues, persuade himself that help is on its way. The mother of a dead child can believe that her daughter is now in heaven. In all these cases, people are faced with two scenarios, both of which are possible—though not equally probable. We boost our confidence, stave off despair, comfort ourselves by fixing our attention on the better of the two. In this murky psychological terrain, a combination of intense hope and stubborn focus can stand in for certainty, but it is not identical to unequivocal belief. Doubt always remains. A fighter who truly considered himself invulnerable would underestimate his opponents and be easily beaten. A mother who really felt her child was in a happier place would not mourn her with such intensity.
Boghossian highlights two other cases of wishful thinking. The first, borrowed from Carl Ginet, is that of the man who decides to believe, halfway through a long road trip, that he did lock his front door before leaving. This is perhaps simpler than it appears. Most of us have routines we carry out before leaving for a trip. Habits of this kind often run on autopilot and we know that, in such cases, not having a conscious memory of having performed an action does not mean we failed to carry it out. If your conscious mind is focused on the article you need to write or on the argument you had with your boyfriend last night, you may reach work with no clear memory of the drive: no recollection of having flipped on your indicator light and looked in the side mirror and over your shoulder before changing lanes on the motorway, or of having braked and shifted down a gear as you took that last corner. We rely on procedural memory—and it is usually safe to do so. Ginet’s traveller knows he probably locked the door, which is enough, in all but the most neurotic, to stave off anxiety most of the time.
The second case Boghossian presents is that of “choosing not to believe something about someone when you’re in love with them”: specifically, a husband may refuse to believe that his wife is having an affair, despite all the tell-tale signs. This is also about a weighing up of probabilities: your wife’s trustworthiness against that unexplained credit card bill you found. We are extremely bad at accurate assessment when we are emotionally invested in a result. Depending on personality type, we can err on the side of credulity or paranoia.
We do not always simply choose the belief that would be most comforting and refuse to investigate further. The jealous husband roots through his wife’s handbag. The mother of the abducted child could find closure if she knew her daughter were dead. The cancer patient begs his doctor to be honest with him. This is unsurprising. It is impossible to firmly hold a belief if you know there is easily available evidence that might disprove it. Ignorance may be bliss, but only when you don’t know that you could dispel it at any time. Wishful thinking is only effective when we are able to trick ourselves. As Boudry puts it, “The second condition for successful ‘wishful thinking’ is that you should not be aware that you’re about to engage in wishful thinking. A wish can spawn a belief if it’s working its magic just outside your conscious attention.”
Make-Believe
There is a special case in which we can comfortably entertain beliefs, while certain they are false: through what is tellingly called make-believe. We can participate in this actively—by playacting, telling stories or fantasising—or more passively—by watching a favourite Netflix series, for example—and even involuntarily—through dreams. Our emotional investment in such fictions can be so huge that they may at times prompt more intense feelings than real life. Noting that his mistress is unmoved by his devotion, but weeps over a story, Astrophil urges Stella to
think, my Dear! that you in me do read
Of lovers’ ruin, some sad tragedy.
I am not I, pity the tale of me!
We try to heighten the illusion: we avoid spoilers, as if the fictional story unfolded in real time; we discuss the fates of characters after the book has finished, as if they existed outside its pages; we talk about actors as if they turned into their characters, as Emma Donoghue has pointed out: Rose Leslie is a wilding from beyond The Wall; Jim Carter is a butler; Tom Cruise is a pilot. But this is crucially distinct from literal belief. Even children at play know that their games are not real, as Golomb and Kuersten’s experiments have shown. Kids who were baking with Play-Doh were astonished when a researcher took a real bite out of one of the pretend cookies.
We treat these worlds of our own creation as if they were real, even when they involve the far-fetched or the supernatural. A young wizard feels like one of us; we cry over the death of a hermaphrodite on a distant ice planet. How precisely this works is unknown. Coleridge’s famous description makes our emotions sound like a fund we can dip into, fungible as money, something we can lend a fiction to allow it to purchase what it needs to seem real. He writes, we “transfer from our inward nature a human interest and a semblance of truth sufficient to procure for these shadows of imagination that willing suspension of disbelief for the moment, which constitutes poetic faith.” Unless we are suffering from a severe mental illness, however, we never lose our grip on reality—which is how we are able to enjoy fictional depictions of even horrifying events. Samuel Johnson describes this vividly:
[Drama] is credited, whenever it moves, as a just picture of a real original; as representing to the auditor what he would himself feel, if he were to do or suffer what is there feigned to be suffered or to be done. The reflection that strikes the heart is not, that the evils before us are real evils, but that they are evils to which we ourselves may be exposed. If there be any fallacy, it is not that we fancy the players, but that we fancy ourselves unhappy for a moment; but we rather lament the possibility than suppose the presence of misery, as a mother weeps over her babe, when she remembers that death may take it from her. The delight of tragedy proceeds from our consciousness of fiction; if we thought murders and treasons real, they would please no more.
When producing and consuming fiction, we manipulate our beliefs freely, consciously and at will. This is perhaps one of the main reasons we relish it so much. But this is the only time we are able to do so.
Facts Don’t Care About Your Feelings
Boghossian lists a number of circumstances that facilitate believing something we know to be untrue. He argues that this is easier if the beliefs are “deeply validating (healthy at any size), or profoundly alleviating (profession of white guilt).” In addition,
Thinking oneself into believing that something is true is far easier when others believe it, when one thinks one is a better person for holding the belief, and when loneliness or social acceptance are consequences for either denying or professing belief.
These are excellent reasons to pretend to hold or wish to hold a belief. Daniel Dennett describes the phenomenon of approving of a belief to which one does not actually subscribe as belief in belief and cautions that it is very difficult to tell apart from the real thing:
the actions motivated by believing in belief (while not actually believing …) are … well-nigh indistinguishable from the actions of genuine believers: say the prayers, sing the hymns, tithe, proclaim one’s allegiance, volunteer for church projects, and so on. Sometimes I wonder if even 10% of the people who proclaim their belief in God actually do believe in God.
This does not just apply to faith in God. People may feign belief in a political ideology to be accepted by a group or stubbornly repeat confidence-boosting mantras at the bathroom mirror in a vain attempt to feel better about the 10 kg they’ve gained. But these extrinsic reasons for believing something, as Pamela Hieronymi calls them, are not persuasive: “you will not, by finding convincing reasons that you take only to show that believing p is worth doing, therein become committed to the truth of p.” It would be nice if x were true is not a cogent argument for the truth of x.
Why can I not be persuaded to believe something by the argument that it would benefit me to believe it? Because there is an external reality out there and that reality is unaffected by my wishes, hopes and dreams. The universe is not arranged to suit my personal convenience. Whether something is true is quite independent of whether I would like it to be. Humans have always resorted to magical thinking and superstition in times of desperation in an attempt to convince ourselves that we can exercise control over our unpredictable world—but, most of the time, most of us know we cannot.
This is almost certainly the result of our evolution. As Oliver Traldi has argued on our podcast, Two for Tea, Jordan Peterson is wrong in his hypothesis that there is a Darwinian truth, which differs from reality and instead constitutes “whatever is useful to us.” There is no biological advantage to being delusional. As Boudry points out, if you were to don a pair of the Joo Janta 200 Super-Chromatic Peril Sensitive Sunglasses, described in The Hitchhiker’s Guide to the Galaxy, which turn opaque in the presence of danger, you would not last long. It’s a rough universe out there. As Boudry explains, “Evolution doesn’t care about our sense of comfort. It wants us to pay attention to reality, even (I’d say especially) when reality is terrifying.” Even at play, we do not deactivate what Neil van Leeuwen calls our “continual reality tracking”—for good reason. “One’s factual belief that no hyenas are near must vanish on seeing fresh hyena tracks, on pain of being lunch.”
Ultimately, then, our ability to control what we believe is very limited. And this is a good thing. It ensures our survival and flourishing. As Boudry argues here, “we all need comfortable illusions to get by … [but] Sooner or later, they will collide with reality because reality couldn’t give a damn about your illusions.”
Letter is a digital platform for one-on-one correspondence. It combines the intimacy of letter writing with the convenience of an online format and the added value that conversations can be publicly read and shared. Our subeditor, Iona Italia, works with Dayne and Clyde Rathbone to make this possible. To find out more, go to www.letter.wiki or contact humans@karma.wiki.
7 comments
In the short life we live, beginning in the most formative years of childhood, we become indoctrinated into belief systems that our parents, guardians, and the larger culture surrounding us teach us are the “truth” about life. Because this occurs in familial, cultural surroundings at a tender age, and has done so throughout many generations, it is nearly impossible for most adults to change their beliefs. It is an ongoing condition that seems never to end, despite the numerous valid and truly amazing scientific discoveries which show how much more amazing and precious this planet and our lives really are, compared to what ancient men wrote to explain things that seemed real in their time.
It is inconceivable why children are taught religious beliefs when their brains have not yet fully developed reasoning capabilities. Richard Dawkins’ book Science in the Soul has a chapter in which he discusses this important concern and argues, as many others do, that we should wait until children are fully grown before they begin the study of religious dogma, as well as of various other kinds of cultural religious belief systems, if they so choose.
It seems that all religions, and certain other kinds of belief systems, rely on monetary donations to exist and are keen on the generational indoctrination of children, because that is the seed to cultivate, and extract from, later in life—especially when fear is also taught, so that believers do not question and remain dutifully faithful to ancient stories which no longer have merit in the way the world has evolved today.
Can we make ourselves believe? Our brain is constantly tabulating and making our reality, what we believe, but its information is fallible—not always real or the truth. In many ways, we remain children if we don’t take the time and effort to understand that most beliefs were formed in childhood and deserve questioning, a re-evaluation.
Is it too scary or improbable to believe that the god made up by ancient men is fiction and that something else is going on but we just don’t have the answers yet?
In an article published a number of years ago, I argued that people choose to believe by choosing to involve themselves in social groups that collectively produce and reinforce the desired beliefs. Below are the abstract and URL for the online copy:
“The concept of the religious economy has been one of the most useful contributions of rational choice theories to the sociology of religion. However, this study argues that religious belief presents a problem for rational choice theories, since it is difficult to see how one can freely choose what one believes to be true in the sense that one can freely choose what consumer products one wishes to purchase. After examining the problem, the study suggests that it may be addressed by thinking of belief as a socially, collaboratively produced good. Given demand for a particular belief, potential religious consumers choose to involve themselves with those who are collectively producing it through interactions of faith. The involvement turns potential religious consumers into actual consumers by enabling them to participate in networks that establish beliefs as true.”
https://www.researchgate.net/publication/261994792_Rationality_Choice_and_the_Religious_Economy_The_Problem_of_Belief
Interesting, thanks! Will check it out.
Something went wrong with the formulation of the problem. The statement “one can only believe what one thinks is true” is tautological because the intension of “thinks” in this context must be the same as “believe.” Moreover, the word “true” is redundant in this formulation because belief is a disposition or attitude toward a proposition (google “propositional attitudes”). In other words, when I say “I believe that p is true,” I’m making a statement about my state of mind that says nothing about the truth of p. With the substitution made and the redundancy cut, we get the statement “one can only believe what one believes.”
As for willing or choosing to believe or disbelieve a proposition, you’ve overlooked the fact that skeptics have been doing it for centuries. When the skeptic suspends or reserves judgement about the truth of a proposition, he is choosing a disposition toward some p. Of course, the skeptic’s attitude is merely a philosophical way of doing what people do every day when they resist accepting another’s claim with “I don’t know about that, Jack.”
I was surprised when all this talk about believing the unbelievable circled around to religion! (No, not really, I saw where it was going from the start.) Anyway, I can’t speak for all Christians, but Catholicism doesn’t require “genuine belief” (admittedly I’m not sure exactly what that means) in God, but only faith. A little Augustine or Thomas would show how crude the characterization is. Come to that, there’s a long tradition within Christianity discussing the absurdity of believing in the God of the Bible.
But I’m more interested in the Humean bit about arguments from authority that “contradict the laws of nature” because it points to a far more pertinent and consequential problem in our time that has persisted at least since Plato’s time: the confusion between belief and knowledge. The vast majority of what you and I know is not true and justified belief, but belief taken on authority. To put a sharp point on it, you likely consider your belief that the Earth goes round the Sun to be knowledge, though I strongly suspect it’s not. The heliocentric solar system may be true—I’m not saying it isn’t—but I’m betting you don’t know it to be the case—meaning, if someone asked you to prove it, you couldn’t. Like most people and like most of your “knowledge” about the very laws of nature you use to judge beliefs, you’ve taken it on authority.
There seem to be a few misunderstandings here.
“when I say “I believe that p is true,” I’m making a statement about my state of mind that says nothing about the truth of p.”—Yes. People can have erroneous beliefs. This is not relevant to the question the essay examines, which is whether they can freely choose to believe something knowing it to be erroneous. Obviously, we lack a word to describe this type of belief, but that doesn’t necessarily mean the phenomenon does not exist.
“As for willing or choosing to believe or disbelieve a proposition, you’ve overlooked the fact that skeptics have been doing it for centuries.”—No, this isn’t willing or choosing to disbelieve; it’s, as you point out, suspending judgement, i.e., not coming to a decision one way or the other. This is also not relevant to the central question.
“Catholicism doesn’t require “genuine belief” (admittedly I’m not sure exactly what that means) in God, but only faith”—What’s the difference?
“the confusion between belief and knowledge. The vast majority of what you and I know is not true and justified belief, but belief taken on authority”—There’s no confusion here. Authority, i.e., accepting things as true because you trust the expertise of those proposing them or you trust the scientific consensus, is a valid reason for believing something. I believe all kinds of things that way (mostly in the scientific and technical domains). This is not the same as voluntarily choosing to believe something DESPITE knowing that it is not true. It’s not relevant to the argument here, either.
You seem to think I am claiming that everything people believe is true. I’m not. This isn’t about truth. It’s about how belief works and our attitudes towards it.
PS “Like most people and like most of your ‘knowledge’ about the very laws of nature you use to judge beliefs, you’ve taken it on authority.” You seem to be confusing me with Hume, which is flattering but a little odd. This article does not make any judgements whatsoever about beliefs. It examines the question “Can we believe something we know to be untrue?” You may not be interested in this question. If so, this isn’t the article for you.
“People can have erroneous beliefs. This is not relevant to the question the essay examines, which is whether they can freely choose to believe something knowing it to be erroneous. Obviously, we lack a word to describe this type of belief, but that doesn’t necessarily mean the phenomenon does not exist.”
It is relevant, and it’s not that we don’t have words for such things; it’s that you’re either looking at it the wrong way or you’re using the wrong words to define what you’re getting at. Here’s why I suggest the first. In epistemology, propositional attitudes like believe that, think that, and know that all denote the same thing (they’re interchangeable, salva veritate): A state of mind in relation to a proposition. The only differences are matters of degree of belief (e.g., subjective probabilities). So, for someone to know that p and believe that not-p is a contradiction because, again, knowing that and believing that mean the same thing. You have to be using know that, believe that, or both in a different way.
Which brings us around to the second possibility: You’re using the wrong words. Someone can know that p and wish that not-p, hope that not-p, aspire to a world where not-p, and so on without contradiction. But these are different attitudes, and we engage in them all the time. We can also compartmentalize our beliefs to avoid dissonance—or, as in the case of Boghossian’s student perhaps, we can adopt the doctrine of double-truth whereby we assume that the conflict between two propositions is only apparent, and we reserve judgement without prejudice to either one. Lest you think this a religious thing, recall that it happens all the time in disciplines from physics (e.g., holding out for a unified theory) to history (e.g., the cause of the fall of the Roman Empire).
“[Skepticism] isn’t willing or choosing to disbelieve; it’s, as you point out, suspending judgement, i.e., not coming to a decision one way or the other. This is also not relevant to the central question.”
Suspending judgement is indeed a propositional attitude. “I do not know that p” is as much a statement about my belief in a proposition as “I know that p.” Besides, without a lower limit on degrees of belief (i.e., an opposite of certainty), degrees of belief (subjective probabilities) wouldn’t be possible.
“Catholicism doesn’t require “genuine belief” (admittedly I’m not sure exactly what that means) in God, but only faith”—What’s the difference?
I pointed it out above, and you acknowledged it too—call it something like trust, which is not easily distinguished from wishful thinking. I note, however, that human beings seem to engage in it while simultaneously pretending that they don’t.
“Authority, i.e., accepting things as true because you trust the expertise of those proposing them or you trust the scientific consensus, is a valid reason for believing something. I believe all kinds of things that way (mostly in the scientific and technical domains). This is not the same as voluntarily choosing to believe something DESPITE knowing that it is not true. It’s not relevant to the argument here, either.”
Trust is a new propositional attitude in this discussion. I’m glad you attributed it to your own beliefs because it shows epistemic humility, a virtue in my book. What a world it would be if more people had a little more of it.
It seems you’re broadly in agreement with me that Peter is wrong in his conjecture. However, I don’t think this is because he’s using the wrong terminology.
I’m not sure why you seem obsessed with religion. I’m not an atheist. You are barking up the wrong tree there.
Thanks for reading and engaging as always! x