Is diamond harder than steel? Does 2 + 2 = 4? Sure. But there are some statements that we can’t evaluate as either true or false. Some of them are obviously meaningless (for example, “the present king of France is bald” or “colourless green ideas sleep furiously”). But others may be experienced as meaningful even though, on closer analysis, they cannot be objectively evaluated as true or false. These include assertions about ethics (for example, “rich people deserve their wealth”) and obscurantisms (for example, Deepak Chopra’s pronouncement that “attention and intention are the mechanics of manifestation”). Some philosophers, most notably the logical positivists, referred to such statements as “meaningless,” “nonsense” or “pseudo-statements.” It is easy enough to see that it is meaningless to talk about colourless green ideas or the present king of a republic, but there are meaningless statements that look meaningful at first, and that require effort to be unmasked as meaningless.
Recognising Pseudo-Profound Bullshit
In recent years, some philosophers and psychologists have settled on a technical term to describe meaningless statements that are intended to sound meaningful: they simply call it bullshit. In a 2015 study, “On the reception and detection of pseudo-profound bullshit,” researchers tested participants’ reactions to a variety of statements. Some (such as “attention and intention are the mechanics of manifestation”) were made by the New Age guru Deepak Chopra, who has a reputation for saying things that sound profound without actually meaning anything. Others had been randomly generated by online tools (such as “Wisdom of Chopra” and the “New Age Bullshit Generator”) designed to parody such pronouncements. Still others were unrelated to Chopra but were chosen because they sounded somewhat inspirational (such as “A wet person does not fear the rain”), and some did not sound inspirational at all (such as “Newborn babies require constant attention”).
They asked participants to rate how profound the various statements were in order to measure what they called people’s “receptivity to bullshit.” They defined “bullshit” as statements that are perceived as meaningful even when they are not: “Although [bullshit] may seem to convey some sort of potentially profound meaning, it is merely a collection of buzzwords put together randomly in a sentence that retains syntactic structure.” And they defined “receptivity to bullshit” as the propensity to rate randomly generated nonsense statements as being just as profound, or even more profound, than meaningful statements.
Those who rated real Deepak Chopra statements as profound also tended to rate the randomly generated statements as profound, while those who rated the Chopra statements as low in profundity tended to rate the random sentences as low in profundity. (To be fair, the authors acknowledged that they deliberately chose to use the statements on Chopra’s Twitter feed that they thought sounded the most like bullshit.)
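As a rough illustration of how such a measure can be computed (a minimal sketch with invented ratings, not the researchers’ actual code or data), a participant’s “bullshit receptivity” is simply their mean profundity rating of the meaningless items, which can then be compared with their mean rating of the genuine Chopra items:

```python
# Minimal sketch of a "bullshit receptivity" style measure (invented ratings,
# not the data or code from the 2015 study).
from statistics import mean, correlation  # statistics.correlation needs Python 3.10+

# Each participant rates randomly generated "bullshit" items and real Chopra
# tweets for profundity on a 1 (not at all profound) to 5 (very profound) scale.
ratings = {
    "p1": {"generated": [4, 5, 4], "chopra": [5, 4, 5]},
    "p2": {"generated": [1, 2, 1], "chopra": [2, 1, 2]},
    "p3": {"generated": [3, 3, 4], "chopra": [4, 3, 3]},
}

# Receptivity = mean profundity rating given to the meaningless, generated items.
receptivity = {p: mean(r["generated"]) for p, r in ratings.items()}
chopra_profundity = {p: mean(r["chopra"]) for p, r in ratings.items()}

# The pattern described above: the two sets of ratings tend to move together.
corr = correlation(list(receptivity.values()), list(chopra_profundity.values()))
print(receptivity, chopra_profundity, round(corr, 2))
```

On toy data like this the two scores rise and fall together, mirroring the correlation the researchers reported.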
A more specific class of bullshit statements is what Daniel Dennett calls “deepities”: “A deepity is a proposition that seems both important and true—and profound—but that achieves this effect by being ambiguous. On one reading it is manifestly false, but it would be earth-shaking if it were true; on the other reading it is true but trivial.” Dennett gives the example of “Love is just a word.” In a trivial sense, this is, of course, true. Love is a word: it has four letters, rhymes with dove, etc. In another sense, though, it seems deep, but it is actually false. Love is not just a word: it also denotes an emotion, a state of mind, an attitude towards a person, animal, place, etc.
While the study on bullshit described above focused on New Age pseudo-profundity, the phenomenon is not limited to spiritual or religious pronouncements. Many seemingly profound statements that are common in secular political discourse are also meaningless. I call them secular superstitions.
Assessing Truth Value: The Rise of Logical Positivism
In nineteenth-century Europe, the philosophy taught in universities was dominated by the work of theistic thinkers, and even though many of them did not write in explicitly theistic terms, mysticism still pervaded their work. However, in the early twentieth century, as scientific thinking became more dominant in academia, many started to find it difficult to reconcile such thinking with the mystical metaphysical concepts of prominent Christian philosophers such as Hegel, known for his philosophy of absolute idealism. Indeed, today, some of Hegel’s writing can sound like a nineteenth-century version of Deepak Chopra. For example:
The Beautiful is the expression of the absolute Spirit, which is truth itself. This region of Divine truth as artistically presented to perception and feeling, forms the centre of the whole world of Art. It is a self-contained, free, divine formation which has completely appropriated the elements of external form as material, and which employs them only as the means of manifesting itself.
As younger, more scientifically minded philosophers entered academia, a generational clash resulted. To them, most writings by Hegel and other metaphysicians, although poetic, seemed devoid of meaning. In response, some of them developed the philosophy of logical positivism (also known as logical empiricism)—a movement that tried to approach philosophy as scientifically as possible. The logical positivists were inspired by the work of David Hume, who writes in An Enquiry Concerning Human Understanding (1748):
If we take in our hand any volume; of divinity or school metaphysics, for instance; let us ask, Does it contain any abstract reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning matter of fact and existence? No. Commit it then to the flames: for it can contain nothing but sophistry and illusion.
Building on Hume’s observation, logical positivists argue that, for a statement to be considered meaningful, it must be either “analytic” (an a priori logical or mathematical truth such as “2 + 2 = 4” or “all bachelors are unmarried men”) or empirically verifiable. This became known as the principle of verification, and it was so influential that it created a schism in philosophy. Analytic philosophy, on the one hand, embraced the verification principle and rejected idealism. Continental philosophy, on the other hand, largely ignored the logical positivists. Analytic philosophy became the predominant form of philosophy in Austria, then the UK, then in the US and the rest of the English-speaking world, while continental philosophy remained associated with the rest of continental Europe, particularly France, where it eventually gave birth to postmodernism.
Despite the appeal of logical positivism to many philosophers, it has a couple of flaws. For one thing, the statement on which it is based (“only analytic or empirically verifiable statements are meaningful”) is itself neither analytic nor empirically verifiable—and therefore is arguably meaningless on its own terms. For another thing, the logical positivists found that, by their standards, only scientists make statements that can be considered meaningful. Their standards were less useful when applied to the branches of philosophy that explore value judgments about morality and aesthetics. Statements that a certain action is immoral—or that a certain work of art is unaesthetic—cannot be empirically verified. Thus, even in the English-speaking world, philosophers continued to teach and write about ethics and aesthetics more or less as they had done before.
In 1951, the American philosopher Willard Van Orman Quine published a critique of logical positivism, “Two Dogmas of Empiricism,” which many philosophers consider to have been the beginning of the end for that movement. In the wake of his essay, throughout the 1950s, theories of meaning that focused on the performative function of statements gained increased attention. For example, “Get off my property!” or “I hereby declare you husband and wife” are neither analytic nor empirically verifiable statements, but they are still meaningful as speech acts. In 1967, the philosopher John Passmore famously declared logical positivism “dead, or as dead as a philosophical movement ever becomes.”
From Logical Positivism to Scientific Falsifiability
Although logical positivism petered out as a general philosophical movement, one of its core precepts has survived in the philosophy of science, where it has been reframed as the principle of falsifiability. According to that principle, as Karl Popper explains in The Logic of Scientific Discovery, a scientific experiment should not be seen as an attempt to verify or confirm a hypothesis, but rather to falsify it. Thus, for a hypothesis to count as scientific, it must be stated in such a way that it can potentially be falsified by empirical evidence. On this principle, if no evidence has been found that a hypothesis is false, it is provisionally assumed that it may be true, unless and until evidence is produced through the scientific method suggesting that it is false. And even if there is evidence that a hypothesis is true, it is only provisionally assumed to be true: there is always the possibility that further evidence will show it to be false. That said, some hypotheses have accumulated enough evidence in their favour, and are intertwined closely enough with other hypotheses for which there is strong evidence, that they are very unlikely to be falsified. But we can never rule out the possibility that they will be. Note that a claim need not be falsifiable in practice, only framed in such a way that it could be falsified in principle. For example, the statement that “there are no dragons in the Milky Way” is falsifiable in principle. Even though humans will probably never develop a fleet of space-warping spaceships, visit all the planets in the Milky Way, and search them for dragons, it is still clear what observations would, in principle, show the statement to be false. The principle of falsifiability—as applied to science—is now widely accepted. For example, in McLean v. Arkansas (1982), a US federal district court ruled that so-called creation science could not be taught as “science” in Arkansas public schools because its tenets are not falsifiable.
Should the Principle of Falsifiability be Applied to Moral Claims?
Logical positivism lost steam because, by its lights, some branches of philosophy that many consider valuable, such as moral philosophy, had to be considered meaningless. Its surviving principle, falsifiability, has gained wide acceptance partly because it has been applied only to scientific propositions, and not, for example, to ethical propositions. But arguably, we should also respect scientific principles when talking about morality. I believe that it is possible to apply the principle of falsifiability to morality without rendering moral philosophy meaningless, and therefore propose that we should agree to make only empirically falsifiable moral claims. After all, applying the principle of falsifiability to science has had beneficial practical effects: it has enabled us, for example, to cure smallpox and visit the Moon. My hypothesis is that applying it to moral propositions would enable us to communicate and cooperate more effectively. (Note that my hypothesis itself adheres to the principle of falsifiability: its accuracy could be tested through scientific experiment.)
A. J. Ayer (1910–1989), one of the great logical positivist thinkers, originated the idea that statements about ethics are not factual propositions, but expressions of emotional attitudes (an idea known as emotivism). For example, even though burning children alive causes a lot of suffering, the statement “burning children alive is wrong” cannot be evaluated as true or false; rather, it is an expression of disapproval. Nor, according to Ayer, is it true or false to say that something that would cause only happiness is good: that is merely an expression of approval. He writes, “It is not self-contradictory to say that it is sometimes wrong to perform the action which would actually or probably cause the greatest happiness.”
But Ayer is using a definition of the word wrong that not everyone agrees on. On utilitarian philosophers’ understanding of the word, calling an action that would (or probably would) cause the greatest possible happiness “wrong” is a contradiction in terms. Ayer seems to have thought that, if philosophers disagree about the definition of the word wrong, there is no way their disputes can be settled. But I believe there is.
Ayer’s mistake was to assume that moral philosophy must be reduced to an attempt to discover whose definition of right and wrong is correct in some absolute metaphysical sense. It’s true that, historically, many philosophers have treated ethics as a metaphysical question—and have therefore concluded that moral propositions are unfalsifiable. But it doesn’t have to be that way. Rather than dismissing moral philosophy, we can stop trying to see it as something metaphysical, and instead see it as a practice intended to clarify and standardise the language we use to talk about morality, allowing us to communicate more effectively and reach agreement more easily. What would such an optimised moral language look like? As I will show, it would necessarily contain only moral propositions that are falsifiable. And that would require recognising and letting go of our secular superstitions—including the widely accepted ideas of natural rights and moral desert and the idea that what is morally permissible can be objectively determined.
There Is No Such Thing as Natural Rights
The phrase natural rights refers to the philosophical proposition that people have certain rights even if their government or culture doesn’t recognise those rights. For example, if a woman has a natural right to choose abortion, then, when she asserts that it is her right, she is not merely saying “I should have this right.” She is purporting to make a statement of fact. The idea is that one can have a natural right to do something without having a legal right to do it.
The idea of natural rights is usually traced back to the natural law theory of the Christian theologian Thomas Aquinas, who claimed that we know intuitively what’s right and wrong because God has instilled this knowledge in us. As he puts it, “The natural law is promulgated by the very fact that God instilled it into men’s minds so as to be known by them naturally.” Although this is an unfalsifiable claim, the idea of natural rights is unlikely to be abandoned: it has a long history in philosophy and is enshrined in the United Nations’ influential 1948 Universal Declaration of Human Rights. But the idea of natural law is not the only philosophy that can be used to justify the instantiation of human rights in law worldwide. From a utilitarian perspective, human rights have extremely high instrumental value as legal institutions because they contribute to the minimisation of human suffering.
There Is No Such Thing as Moral Desert
Claims about desert are familiar and frequent in ordinary non-philosophical conversation. We say that a hard-working student who produces work of high quality deserves a high grade; that a vicious criminal deserves a harsh penalty; that someone who has suffered a series of misfortunes deserves some good luck for a change. —Fred Feldman and Brad Skow, 2020
A belief in moral desert requires a belief in free will, a problematic concept that many scientists claim to be an illusion. Most of the time, we feel like agents operating outside the chain of cause and effect that links all events in our universe and believe that we can freely choose how to steer our lives. But this subjective experience is not confirmed by what we know about how nature works. Our wishes, thoughts and personalities are just as much outside our control as our genes are.
Some fear that abandoning the belief in free will compels a kind of moral nihilism. They worry that, if nobody can rationally be blamed or credited for their actions, then everything must be permitted and there can be no grounds for punishing or rewarding anybody for any reason. But that worry is not well founded. As I (and many others) have argued, rewards and punishments influence people’s behaviour, and thus still have instrumental value.
Nevertheless, a consequentialist approach to reward and punishment that focused on minimising human suffering while omitting the ideas of praise and blame would arguably require some changes to our current practices. For example, many consequentialists have proposed that the goal of punishment shouldn’t be to inflict suffering for its own sake, but only to protect victims and deter people from engaging in bad behaviour. This means we should shift from a retributive justice system to one based more on restorative justice.
On the same reasoning, rewards should be tailored to the goal of encouraging good behaviour. An entrepreneur who creates something of value to society should be rewarded to the extent necessary to motivate others to follow in her footsteps. But when a billionaire makes yet another billion dollars, the extra happiness it may give her is negligible compared to the good that could be done by using that money to reduce the suffering of others.
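To make the size of that difference concrete, here is a back-of-the-envelope illustration (my own, resting on the standard toy assumption that the utility of wealth grows logarithmically; the numbers are not taken from any study):

\[
u(w)=\ln(w): \qquad \ln\!\left(2\times10^{9}\right)-\ln\!\left(10^{9}\right)=\ln 2\approx 0.69
\]
\[
100{,}000\times\left(\ln\!\left(20{,}000\right)-\ln\!\left(10{,}000\right)\right)=100{,}000\ln 2\approx 69{,}000
\]

On this toy model, an extra billion dollars adds almost nothing to a billionaire’s well-being, while the same billion, used to double the incomes of 100,000 people living on $10,000 a year, adds 100,000 times as much.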
There Can Be No Objective Test for What Is Morally Permissible
Many moral philosophers categorise behaviours as morally permissible, morally impermissible or morally obligatory. Doing something innocuous, such as eating an apple, is generally considered morally permissible; doing something very harmful, such as burning children alive, is generally considered morally impermissible; and, in some circumstances, doing something to prevent immediate and clear harm, such as helping a drowning child, is generally accepted to be a moral obligation. But, as I have argued elsewhere, there is no clear boundary between behaviours that are morally permissible, impermissible and obligatory, just as there is no clear boundary between tall and not tall or bald and not bald. Most people agree on how to characterise examples at the extremes: Michael Jordan is tall; Jeff Bezos is bald; burning children alive is impermissible. But people often disagree about how to characterise examples at the boundary—whether, say, Joe Biden is tall, Bernie Sanders is bald or donating to a foreign charity is morally obligatory. There is no meaningful way to adjudicate such disagreements, because tall, bald and morally obligatory are examples of what philosophers call vague predicates—terms without sharp boundaries, which give rise to what is sometimes called the sorites paradox.
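To spell the paradox out: in its standard schematic form, with B(n) abbreviating “a person with n hairs on their head is bald,” the argument runs as follows.

\[
\begin{aligned}
&\text{Premise 1:} && B(0)\\
&\text{Premise 2:} && \forall n\,\bigl(B(n)\rightarrow B(n+1)\bigr)\\
&\text{Conclusion:} && B(100{,}000)
\end{aligned}
\]

Each premise looks plausible on its own, yet repeated application of Premise 2 yields a conclusion almost everyone rejects; the same structure can be run with “morally obligatory” in place of “bald” and dollars donated in place of hairs.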
That said, morality need not be deemed entirely subjective, because philosophers can reasonably posit that the goal of morality is to minimise human suffering. In that case, in most circumstances, donating $100 to an effective charity can be considered objectively more moral than donating $50. Nor do philosophers need to agree on clear boundaries between moral and immoral actions: they can leave those decisions to policymakers and cultural conventions. In general, it will be easier for societies to agree on what is moral or immoral at the extremes than at the boundary. For example, it is easy to agree on a strict prohibition against rape, because most people agree that this behaviour is unambiguously harmful. By contrast, it is harder to get people to agree on a strict prohibition against slaughtering animals for food, since this is a widespread cultural practice with deep historical roots.
In these boundary cases, it is probably more effective to use persuasion rather than prohibition. For example, some might argue that describing meat consumption as morally impermissible could be socially useful—even if the statement is meaningless in a strict philosophical sense—because it could help convert meat-eaters to veganism. However, using philosophically meaningless language as a means to an end is manipulative, and shaming people may be as likely to alienate as to persuade. Thus, there are no strong instrumental reasons to describe eating meat—or, for example, donating less than 1% of your income to charity—as morally impermissible. From a practical point of view, it is probably more effective to provide people with objective evidence that eating meat causes more harm than eating vegan, and that donating 1% of your income does more good than donating nothing—and then leave it up to individuals to decide how much effort they want to put into making the world a better place.
Conclusion
Ariela Keysar and Juhem Navarro-Rivera have estimated the proportion of atheists in the world at around 7%—an all-time high, but still a small minority. It should therefore be no surprise that human cultures are permeated by superstitious thinking. Secular superstitions—such as a belief in natural rights, moral desert, free will or objective moral impermissibility—can be dangerous for the same reason that the belief in gods and immortal souls can be dangerous: they can all be used to justify harmful actions, norms and cultural practices.
As Joshua Greene puts it in his 2013 book, Moral Tribes: Emotion, Reason, and the Gap Between Us and Them:
Claims about what will or won’t promote the greater good, unlike claims about rights, are ultimately accountable to evidence. Whether or not a given policy will increase or decrease happiness is ultimately an empirical question. One can say that national health insurance will improve/destroy American healthcare, but if one is going to say this, and say it with confidence, one had better have some evidence.
And healthcare is just an example. Public policies affect people’s lives and the quality of their experiences. Resorting to unfalsifiable premises to justify one policy over another is nothing but a trick, used to avoid the difficult empirical process of using science to try to figure out which policy will have the best consequences. A truly secular and scientific moral philosophy shouldn’t be concerned with such distractions. It should only be concerned with things that can be empirically confirmed to exist, such as happiness and suffering.