Large swaths of the American public have lost faith in experts. The Covid-19 pandemic has highlighted this fact: despite the expert consensus that vaccinations against Covid-19 are safe and effective, 30% of the population remains reluctant to be vaccinated. And, before the rollout of vaccines, survey data indicated that 15% of the population did not wear masks regularly, contrary to expert advice.
The American crisis of faith in the legitimacy of experts is neither recent nor limited to pandemic-related topics: it extends to numerous public policy issues. For example, 97% of scientists agree that climate change exists and is caused by humans, but only 57% of Americans accept this. And 97% of scientists agree that humans evolved over time while 30% of Americans profess belief in creationism.
This distrust of experts, like many issues in America today, is influenced by partisan polarisation. In a 2019 poll, 43% of Democrats stated that they have “a great deal” of confidence in scientists, while only 27% of Republicans echoed this sentiment. Epidemiologists initially worried that a lower percentage of black, indigenous and Hispanic people would trust the vaccine—only to discover that these groups’ vaccine confidence rates were similar to the white population’s or even higher (and that most of those who are unvaccinated are limited by access). Instead, the group with the highest rate of reluctance was Republicans.
Many left-wing Americans feel a sense of superiority over those who distrust experts, and some Democratic commentators routinely suggest that Republicans are scientifically illiterate, irrational or simply unintelligent. For example, CNN commentator John Avlon, reflecting on Republican vaccine hesitancy, recently quipped, “There’s no cure for stupid.” Alternatively, such commentators often claim that Republicans’ distrust of experts is caused by politicians and wealthy donors who exploit their base’s gullibility by spreading misinformation to serve their own political or financial interests. For example, in a recent opinion piece purporting to explain why so many Republicans mistrust scientific expertise, historian Naomi Oreskes writes:
Why do so many Republicans distrust government, including government science, and think scientists are “always getting it wrong”? A large part of the answer is that this is what the party’s spokespeople have been saying for 40 years, from the early days of acid rain to our ongoing debates about climate change.
While it’s important to consider the role of party elites’ messaging, Oreskes’ causal claim is difficult to establish: is the Republican base parroting deceptive leaders, or are party leaders reflecting constituents’ beliefs? Research on this question has yielded mixed results. And even if Oreskes’ view is correct, it doesn’t explain what caused these Republicans to transfer their trust from credentialed experts to political leaders.
At least with respect to the issue of climate change, distrust of scientific experts is not related to education level, but to which social group one identifies with. A 2019 Pew Research poll found that Republicans and Democrats are about equally informed on scientific knowledge, which suggests that Democratic claims of Republican ignorance are unfounded. And a 2011 study found that individuals with greater scientific and numerical literacy were less likely to believe that climate change is a serious threat. Furthermore, greater scientific literacy was associated with greater polarisation: among those inclined to dismiss climate change, the most scientifically literate were the most sceptical, while among those alarmed by it, the most literate were the most fearful. This suggests that, when it comes to politically contentious issues like climate change, educated individuals may be more influenced by identification with their social group than by a careful analysis of the evidence.
Accepting the dominant beliefs in your particular social group is not irrational. We all rely for much of our knowledge of the world on simply accepting what others we trust have told us. If you doubt this, try a thought experiment: if you were forced to debate a flat-earther who was armed with graphs and complicated theories, how would you defend your position? Most of us don’t know how to perform experiments that would directly prove that the earth is round. Instead, we rely on what we are told by the experts we trust—whether they be our parents, our teachers or NASA scientists.
To compensate for the limits on our individual ability to gather facts about the external world, humans invented the social technology of expertise, which enabled each of us to gather some kinds of knowledge while relying on others to gather other knowledge—and we invented the scientific method to make knowledge gathering more reliable.
This arrangement requires trust to work—trust that experts will conduct their research without bias, for the sake of truth-seeking, and will report their results accurately. When we worry that these conditions have not been met or when experts disagree, or when the issue is important enough, ideally we examine the evidence—or the experts’ different explanations—ourselves. Historically, many great scientific breakthroughs have hinged on shifts in thinking that challenged the scientific consensus.
This process worked better in the past. The public had more trust that scientists would conduct research well and that the media would communicate scientific findings honestly and accurately. And since Americans had many fewer sources of information before the internet age, they had little choice but to trust those sources.
With the advent of the internet, however, information sources increased exponentially. Suddenly, individuals no longer needed to believe talking heads on TV; they could access conspiracy theories, amateurs who were challenging the expert consensus (sometimes correctly), and alternative facts. When there is no scientific consensus on an issue, more individuals turn to these new sources. As a result, the idea of truth has become fragmented and archipelagic. As Chris Hayes writes in Twilight of the Elites,
When our most central institutions are no longer trusted, we each take refuge in smaller, balkanized epistemic encampments, aided by the unprecedented information technology at our disposal. As some of these encampments build higher and higher fences, walling themselves off from science and empiricism, we approach a terrifying prospect: a society that may no longer be capable of reaching the kind of basic agreement necessary for social progress … At exactly the moment we most need solid ground beneath our feet, we find ourselves adrift, transported into a sinister, bewildering dreamscape, in which the simple act of orienting ourselves is impossible.
Once we had more information and more transparency, we also glimpsed the failures of modern science. In the early 2010s, a number of researchers drew attention to widespread methodological errors in the way scientific research was being conducted and communicated, and to what became known as the replication crisis. Efforts were made to replicate past studies, particularly in the social sciences, and in hundreds of cases—including many whose results had been widely disseminated and popularized—the same results were not obtained. People also became more aware that some common practices were producing dubious results. For example, scientists often did not publish findings that failed to confirm their hypotheses, used sample sizes that were too small, or employed doubtful statistical methods to wring publishable results out of their data. John P. A. Ioannidis’ seminal 2005 paper, outlining many of these problems, concluded that, in many contexts, it is more likely that research findings will be false than that they will be true. Researchers’ (often homogenous) identitarian and ideological backgrounds can also systematically bias results.
Many of these methodological problems have little impact on ordinary people’s lives. But even the information we get from experts we interact with regularly can be inaccurate and subject to perverse incentives. Research suggests that some physicians prescribe drugs that are not medically necessary because they have received gifts from pharmaceutical companies (a practice some would call bribery). An exposé in The Atlantic revealed that, in rare cases, dentists have intentionally overtreated patients, sometimes permanently disfiguring them, in order to charge them for the resulting corrective treatments. Although most dentists practise ethically, many common and widespread dental practices are unnecessary or less safe than suggested. And historical examples of medical experts taking advantage of helpless patients abound, including the infamous Tuskegee syphilis experiment.
In recent years, we’ve also occasionally seen scientists give inaccurate information to the public on matters of the utmost consequence—such as the pandemic. On 8 March 2020, Dr Anthony Fauci stated that “people should not be walking around with masks.” By April, the official CDC guidelines had flipped, and now recommended masks. If Fauci had simply not known whether masks were effective, this error might be understandable—the pandemic was an evolving situation, rife with uncertainty. However, his later words suggest that he may instead have been motivated by a desire to save scarce masks for at-risk healthcare workers, rather than by a conviction that masks were ineffective in preventing the spread of disease:
I don’t regret anything I said then because in the context of the time in which I said it, it was correct. We were told in our task force meetings that we have a serious problem with the lack of PPEs and masks for the health providers who are putting themselves in harm’s way every day to take care of sick people.
At first, there were not enough masks to meet the needs of healthcare workers and top epidemiologists predicted—wrongly—that if they were recommended for all, factories would fail to boost production in time to meet demand. But it is outside epidemiologists’ area of expertise to predict how efficient factories will be, or how people will react if they are told frankly that masks are effective, but that individuals should delay purchasing them to aid healthcare workers. Trust is a two-way street: if experts cannot trust the public with accurate information, they cannot expect the public to trust them—and this may be one reason why many people have refused to wear masks.
Experts may also too quickly dismiss explanations that don’t fit their previous beliefs, even when such explanations are widely believed by members of the public. For example, experts initially assumed that Covid-19 had been transmitted from an animal to a human. When some—including prominent figures like Senator Tom Cotton—began suggesting that the disease may have instead originated from a lab leak at the Wuhan Institute of Virology, most experts dismissed the lab leak theory as a right-wing conspiracy. Later, evidence emerged that this theory was plausible. Perhaps many experts who dismissed the lab-leak theory did so because it was promulgated by people they didn’t trust, such as Donald Trump. But good science involves considering every explanation, including those that run contrary to your political instincts. A June 2021 survey found that 76% of Republicans believed in the lab leak theory. No doubt, given the emerging evidence, this group felt vindicated—and their already low trust in experts probably diminished further.
The trust we place in experts to provide facts about our external world is a necessity if we want to avoid living in a constant state of uncertainty. But when science appears broken, subject to bias, fraud and the influence of special interests—or when some of its practitioners are revealed as duplicitous, self-serving or cynical—this trust can collapse under the weight of cumulative betrayals.
Most Americans have now chosen one of two paths: most Democrats reflexively trust scientific expertise, while most Republicans reflexively distrust it. This alone might explain many differences in behaviour between the two groups: for example, why most Democrats wear masks and most Republicans don’t, why more Democrats than Republicans recycle, and so on. Some of these differences may be related to educational and geographical differences: both Democrats and Republicans tend to marry and befriend people who share their beliefs. Democrats, like people with scientific expertise, are more likely to have attended college and to live in urban areas with a high concentration of highly educated people. Many Democrats therefore have more in common with experts than many Republicans do, which may make experts easier for Democrats to trust.
In one respect, though, the two sides are quite similar: most people are dogmatic rather than open-minded and rational, and differ only in the content of their beliefs. For many Democrats, those beliefs include “trust the science” and “follow the experts”; for many Republicans, they include a reflexive distrust of experts and of the mainstream media.
Instead of judging Republicans negatively for being sceptical, Democrats should empathise with them. It’s not stupid or crazy to distrust scientific findings, particularly when scientists themselves have acknowledged that many studies have errors or yield inaccurate results. Democrats, for their part, should respect science, but without idolising scientists, and should keep in mind that experts can be wrong. This is not an anti-science view; scientists themselves agree that this is how science should work: we should subject every scientific claim to rigorous inquiry and efforts to falsify it, even when most scientists consider it to be correct.
Republicans should trust that science can yield accurate results when done correctly, especially when many different kinds of evidence reliably converge on the same findings over long periods of time. Some findings are the result of decades of diverse, well-documented and methodologically sound research. The studies supporting them have large sample sizes and qualified research teams (reducing the chances that results will be due to bias or distorted by special interests). Meta-analyses evaluate collections of such studies. Such findings may still be wrong, of course, but that is highly improbable, given the weight of the evidence. Climate change is probably real, and vaccines are probably safe and effective. I believe this, not because I instinctively trust what experts tell me, but because the evidence is compelling.
To reduce polarisation, it might be helpful to think about scientific findings from the perspective of Bayesian epistemology. The general public tends to evaluate scientific opinions in a binary fashion—either evolution is real or it isn’t; either Covid-19 is a hoax or it isn’t. Pollsters add some nuance to these judgments, for example, by distinguishing between strongly agree and somewhat agree. But Bayesian epistemology takes this further by focusing on the probability that a particular claim is correct, rather than a binary belief that it is correct. With every new piece of evidence bearing on the claim, we are advised to update our estimate of how probable it is that the claim is true. For instance, instead of merely stating that I believe nuclear power is safe, I might decide that there’s a 90% chance that this claim is correct. An opponent of nuclear power might start out by deciding that there’s a 10% chance the claim is true. However, after we each read an article demonstrating that nuclear power plants operate according to strict safety standards, we should both increase our estimates of the probability that the claim is true (using Bayes’ rule). The beauty of Bayesian updating is that the original estimates matter less and less as evidence accumulates: two people can start out with almost completely opposite views, but, provided neither begins from absolute certainty, updating on the same evidence will eventually bring their views together.
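The convergence that Bayes’ rule produces can be sketched in a few lines of code. The numbers below are illustrative assumptions, not real data: each new piece of favourable evidence is taken to be three times more likely to appear if the claim is true than if it is false.

```python
# A minimal sketch of Bayesian updating: two observers with opposite
# priors about the claim "nuclear power is safe" update on the same
# stream of evidence and converge. Likelihoods are illustrative only.

def update(prior, p_evidence_if_true, p_evidence_if_false):
    """Apply Bayes' rule to get P(claim | evidence)."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

optimist, sceptic = 0.90, 0.10  # almost opposite starting estimates

# Ten pieces of evidence, each three times more likely if the claim
# is true (0.6) than if it is false (0.2).
for _ in range(10):
    optimist = update(optimist, 0.6, 0.2)
    sceptic = update(sceptic, 0.6, 0.2)

print(round(optimist, 4), round(sceptic, 4))  # both now close to 1
```

Despite starting at 90% and 10%, the two estimates end up within a fraction of a percentage point of each other, because each update multiplies the odds by the same likelihood ratio regardless of the starting point.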
Of course, most people are not going to use a mathematical approach to evaluate claims about the world. But if everyone became more familiar with Bayesian epistemology and probabilistic judgments, it would greatly aid our public discourse about science. Democrats and Republicans might begin by holding completely different beliefs and might assign different weights to new evidence, but as long as they incorporate new information and gradually update their estimates, the two groups’ views can eventually become closer. Instead of implicitly assigning probabilities of 100% and 0% to the claims that are made in public discourse—and engaging in arguments that lead nowhere—we could describe our beliefs in gradated, probabilistic terms, and have more nuanced and interesting debates. We wouldn’t need to embrace a false dichotomy, choosing a side either for or against experts: instead, we could assign different weights to different claims, depending on the degree of consensus among experts and the strength of the evidence.
We can also operationalise this approach by betting money against those who disagree with the predictions our beliefs generate. Last year, the platform PredictIt offered many people a chance to bet on the accuracy of the QAnon conspiracy theory. Even after Joe Biden officially won the election, PredictIt still gave Donald Trump a 15% chance of victory (because some supporters bet that the election results would be overturned). Of course, it’s disturbing when people lose money by betting on obviously false conspiracy theories. However, such platforms also offer the hope of reducing polarised reactions to scientific claims.
It is usually fruitless to try to persuade conspiracy theorists to abandon their beliefs: we are all subject to cognitive biases that may prevent us from changing our minds, even in the face of contrary evidence. By contrast, once you commit financially to a prediction, there is no easy escape—when it turns out that your prediction was wrong, you have to pay up. Repeatedly losing such bets may encourage conspiracy theorists to consider that they might be wrong. We cannot turn every claim into a quantifiable prediction, but, for example, we could bet that average global temperatures will rise a specified amount by a specified date against a climate change sceptic who has bet that they won’t; or bet against an anti-vaxxer that autism cases will not increase with increases in childhood vaccinations.
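The discipline that betting imposes can be made concrete. The sketch below is a hypothetical illustration, not how any real platform prices its contracts: it computes the expected profit of buying a yes/no contract that pays $1 if a prediction comes true, given your own probability estimate and the market price.

```python
# A hypothetical illustration of why betting disciplines beliefs:
# the expected value of a bet depends on how your probability
# estimate compares with the market's price.

def expected_profit(my_probability, market_price, stake=1.0):
    """Expected profit per stake on a contract paying $1 if true."""
    shares = stake / market_price          # contracts bought at market price
    expected_payout = my_probability * shares
    return expected_payout - stake

# If I think a specified temperature rise by a specified date is 90%
# likely, but the market prices it at 60 cents on the dollar, the bet
# has positive expected value for me...
believer = expected_profit(0.9, 0.6)   # positive
# ...while a sceptic who assigns only 20% expects to lose.
sceptic = expected_profit(0.2, 0.6)    # negative
print(believer, sceptic)
```

The point is that the two parties cannot both be right on average: whoever has assigned the worse probability loses money over repeated bets, and that loss is harder to rationalise away than a lost argument.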
There are signs that scientists are improving their practices to make them more reliable, but it is understandable that many continue to distrust experts on a host of issues. We should feel empathy, not contempt, for those who see themselves as betrayed by experts. We should try to think probabilistically when evaluating scientific claims, instead of engaging in dichotomous thinking. And we should remember that there is always a chance that we could be wrong.