Science is all the rage at the moment. The Covid-19 pandemic has given us a unique window into how scientists carry out research into the basic mechanisms of disease; how scientific data are translated (with varying degrees of success) into public health guidelines; and the rapid deployment of science and industry to develop lifesaving treatments and vaccines.
“Believe science” has become a comforting social media slogan amid the past year’s chaos, but this platitude has an undertone that runs contrary to the true spirit of scientific inquiry. All too often, “believe science” means “obey authority” and is used as a way to shut down debate. Science is simply assumed to reign supreme.
Have our educators failed us by telling a simplistic story about a complex endeavor? The history and philosophy of science are still rarely taught to scientists, even at the graduate level, even though the philosophy that undergirds the scientific enterprise is truly illuminating and could help both scientists and the general public better grasp the discipline.
What Is the Scientific Method?
As science popularizers typically describe it, the scientific method involves observing the world around us, formulating hypotheses to explain our observations and testing those hypotheses by experiment. But this simple formula has a long and complex history.
The beginnings of this concept can be traced to Aristotle in the fourth century BCE. Unlike his mentor Plato, Aristotle believed that the universal principles, or forms, of nature were best understood through a careful investigation of the natural world. He was the first thinker to write extensively about biology, and his students’ many sketches of local animal life inspired his theories of animal classification and development in On the Generation of Animals. In his Physics, Aristotle outlines his four-part theory of causation. Take a wooden table, for example: its material cause is the wood of which it is composed, its efficient cause is the carpenter who crafted it, its formal cause is the particular shape that makes it a table rather than something else, and its final cause is the purpose for which it was created.
Despite his passion for empirical observation, Aristotle was committed to deductive logic as the ultimate means of acquiring knowledge. By flawlessly reasoning from universally accepted premises to conclusions, he argued, we can guarantee certainty. But Aristotle’s method is not equipped to investigate the fundamental premises themselves.
It would be many centuries before Francis Bacon laid down a firmer grounding for the scientific method. New technical instruments had begun to give people the ability to peer deeper and farther into nature, and Bacon was eager to explore the wealth of experimental evidence they could provide. Finding Aristotle’s syllogistic method inadequate to describe the complexity of the natural world, Bacon undertook his “great renewal,” a grand project to endow the sciences with a new, more rigorous methodology. Bacon’s 1620 work the New Organon, which was designed as an overhaul of Aristotle’s Organon, the logical treatise from which it takes its name, is widely considered to provide the first systematic description of a scientific method. Bacon writes:
There are, and can be, only two ways to investigate and discover truth. The one leaps from sense and particulars to the most general axioms, and from these principles and their settled truth, determines and discovers intermediate axioms; this is the current way. The other elicits axioms from sense and particulars, rising in a gradual and unbroken ascent to arrive at last at the most general axioms; this is the true way, but it has not been tried.
Bacon’s proposed method relies on meticulously curated observations, organized into an elaborate system of tables that would make a modern scientist blush with envy. In one “Table of Presence,” Bacon lists all the phenomena associated with heat, for example, including “the sun’s rays” and “lightning that sets fires,” and then compares them to a “Table of Absence”—closely related phenomena that do not produce heat—containing items such as “the moon’s rays” and “sheet lightning which gives light but does not burn.” A third table correlates how these phenomena increase or decrease with changes in certain other properties. By discarding the redundancies among the various tables, Bacon reasons, the causal principle underlying each phenomenon can be uncovered.
About a century later, David Hume famously threw a wrench into this project. In A Treatise of Human Nature, he notes that, every time we draw an inductive inference, our chain of reasoning conceals the unstated, unproven premise that nature is uniform across space and time: “If Reason determin’d us, it would proceed upon that principle that instances, of which we have had no experience, must resemble those, of which we have had experience, and that the course of nature continues always uniformly the same.”
Hume’s “problem of induction” dogs philosophers to this day. It doesn’t much bother scientists, however. Induction is the firm foundation of the scientific method and researchers carry on making empirical observations and inductive inferences unperturbed by philosophical dilemmas.
In addition to laying out a new method of inductive logic, Bacon warns about the various prejudices—“Idols”—that can impede our ability to obtain reliable, objective knowledge of the world. The “Idols of the Tribe” are those distortions inherent to human consciousness and sensory perception, and the “Idols of the Cave” are those particularities of family, friendships, culture and geography that shape how we perceive the world. The “Idols of the Marketplace” and “Idols of the Theater” are, respectively, those influences that originate in language and in the prevailing dogmas and philosophies of the day. Bacon was a truly prescient thinker. His theory of “Idols” foreshadows the modern concepts of cognitive bias and the other types of systematic error that scientists attempt to banish from their work.
Over the next few centuries, natural philosophy, which encompassed all scientific endeavor, was gradually transformed into the highly technical and specialized disciplines we recognize today. The twentieth century brought perhaps the most dramatic changes, as government and industry began to invest heavily in scientific research, having recognized its value to public health, technology, national security and prestige.
At the same time, there was heated debate as to how science itself makes progress. Karl Popper believed that the defining quality of a scientific theory is its ability to be falsified: a solid theory should contain hypotheses that can be decisively ruled out by experiment. It is primarily by falsifying erroneous theories, not by accumulating supporting evidence for true ones, that science makes progress. Overall, scientists tend to agree that falsifiability is an important criterion, which is why string theory—considered by some to be more metaphysics than science—has generated so much controversy.
Thomas Kuhn’s theory of how science advances is more radical. Instead of obeying a linear process of verification and falsification, he thought that science as a whole often advanced in unpredictable jumps. In The Structure of Scientific Revolutions, Kuhn writes that scientists often work productively on the basis of a given paradigm, or foundational research model, for long stretches of time. Then there is a trickle of contradictory or puzzling experimental results. Eventually, these problematic findings grow so numerous that they provoke a crisis in the discipline, which is resolved when a new paradigm is adopted. Competing scientific paradigms are often incommensurable, meaning that their methodologies and languages cannot be directly compared. But, although there are no permanent, objective rules that govern the choice between competing paradigms, Kuhn did not believe that the choice was arbitrary. A scientific paradigm should be simple, internally coherent, compatible with other accepted theories and able to generate new avenues of research and explain a wide range of phenomena.
One such paradigm shift was ushered in with Charles Darwin’s publication of On the Origin of Species in 1859. Darwin’s theory of descent with modification from a common ancestor, subject to the mechanisms of natural selection, explains the diversity of life in terms of purely physical principles. At a time when the only other explanatory game in town was divine creation, the evolutionary paradigm placed humanity squarely within, rather than above, nature. The story of our place in the universe was completely overturned. The significance of this has been aptly summarized by evolutionary biologist Theodosius Dobzhansky: “Nothing in biology makes sense except in the light of evolution.”
Trouble in Paradise?
As Kuhn points out, most day-to-day science consists of the modest, incremental progress of thousands of scientists, woven into a practical canon of knowledge. Researchers working anywhere in the world should be able to review the published findings of their peers and build upon them: the ability to reproduce experimental results is a cornerstone of the scientific method. Recently, however, there has been growing alarm over a replication crisis, particularly in the biomedical and social sciences.
One high-profile example came to light when the pharmaceutical firm Amgen attempted to reproduce the findings of 53 landmark papers in cancer biology—and succeeded with only six. This raises serious questions. Are resources being invested in technology and therapeutics on the basis of flawed data? Is this problem endemic to just one corner of scientific research, or is it more widespread? A few factors may be playing a role here: flawed research methodology and abuse of statistics, such as p-hacking; the pressure to publish or perish, which encourages rushed and shoddy research; and the gradual drift toward Big Science, in which the majority of scientific R&D is funded by corporations instead of governments. Science is a human institution, limited by the fallibility of its individual human practitioners.
Passionate debate over the proper limits and role of science in our societies will surely continue. But a careful study of the history and philosophy of science could have a salutary effect on the discipline. It would help us better communicate science to the public, suggest fruitful paths forward as we learn from the challenges of the past, and bring this venerable institution back down to earth where it belongs.