Postmodernism has taken some heat in recent years. Anyone with the capacity to say I don’t know who has taken a close look at it since it first emerged from mid-twentieth-century France has considered it, at best, very silly, and some have bothered to say as much publicly.
Folks across the political and intellectual spectrum have sought to lay all sorts of crimes at the feet of this odd brand of gibberish. Everything from the rise of Donald Trump to the erosion of democratic order has been linked to the scourge of postmodern pedantry. I myself have joined in the abuse.
Practitioners have found their egos pricked. Understandably. For a variety of reasons, they have dedicated their lives to a field that revels in obfuscation and obscurantism.
Anthropologist Stuart Chambers thinks the critics have got it all wrong. His defense of postmodernism is a well-written piece, which eschews the tortuous linguistic gymnastics widely recognized as emblematic of postmodern thinking. Instead, in clear, simple prose, Chambers makes a solid case for the idea that a lot of the criticism has been hyperbolic. It inspired me to interrogate my own position.
However, Chambers fundamentally mistakes the nature of the scientific process and, ultimately, fails to advance a convincing case that postmodernism is useful. While Lyotard and Barthes might not be among the four horsemen of the apocalypse, they have produced a body of work that can—relative to the basic aim of questioning assumptions about the world and finding out what is true—most charitably be described as worthless.
Objectivity, Crackpots and Science
Is science objective? The simplistic view is that science is the uncorrupted pursuit of unassailable truth. Most scientists, however, have spent enough time immersed in the messy, hardscrabble world of scientific practice to recognize that such a view is, at best, a caricature and, at worst, leads non-scientists to fundamentally mistake the nature and scope of the scientific enterprise.
Science is first and foremost a cultural phenomenon. Individual scientists are riddled with biases and blind spots. Their views are subtly influenced by their sociopolitical contexts. The scope of their thought and investigation is substantially curtailed by the limits of extant knowledge and technology, some of which might well be faulty. But science is also structured in a way that imbues it with peculiar properties. It can produce mistakes, but, to a degree unmatched by any other discipline, it is uniquely designed to identify those mistakes. Though imperfect, science is—by a considerable margin—the most powerful knowledge-gaining mechanism humans have ever devised.
Many of those with an ideological axe to grind against science like to point out the historical flaws in scientific thought and the failings of individual scientists. Chambers bases the core of his critique on this approach. For him, the history of scientific racism belies any claim to objectivity. Blinkered by the prevailing wisdom of their times, plenty of nineteenth and early twentieth-century scientists saw race as part of the natural order—with white people at the top of the racial hierarchy.
But scientific racism was abandoned as scientists learned more about the world. The assumptions that inspired racist thought were undermined by improved understandings of human genetics and culture. Race science wasn’t cast into the colossal dustbin of failed ideas because critical studies scholars told scientists that they should question their assumptions—it was repudiated because the scientific enterprise itself revealed those assumptions to be untrue.
The same is true of Chambers’ other examples. Conversion therapy was tossed out—alongside that whole slippery mess folks call psychoanalysis—when it was recognized to be a pseudoscientific monstrosity. It is true that coalitions of doctors opposed physician-assisted suicide in Canada. Note here that, outside of research institutions, doctors are not engaged in the business of discovering or refining ideas about how the world works and are not, therefore, scientists. But, even if they were, that some individuals disguise their theological preferences as scientific reasoning is nothing new. This doesn’t undermine any claim to disciplinary objectivity. It simply restates the obvious: individual scientists are human beings and, like other human beings, they are sometimes wrong.
Why Science Is Special
Criticism of this sort is superficial. Chambers does not seem to know what science is or how it operates. This is understandable—few people do.
Most of us have been taught about something called the scientific method. It goes something like this: do a bit of research, form a hypothesis, perform an experiment, then check whether or not the results of your experiment match what you’d predict from your hypothesis. Someone who knows only this understands the scientific process roughly as well as someone who knows the basic elements of story—character, setting, plot, conflict—understands literature.
Science can never be fully understood without first recognizing that it is a process. Individual scientists participate in that process. They study for years until they are ready to keep up with the process as it currently stands. Then they hop in and try to contribute to science, perhaps by speeding things up or pointing them in a subtly different direction. Rarely, they’ll completely reorder an entire paradigm, sending their discipline in a completely new direction. Sometimes, they’ll even retard the process by stubbornly refusing to relinquish harebrained ideas. Whatever their contribution, all scientists eventually get old and retire or die, but the process continues, always seeking new evidence and rooting out conceptual errors. It never arrives at a final destination. Absolute certainty is nowhere to be found.
This is true even in the most robust sciences. In a mature science like physics, all theory is boiled down to the rigorous language of mathematics. That means that, when it is time to experiment, expectations can be expressed with extreme precision. To a degree, this makes the relationship between prediction and observation unambiguous: if a physicist has done her mathematics and calibrated her instruments correctly, a result can say something definitive about the theory it’s designed to interrogate. A result that deviates substantially from predicted values tells researchers that something is amiss. With highly corroborated and heavily mathematized theories, tolerances can get remarkably tight, with theory and measurement agreeing to the thousandth—or even billionth—decimal place.
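To make the comparison between prediction and observation concrete, here is a minimal illustrative sketch. The function and all the numbers in it are invented for demonstration (they come from neither Chambers nor any physics dataset); the point is only the logic of expressing a deviation in units of measurement uncertainty.

```python
def sigma_deviation(predicted, observed, uncertainty):
    """Return the gap between prediction and observation,
    expressed in units of the measurement uncertainty."""
    return abs(observed - predicted) / uncertainty

# Hypothetical measurement: predicted 9.81, observed 9.79,
# with an uncertainty of 0.05 on the observation.
z = sigma_deviation(9.81, 9.79, 0.05)

# A common (field-dependent) convention: a deviation of several sigma
# suggests something is amiss with the theory, the instruments or the math.
print(f"deviation = {z:.1f} sigma")
```

A deviation of 0.4 sigma, as here, is comfortably consistent with the prediction; a deviation of five sigma or more would send researchers hunting for an error, or a discovery.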
But, even with a theory like general relativity, whose predictions have been tested and confirmed countless times, scientists still don’t claim to have discovered the final, unimpeachable description of space and time. In fact, most physicists will probably tell you that, at very vast or very tiny scales, the physics of general relativity is useless, if not entirely wrong. General relativity is one of the most successful theoretical frameworks ever devised. There’s more reason to be confident in it than there is to be confident about any of the comforting assumptions we use to get through the day—my spouse loves me, my job is secure, I’ll see my family again in the evening—but, in the strictest sense, it is not true beyond doubt.
All this uncertainty amplifies as you move away from the world of massive objects and elementary particles. At other levels of resolution—where things become looser, more dynamic and complicated—our picture of the world gets a whole lot fuzzier. By the time you get to the science of human behavior, things start getting incredibly messy.
For social and behavioral scientists, the sort of precision available to hard science seems almost unattainable. Provisionality here isn’t just a tacit understanding or a way of reconciling the tools we’ve got with the universe we inhabit. When it comes to questions about what people do and why they do it, even our best explanations are provisional. Psychologists, anthropologists and sociologists accept that the best that they can do will probably only result in a loose approximation of the full truth.
They forge ahead knowing that even their failures will be instructive. Science is all about asking are you sure? As David Hull argues—in line with earlier thinking by Imre Lakatos, Paul Thagard and Thomas Kuhn—the whole thing unfolds in a pseudo-Darwinian process. Scientists are embedded in incentive structures that reward them for poking holes in one another’s thinking, even if they are too biased to spot fatal weaknesses in their own output. Scientists work toward a shared goal—to build or discover increasingly faithful descriptions of and explanations for the world we inhabit—within a complex environment of competition and cooperation.
What Is Postmodernism Good For?
Science is an imperfect and sometimes clumsy process. Entire disciplines can be temporarily misled or even completely derailed by cultural biases that individual researchers fail to spot. Fortunately, the process of scientific discovery prevents erroneous thinking from remaining hidden for long. Misguided schools of thought are always eventually identified and discarded. Not because cultural critics tell scientists they are on the wrong track, but because the process of scientific discovery is remarkably adept at continually asking itself are you sure? and rooting out rotten ideas.
Nothing like this can take place in the world of postmodernism. Because postmodernists reject the idea of a final arbiter, such as a universally accessible and ultimately knowable reality against which claims can be evaluated, when postmodernists ask themselves are you sure? they can never really know the answer. Postmodernism lacks an anchor point for error correction. As a result, it is fundamentally illegitimate—and ultimately pointless—as an intellectual enterprise. It is, at best, a self-perpetuating indulgence, ostensibly doing the work scientific disciplines naturally do on their own.
But, while good scientists always strive to make their work both useful and intelligible to a world beyond their native disciplines, postmodernists and critical studies enthusiasts are perpetually engaged in a convoluted, circular performance for the like-minded. They play what appears to be an esoteric and sophisticated game, which, on closer inspection, turns out to be hide and seek—with a rulebook written in High Elvish and riddled with redundancies, contradictions and extraneous passages.
In repudiating the notion that truths are fixed for eternity, postmodernism denies itself the capacity to course correct. Its students may pay lip service to the crude notion that some truths are better than others. But, without recourse to something universal and knowable outside their individual and social contexts, they’ll never be able to tell which truths those are.
This is true even for conditional facts and the explanations they feed. Many realities are highly conditional. The idea of human rights, for example, is a relatively recent invention, which emerged in a specific cultural and historical context. That doesn’t mean that the value and consequences of the idea aren’t open to careful empirical evaluation, even if the idea itself might morph and expand over time. However, without recourse to something universal and knowable—something that can be accessed, shared and understood, despite the influence of cultural context and individual personality—such a reckoning becomes impossible.
Postmodernism has yet to articulate an epistemological footing or methodological toolkit that gives it special license as a way of asking, are you sure? There’s nothing to suggest it is a useful approach. Scores upon scores of essays have advanced the field no further than the principles articulated by René Descartes 380 years ago, in the opening chunk of his Meditations on First Philosophy. Postmodernism has nothing to offer that improves upon reflection, reason and dialogue—tools fully accessible to everyone, even if they’ve never cracked open a work by Foucault or Derrida or occupied a university position. Identifying what postmodernism is good for is incredibly tricky. From the outside, that entire school of thinkers looks like a bunch of naked emperors arguing over who has the best outfit.
Ways of Knowing Versus Ways of Confusing
Science recognizes that our understanding of truth is conditional, approximate and uncertain, but it rests on the fundamental, commonsensical and thoroughly parsimonious contention that truths exist and that we can, with practice, use the data provided by our senses to tell when we’ve got a little closer to them. Biological evolution is not a conditional truth. Our conception of it may grow and change as we come to understand more about how it works, but the process itself is a non-negotiable fact of the natural universe. The same is true of fundamental physics. F = ma will remain a valid description of the way material objects behave, regardless of how society shifts and changes.
What we need to ask ourselves is not whether science is a perfectly objective machine for generating irrefutable truths. It isn’t. No scientist worthy of the name would argue otherwise. The real questions are these: Is science more objective than other methods? Is it better at describing and explaining observations? Does it get closer to objective truth? The answer to all of the above is a resounding yes!
There’s no room for serious debate on this point. When we seek to treat or cure a disease, do we turn to the Pope? When we want to put a satellite into orbit or a human on the moon, do we consult critical studies professors? When we want to investigate the high energy world of elementary particles, do we pore over the wisdom of ancient mystics? When we want to develop alternatives to burning hydrocarbons for energy—or get a sense of what will happen if we don’t—do we give funding to postmodern scholars?
Individual scientists are not perfectly objective. They apply an often ad hoc, continuously evolving assemblage of imperfect methods to their pursuit of truth. But they are constantly asking themselves and each other, are you sure?—and the fundamental structure of the scientific process makes them better at both asking and answering that question than anyone else. Their enterprise demands humility before the unknown as its basic entry requirement. Scientists work at the frontiers of knowledge, knowing full well they may never make any headway.
As Richard Feynman put it, “science is the belief in the ignorance of experts … the first principle is not to fool yourself—and you are the easiest person to fool.” Science is about setting a community of very smart people the task of figuring out what is true and holding their feet to the fire. This last task is easier said than done, but it ultimately boils down to comparing scientific ideas to the observable world and asking whether or not they hold up. With science, we take a good look at how we think the world works and ask ourselves, are you sure?
The history of science has been one long series of debunking assumptions. We take many scientific truths for granted today, but very few of science’s robust explanations are intuitively obvious or self-evident. The earth is 4.6 billion years old. The landmasses we inhabit are cosmically ephemeral and constantly moving. All known life is derived from a single common ancestor, which lived more than 3.5 billion years ago. Time is not a physical constant, but speeds up or slows down according to the relative motion of observers. The universe is almost unfathomably ancient, and everything in it was once compressed into a single dense, hot singularity. Everything—including us—is made up of a modest handful of elementary fields and particles. The behavior of those particles is probabilistic. Objects we interact with on a daily basis are, like us, mostly empty space. All scientific progress has come from asking ourselves are you sure? and bravely admitting that no, we are not.
Science is not a perfect method for finding the truth. But it is the only method we have for getting anywhere close to it. Postmodernism, on the other hand, is just a way of getting paid to churn out badly written academic essays that no one who doesn’t also call herself a postmodernist will ever read.