Despite the current zeitgeist, in which many prefer to approach questions of fact through postmodernist analysis, other ways of knowing or lived experience, the fact remains that, for many questions, there is no better source of understanding than well-conducted empirical research. Of course, most of us cannot do that research ourselves: we depend upon scientists, university public relations departments and professional science associations to communicate research findings clearly and accurately. But, too often, they—consciously or unconsciously—exaggerate the strength of evidence or findings, or gloss over inconsistencies, and thus misinform the public. (I call this death by press release.) As a result, we should be somewhat cautious when we read about new research findings in the news.
To illustrate how this distortion can happen, I’d like to walk you through a particular study. I chose it because it is reasonably well designed, despite some flaws, and because those flaws are very common in social and medical science studies, which makes it fairly representative. The study purports to examine the relationship between violent pornography and teen dating violence. The authors followed about 1,700 teens to find out whether viewing or reading about sexual violence predicted later teen dating violence. It’s an important topic, they used an appropriately large sample, and they had a decent design.
The problem begins with the study’s abstract—the first paragraph of any scientific research report, which briefly summarises the study and its context, including the state of past research on the topic. Most people who hear about a study read the abstract and not much else. So the abstract heavily influences what most people will think the research has shown.
What does the authors’ abstract suggest is the state of past research? They write, “Exposure to pornography in general has been linked with adolescent dating violence and sexual aggression, but less is known about exposure to violent pornography specifically.” As a summary of past research, this is inaccurate. Many past studies have found no link, and, recently, when my colleague Richard Hartley and I conducted a meta-analysis of porn research (a meta-analysis combines the data from all existing studies on a topic to look for patterns in the evidence), we found that the evidence could not link pornography use to sexual violence. Scholarly opinions can differ, but asserting as a fact something that purports to summarise the overall state of the research field, without any qualifiers or mention of past studies that have found no link, is misleading. So, not the best start.
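(For readers curious about the mechanics, a meta-analysis pools the effect sizes of individual studies, usually weighting each study by how precise its estimate is. The sketch below is purely illustrative: the numbers are made up, not drawn from our meta-analysis, and the simple inverse-variance weighting it uses is one common pooling method, not necessarily the exact one we applied.)

```python
# Minimal fixed-effect meta-analysis sketch (illustrative only; made-up numbers).
# Each study contributes an effect size (e.g., a correlation converted to a common
# metric) and a sampling variance; studies with smaller variance get more weight.

effect_sizes = [0.10, -0.02, 0.05, 0.00]    # hypothetical per-study effects
variances    = [0.004, 0.010, 0.006, 0.003] # hypothetical sampling variances

weights = [1.0 / v for v in variances]                   # inverse-variance weights
pooled = sum(w * es for w, es in zip(weights, effect_sizes)) / sum(weights)
pooled_se = (1.0 / sum(weights)) ** 0.5                  # standard error of pooled effect

print(f"Pooled effect: {pooled:.3f} (SE = {pooled_se:.3f})")
# With these toy numbers, the pooled effect sits near zero and its confidence
# interval spans zero: the kind of pattern that cannot support a claim of a link.
```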
And how do the authors describe their own study’s findings in their abstract?
Violent pornography exposure was associated with all types of TDV [teen dating violence], though patterns differed by gender. Boys exposed to violent pornography were 2–3 times more likely to report sexual TDV perpetration and victimization and physical TDV victimization, while girls exposed to violent pornography were over 1.5 times more likely to be perpetrate threatening TDV [sic] compared to their non-exposed counterparts. Comprehensive prevention strategies for TDV may consider the potential risks associated with exposure to violent pornography, particularly for boys, and provide an alternative source of education about healthy sexual behavior and relationships.
That sounds pretty conclusive. And, actually, pretty intuitive—it’s what the average person on the street probably expects to hear.
But here’s the thing to remember when you read something like that: study abstracts are often full of shit. For example, they may use a provocative term (like violent pornography) to describe what they are studying, because it gets people’s attention—rather than describing specifically what they actually measured—and that can be misleading. It turns out that these authors didn’t specifically measure exposure to what most people would call violent pornography—instead, they measured exposure to depictions or descriptions of sexual violence in any medium. (As they write later in their report, “Participants indicated the number of times they had ever consumed magazines, videos or films, or written books depicting a female or females being forced to engage in sexual acts.”) Sounds terrible, of course, but that description includes Law and Order: SVU, Shakespeare’s Titus Andronicus, and a whole host of other material that is generally considered gritty but respectable. Based on the instructions they were given, teens in the study may well have counted examples that most people would not consider pornography.
Another problem is that researchers tend to use abstracts to highlight details of their results that support their hypotheses, while failing to mention details that don’t. In politics, that’s called spin. In research, it might be called outcome reporting bias. Did that happen here? Let’s take a closer look.
The authors tested twelve different outcomes, but found that only four of those outcomes were statistically associated with prior exposure to depictions or descriptions of sexual violence. Those four are the only outcomes that are mentioned in the abstract. Failing to mention the eight outcomes where no statistically significant relationship was shown makes the results sound more conclusive than they are. Another unmentioned piece of information that would help general readers assess the results is that survey studies like this one tend to produce less reliable results, because there is no way to tell whether participants are reporting truthfully or remembering accurately.
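To see why omitting the null results matters, consider a toy simulation. It uses entirely hypothetical data with no true effects at all, and it is not a claim that the study’s four significant findings are spurious; it simply shows how often at least one of twelve outcomes will cross the conventional p < .05 threshold by chance alone, which is why readers need to see all twelve results rather than only the hits.

```python
import random

# Toy simulation (hypothetical data, not the study's): test 12 outcomes with no
# true effect, at the conventional p < .05 threshold, and count how often at
# least one "significant" finding appears by chance across many repetitions.

random.seed(1)
N_REPS, N_OUTCOMES, ALPHA = 10_000, 12, 0.05

reps_with_false_positive = 0
for _ in range(N_REPS):
    # Under the null hypothesis, p-values are uniformly distributed on [0, 1].
    p_values = [random.random() for _ in range(N_OUTCOMES)]
    if any(p < ALPHA for p in p_values):
        reps_with_false_positive += 1

print(f"At least one 'significant' outcome in "
      f"{reps_with_false_positive / N_REPS:.0%} of repetitions")
# Roughly 1 - 0.95**12, i.e. about 46%. An abstract that mentions only the
# significant outcomes hides how easily such hits can arise.
```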
Abstracts like these are par for the course in too many social science research reports. But the result is that people’s confidence in the consistency and clarity of research results tends to be much higher than the results warrant: in reality, social science research results tend to be inconsistent and murky. And our professional organisations are no help. For example, the American Psychological Association has a long history of exaggerating the strength of research findings to suggest that video games cause aggression (they do not), that spanking is associated with negative life outcomes (it is not clear that it is) and that certain traits associated with both masculinity and political conservatism are also associated with psychological disorders (there’s scant evidence of this). Add to this the historically liberal bent of social science researchers, and such organisations too often act like a combination of marketing firm and far-left pressure group.
Such communications are particularly likely to grossly distort public understanding when the research relates to emotionally or morally touchy topics, where nuanced or messy data may conflict with a prevailing social narrative. Pornography is one of those topics; other examples include race and policing, and whether the mentally ill are more likely to engage in violence (for some conditions they are, particularly if substance abuse is also involved).
Researchers engage in distorting communications because they are human, and because, historically, every incentive has pushed them in that direction. Messy, muddled results get fewer newspaper headlines, attract less excitement and grant funds, and win fewer accolades. Most of us enjoy being praised for being on message—and want to avoid the costs of producing data that’s off message—whether those costs are less attention, reduced funding, fewer professional honours or even getting cancelled. Accurately communicated results often don’t tell us the neat, orderly stories we human beings prefer to hear. Most of us would rather hear that violent porn harms kids than something like, We tried to measure violent porn, but maybe accidentally included Shakespeare in that definition, and whatever it was we were actually measuring, it was associated to some extent, in some people, with a few bad outcomes, but was not related to other bad outcomes in the way you might have assumed it would be.
So why believe in science at all? Because science remains the only way of knowing that eventually self-corrects. Sure, sometimes research gets things completely wrong, and sometimes the self-correction takes longer than it should. But eventually, sceptical souls start prodding at received wisdom, and accurate data wins. Today’s new approaches, such as the open science movement, can speed up this self-correction process, helping scientists produce research that is both more rigorous and more transparent, making it easier to fact-check.
Meanwhile, general readers should be cautious and sceptical consumers of information about scientific research. The science that hits a newspaper headline isn’t always reliable, and professional groups have motives for not always telling us the unvarnished truth. We can be cautious and sceptical without indulging in the science denialism that has become prevalent on both the right and left. Science remains the best way for us to understand the world around us; we just have to remember that scientists, too, are human.