I learned a hard lesson last year, after blowing the whistle on my coauthor, mentor and friend: not all universities can be trusted to investigate accusations of fraud, or even to follow their own misconduct policies. Then I found out how widespread the problem is: experts have been sounding the alarm for over thirty years.
One in fifty scientists fakes research by fabricating or falsifying data. They make off with government grant money, which they share with their universities, and their made-up findings guide medical practice, public policy and ordinary people’s decisions about things like whether or not to vaccinate their children. The fraudulent science we know about has caused thousands of deaths and wasted millions in taxpayer dollars. That is only scratching the surface, however—because most fraudsters are never caught. As Ivan Oransky notes in Gaming the Metrics, “the most common outcome for those who commit fraud is: a long career.”
There are two reasons for this. First, many scientists who witness fraud don’t report it, because they believe nothing would happen if they did, and because they fear retaliation. Second, when fraud is reported, the job of investigating it falls to the fraudsters’ universities. Most whistleblowers inform their universities directly. Even if they don’t, federal agencies, like the National Science Foundation and the National Institutes of Health, refer fraud accusations back to universities for investigation, and publishers and the Committee on Publication Ethics tell journal editors to do the same.
When someone blows the whistle, universities have an opportunity during the initial inquiry—in which they decide if there’s enough evidence to justify a full investigation—to catch and kill the accusations. If universities open an investigation, they must notify relevant federal agencies and share their later findings, but if they decide not to investigate, the case closes. Closing cases without full investigations is an effective way to prevent government involvement and to avoid having to return fraudulently obtained taxpayer money, because federal agencies rarely reopen closed cases. Often, they do not find out about them at all.
Universities can make a lot of money from sham science; they lose money by catching fraudsters. Uncovering fraud also brings negative publicity and a host of other headaches, such as potential lawsuits for defamation and wrongful termination. Even in biomedical cases, where the public health consequences of fake research are most severe, universities dismiss almost 90% of fraud accusations without an investigation, or even an auditable record.
Claims that universities cover up fraud and even retaliate against whistleblowers are common. Last year, Duke University settled a whistleblower lawsuit that alleged it had wrongfully obtained and wasted “over a hundred million dollars of taxpayer funds” through massive biomedical research fraud and “institutional malfeasance.” The same year, Kristy Meadows sued her alma mater, Tufts University, alleging that, after she accused her adviser, Elizabeth Byrnes, of faking an experiment, the university violated its misconduct policies, mishandled her fraud accusations and failed to protect her from retaliation by faculty and staff. According to the lawsuit, the retaliation against Meadows included, among other things, discontinuing her stipend, delaying her graduation and falsely accusing her of theft multiple times. When Meadows asked her adviser about faking the data, Byrnes reportedly said it was fine, “because, if they had done the experiment, this data reflected the result they would have gotten.” Tufts University cleared Byrnes of wrongdoing and promoted her.
Terri King filed a similar lawsuit against the University of Texas Health Science Center at Houston, claiming that it demoted and then fired her in retaliation after she reported her supervisor, Dianna Milewicz, for faking research on cardiovascular disease. The university “failed to fully investigate and fraudulently covered up research misconduct by Milewicz in an effort to allow her and other researchers full access to federal grants,” the lawsuit alleged. Unlike Duke and Tufts, the University of Texas is a state agency. King’s lawsuit was dismissed on the grounds that, as an “arm of the state,” the university is exempt from liability. Milewicz is now the university’s chair of cardiovascular medicine and is studying a leading cause of childhood death, using a million-dollar award from the American Heart Association.
Public records obtained during my recent whistleblowing experience shed light on the inner workings of how universities can silence fraud accusations. I sent evidence to Florida State University (FSU) suggesting that one of its professors, Eric Stewart, may have faked five studies and defrauded the government of grant money. FSU did an initial inquiry but decided not to conduct a full investigation. Stewart later retracted all five studies, including two he had supposedly corrected earlier. Like many universities, FSU has strong policies for handling fraud accusations, some copied directly from government websites. These written policies are a source of credibility; strong policies suggest any cleared faculty must be innocent. But internal documents reveal that FSU’s policies were just window dressing. When the university received fraud accusations, it treated the policies as optional, ignoring those that would ensure a thorough and unbiased inquiry.
FSU started by ignoring its evidence-collection policy: “Sequestration of research records should take place concurrent with or prior to notification.” This policy is important, according to FSU, because “prompt and complete sequestration of physical evidence … is vital for resolving misconduct allegations.” Despite this policy, FSU never sequestered Stewart’s raw data—not when it notified him of initial accusations from a pseudonymous source (“John Smith”), not when it notified him of new accusations by one of his coauthors (me), and not when administrators learned that Stewart was destroying his data against their explicit directive: “Please just do not alter the original data set in any way.” FSU didn’t even sequester Stewart’s original output (tables of results). In fact, over a month after FSU notified Stewart, the inquiry committee had to ask administrators to get “copies of the original log or output files,” documents it should have had from the start. And even after sending this email, the committee still never got the original output, much less the original data.
Consequently, the committee lacked complete information, and said so itself repeatedly. “Committee did not have access to output from original analyses, so we could not determine whether the original standard errors lacked ending zeros,” the committee wrote in its final report about one of the main statistical irregularities in Stewart’s studies. It also explained that, while Stewart described his data files, he “did not share with the committee how he created these files.” Just days before finalizing its report, the committee emailed an administrator to ask: “in lieu of making a recommendation, do we have the option of indicating that we did not have sufficient information to make one?” If FSU had followed its own evidence-collection policy, the committee members would have had Stewart’s raw data and could have examined it themselves.
FSU also ignored its conflict-of-interest policy, which lists “potential conflicts such as collaborations, co-authorships, financial conflicts, etc.,” and states that the university should “ensure that no person with such a conflict is involved in the research misconduct proceeding.” Despite this policy, two of the three inquiry committee members, William Bales and Sonja Siennick, were Stewart’s coauthors. Each had written multiple articles with Stewart, and both had worked with him for many years in FSU’s College of Criminology & Criminal Justice. Besides collaborating directly with Stewart, Bales and Siennick had also coauthored over 25 journal articles and book chapters with other members of Stewart’s research team, all of whom were authors on the five studies the inquiry committee was evaluating.
The committee’s close connections to Stewart and his research team may explain its apparent reluctance to do anything that might trigger a full investigation. When the committee requested more documents from administrators, for example, it asked repeatedly about triggering a full investigation: “do these requests for additional information … constitute an investigation, or can they be considered part of our inquiry?”; “does insufficient information to determine misconduct require that we recommend an investigation?”
Its strong ties to Stewart could be why the committee toned down its report before releasing it. A draft report included a section titled “Mischaracterization of data,” and described what sounds like falsification in two studies: “Dr. Stewart revealed that he had misrepresented the data and methods in the 2011 (Johnson et al.) and 2015 (Stewart et al.) papers.” However, the final report included softer language and applied it to only one paper: “Dr. Stewart incorrectly described the data used in the 2011 paper on which Dr. Pickett was a coauthor.” The committee knew the 2015 paper included the same incorrect descriptions, but changed its report anyway.
Conflicts of interest may also be at fault for one of the committee’s most surprising decisions. Stewart admitted he misrepresented a 2013 survey in three articles. The articles reported a 2013 survey of 2,736 Americans, but Stewart told the committee the 2013 survey really had only 1,079 respondents. He said he combined them with 1,432 respondents interviewed years earlier, in 2007 and 2008. Even those numbers don’t add up, however: they total 2,511 respondents, 225 fewer than reported in the articles. The committee decided to look the other way, omitting from its final report any mention of Stewart’s misrepresentation of the 2013 survey or of the hundreds of missing respondents.
FSU also ignored its key fact-finding policy: “The inquiry committee and the RIO [Research Integrity Office] must … interview each Respondent [Stewart], Complainant [John Smith, me], and any other available person who has been reasonably identified as having information regarding any relevant aspects of the inquiry [Marc Gertz, Jake Bratton].” Despite this policy, the committee only interviewed Stewart, and didn’t cross-check the information he gave them with other people. Stewart told the committee Gertz conducted the 2013 survey, but Gertz wrote an email saying he didn’t: “Not me, wish it were.” The inquiry committee never interviewed Gertz, even though Stewart admitted he couldn’t provide correspondence from Gertz proving he supplied the 2013 survey data.
Stewart claimed he received two different samples in 2008 from Jake Bratton to use in our 2011 article, which reported 1,184 respondents. I told FSU administrators this wasn’t true, and that Bratton wrote several emails saying it wasn’t true. In one, Bratton wrote:
That survey in the article and those questions are N = 500. The second file sent per my email you cite was a match file of census data to merge on respondent ID based on self-reported zipcode, none of the original data was included. I have no record or recollection of asking that dependent variable in a following survey and TRN [The Research Network] was closed in 1Q 2010.
The committee never interviewed Bratton—it didn’t even spell Bratton’s name correctly in the final report—and it never interviewed me, even though I repeatedly offered to answer questions and provide relevant emails. One might conclude from this that FSU was interested in getting Stewart’s story, but was not interested in verifying whether it was true.
Unfortunately, mishandled investigations appear to be common at universities across the world. After learning of fraud allegations against geneticist David Latchman’s lab, for example, University College London waited over a decade to launch a formal investigation. When the university finally got around to investigating, it found widespread data falsification, concluding that Latchman’s lab had published at least nine fraudulent studies. No disciplinary actions were taken against anyone.
Similarly, a growing chorus of observers has expressed concern about how the University of New South Wales (UNSW) handled allegations against Levon Khachigian, a professor of medical sciences who has received millions of taxpayer dollars to study cardiovascular disease and cancer. Over many years, different whistleblowers have accused Khachigian of fraud, and six of his studies have been retracted. After UNSW conducted a series of investigations that cleared Khachigian of wrongdoing, the whistleblowers spoke out, claiming the university mishandled their accusations. “I haven’t been interviewed and I was not able to put all my evidence about what happened to the panel,” one whistleblower said. Another whistleblower stated, “the panel told me that they wouldn’t hear my concerns about many of the issues and they were constrained by the very narrow terms of reference set by the university.”
Members of the committees that investigated Khachigian have also begun to speak out against UNSW. One investigator says the university denied his committee access to Khachigian’s raw data. Another claims that the university explicitly forbade his committee from concluding that Khachigian committed research misconduct. Peter Brooks served on one of the investigation committees. The experience convinced him that universities shouldn’t investigate themselves. “I think they do have significant conflicts of interest,” Brooks said.
More than three decades ago, after spending years at the National Institutes of Health studying scientific fraud, Walter Stewart came to a similar conclusion. His research showed that fraud is widespread in science, that universities aren’t sympathetic to whistleblowers and that those who report fraudsters can expect only one thing: “no matter what happens, apart from a miracle, nothing will happen.” It is time for this to change. It is time for independent fraud investigations.