In summing up the problems with academic publishing, I’d choose two words: perverse incentives. Of course, academics must show that we’re using grant money to produce useful research. But academic institutions and grant-awarding groups evaluate academics’ success primarily by counting the number of publications we put out each year and, in some fields, the number of times our publications are cited by others. These numbers are not always an accurate proxy for an academic’s contribution to the field, and the narrow focus on numbers feeds a publish-or-perish mentality. This is not good for academia or the public, or for the efficient allocation of grant funds.
Who Counts as an Author?
Publication numbers don’t always reflect academic merit. Many papers list multiple authors, and first-listed authors are sometimes assumed to be the primary authors of the paper or primarily responsible for the research. However, these assumptions can be incorrect. Norms about who gets author credit and how authors are listed vary from group to group, and from field to field. In my field, legal scholarship, when papers have multiple authors, they are often listed in alphabetical order by surname. (I thank my father for the surname Barnett). And, as is the norm among legal scholars, I claim authorship only if I have written a portion of an article or book (and I note the percentage of my written contribution in each publication I list on my CV).
STEM fields use different criteria for allocating authorship. In STEM fields, the coveted first-author listing is usually given to the person primarily responsible for the research, and there can be arguments over who that person is. In addition, it is common for the head of the lab hosting the research to be listed as last author—and for others to be listed as authors because they developed some technology or methods used in the research—even though they had no direct involvement in the research or writing. And because one of the authors is usually designated as the corresponding author (to whom questions should be directed), some readers may assume, perhaps erroneously, that this author is primarily responsible for the research. These variable practices not only misleadingly inflate people’s publication numbers—they also obscure who is responsible for the research, and in what proportion, which may be particularly problematic if the research turns out to be flawed and there’s a need to determine responsibility. There have been attempts to deal with the difficulties of authorship conventions in STEM, but no consistent practice has emerged.
Publish or Perish?
When success is judged by the number of publications, then of course academics wish to publish as much as they can, get author credit on multiple-author papers, and show they have impact. This creates an incentive for academics to focus their research primarily on what is likely to result in publishable and often-cited papers, rather than on the questions they find most compelling and important. And more is not necessarily better. In recent years, the publishing imperative has resulted in the publication of so many articles that it’s impossible for any scholar to keep up with her field by reading all the relevant ones. It can also push some people to overhype their work, introduce bias, work carelessly and, sometimes, commit outright fraud. Stuart Ritchie outlines these problems in his excellent book, Science Fictions. For example, he describes one practice he calls “salami slicing”: chopping up what could be a single paper into multiple papers and publishing them separately.
Measuring Impact
Academics are also expected to show that their publications have had an impact, generally by pointing to the number of times those publications have been cited in other works. This, too, creates perverse incentives. For example, peer reviewers sometimes try to increase the number of times their own work is cited by demanding that an author cite it in her paper; some journals in specific disciplines will not publish a paper unless its authors agree to include citations to articles previously published in that journal; and some academics attempt to boost citation counts by citing their own previous publications. (There are legitimate reasons to cite one’s own work, for example, to reference a more detailed discussion one has undertaken elsewhere, but I do not count my self-citations when I apply for a promotion or a grant.)
Nor is the number of citations a piece receives necessarily an accurate proxy for its impact, as Neil Duxbury has observed in Jurists and Judges: An Essay on Influence. For example, Ronald Coase produced the “most cited” law review article ever published, but subsequently noted that many of the citations attacked his views. Some pieces are cited for routine purposes rather than because they establish something impactful. And, since a piece becomes more visible once it has been picked up, being cited tends to beget more citations (the “Matthew effect” of accumulated advantage).
Predatory Publishers and Pricing
An unfortunate result of the perverse incentives created by the publication imperative is the proliferation of certain journals often referred to as predatory publishers. After a piece has been accepted, these journals demand that the author pay them a fee to publish it. They exercise little or no quality control over which papers they accept; some have unwittingly accepted nonsense papers that were hoaxes designed to demonstrate their lack of quality control. Every few weeks, I get an email inviting me to submit to journals of which I’ve never heard. I tend to suspect that most are from predatory publishers. I have not fallen for these invitations, but some academics do.
Which Publications Count as Evidence of Academic Success?
Another problem is that, even though practitioners need scholars to summarise the current state of the research in their field, scholars are disincentivised from producing these works because universities and grantmakers often don’t count them in measures of productivity. In my field, practitioners and courts of law often consult legal textbooks and articles in professional law journals (as opposed to academic law journals). However, academics tend not to value such works when judging a scholar’s research contributions, unless they appear in top international journals or are published by prestigious international publishing houses. And when academics count up a scholar’s publications, they exclude textbooks on the grounds that their material is not sufficiently cutting edge to count as research. This is surprising, because the law constantly changes, and practitioners regard such works as extremely useful for keeping up with those changes.
In order to produce work eligible for publication in prestigious international journals, which could therefore count as an academic publication, I have become conversant with the law of several common-law jurisdictions besides that of my home base, Australia—including England, Singapore, Hong Kong, India, New Zealand, the United States and Canada. Luckily, I enjoy learning about comparative law, but what of those who wish to focus only on Australian law? Their options are more limited.
There is a disjunct between what the Australian legal profession wants (articles that focus on Australian law, and clearly written, reliable textbooks) and what academics need to produce in order to get research grants and advance their careers (novel theoretical analyses focusing on international jurisdictions). There is surely room for academia and grantmaking groups to value both kinds of work product: each has its own role and importance.
Academics’ career success depends upon other academics who assess the value of our publications. This risks inadvertently incentivising the formation of cliques: academics with a certain view may tend to promote only the publications of other people who share that view. And, as Ritchie notes in Science Fictions, if academic journals are colonised by cliques whose editors and reviewers seek to advance a particular view, it becomes difficult for a scholar with a different view to get published in those journals, and sometimes impossible to get published elsewhere in a journal of equal repute.
Journals Are Founded on Voluntary Labour, Yet Often Paywalled
Academics aren’t paid for journal articles, nor are those who review prospective articles. Often, the editors of journals are also unpaid. The only reward is more roles (and work!) or more publications to add to one’s CV. Thus, the system depends heavily on voluntary labour. And yet many journal publishers charge readers a significant fee for access to their academic articles. While there has been a move towards providing open access to some journal articles, it’s by no means the norm. Thus, academics are producing research intended for the public good, and are being incentivised to produce large amounts of it, yet that research is often unavailable to the public who, as taxpayers, often paid for it in the first place through government grants.
The Peer Review Process
In addition, the peer review system is at least partly broken. But even if all peer reviewers evaluated papers objectively and without being influenced by personal bias (which is far from the case), the system would still be flawed: because of the sheer volume of submissions, journals have to reject so many articles that they can’t use quality alone as a criterion for rejection. This gives them an incentive to reject articles on the slightest of pretexts.
We need to rethink the way in which academic publishing works, and the incentives that universities and grantmakers create through their review processes. More publication is not necessarily better.