Do journals do a good job of finding appropriate peers to review papers? Are editors always in the best place to decide the fate of a paper based on a severely limited sampling of peer reports?
The following is the second of a two-part essay on publication practices in biology. The first part dealt with peer review’s origins and its role in weeding out bad science.
Peer review, in which papers written by scientists and submitted to scholarly journals are reviewed by two or three other experts in the field before being published (or rejected), is the gold standard for evaluating science. However, its effectiveness in rapidly disseminating good science and eliminating bad has recently come under the scanner. Gender, institutional affiliation and other human fallibilities introduce unwelcome biases into the peer-review process, raising uncomfortable questions about the very foundations of scholarly communication. In light of these concerns, where does scholarly communication – especially in the life and biomedical sciences – stand today?
In the pre-internet era, journal-mediated limited peer review was certainly better than nothing. Print space was (and is) at a premium, and journals had to find ways to be selective about what gets shown to the world. Peer review is a way to ensure that they publish only what they would like to publish and so establish their reputation as purveyors of gourmet science. However, the internet and cheap digital storage are changing the way papers can be peer-reviewed: by all peers and, in an ideal world, by a literate public. At least two different streams of thought and action in publishing are aimed at this: open access to scientific literature and major reforms of the peer-review process.
Traditional scientific journals are subscription based. Subscriptions to these journals, unlike those to popular magazines, were – and are – expensive, and beyond the pockets of most individuals. Therefore, access to these journals has been through institutional subscriptions. This has meant that only those working in science could read these journals (e.g. through their institution’s library). Even those with sufficient scientific literacy but not pursuing science as a profession could not easily access these journals. Worse, authors and their employers had to give up copyright to the journal publishers. The fruits of science, usually paid for by the taxpayer, rarely reached a large majority of its funders.
Referee reports and opinions
However, in the last 15+ years, a new breed of open access journals has emerged. These make all publications openly available online for anyone to read and reuse. Authors retain copyright. The catch is that authors or their institutions have to pay for publication, and it is quite expensive to publish in these journals. It should be noted here that the ability to pay does not guarantee publication in legitimate open access journals.
It has been argued that the cost of publishing a paper in an open access journal is generally less than half what it would cost to access the article by subscription. With many countries and funding agencies mandating that taxpayer-funded research be made available in open forums, these publication costs are also taken care of. Money matters aside, there have been both gains and losses from a purely science communication perspective.
The most obvious advantage of open access is that it enables all interested parties – with access to the internet – to read, appreciate and criticise what they want. Presumably as a result of its potential for wide reach, many open access journals also require the authors to write a ‘layman summary’ of the paper, highlighting the importance of their work. One can always have a debate over the quality of these descriptions, but there’s no denying that open access has delivered an incentive for scientists to break away from obtuse jargon and make their work intelligible to the wider world.
An unfortunate consequence of the open access movement is the mushrooming of what are called predatory journals, many of which are fly-by-night operators, in which anyone can pay to publish. Little that is published in these ‘journals’ passes muster. While mainstream science at the best research centres and universities across the world has very little to do with such publishers, it is rather unfortunate that many researchers in the country’s universities often fall prey to the charms of publishing in such venues. The rest of this article concerns scientists who feel no pressure to be so gullible, or to stoop to publishing substandard or even fraudulent work in these places.
Many open access journals, published by legitimate publishers and leading scientific societies, have started experimenting with alternative peer review models.
Arguably the loudest argument against the predominant model of journal peer review is its lack of transparency. Papers get published and scientists often wonder how something passed peer review. How did the referees miss this or that? Something fishy must be going on. The journal must have found inappropriate referees or had other compulsions to publish the paper. These criticisms for the most part go little beyond gossip-column conversation, but they do express genuine feelings, justified or not. Secondly, reviewers are generally anonymous to authors, though most authors claim to have done some detective work to figure out who said what in their referee reports. It has been argued by many that anonymity is a cloak behind which vindictive referees hide to write reports that demand the world of the authors, or that spew vitriol. If the editor takes such referee reports at face value, they could signal a paper’s death at that journal.
To address the issue of transparency, several journals (including leading ones like EMBO Journal and newer ventures like the high-profile eLife and the under-the-radar PeerJ) have instituted the policy of publishing referee reports, the authors’ responses to these comments and the editor’s decision along with the paper (in some cases only with the permission of the authors). This provides readers new perspectives from which to inspect and interpret the evidence presented in a paper. A few journals mandate, or at least encourage, referees to sign their reports. The idea is that having their names published along with their reports, by offering them a degree of ownership over the published article, will encourage referees to be careful and constructive in their criticism. A few journals (PeerJ for example) even assign a digital object identifier (DOI) to each referee report, thus enabling future papers to specifically cite these reports. This adds value and recognition to the referee’s work.
We saw (in Part I of this essay) that editors often deal with conflicting referee opinions. This is largely because papers are generally complex, attitudes on what constitutes good and solid science are diverse, and it is natural that opinions differ on the worth and validity of a paper. Each referee operates in her own silo and does not communicate with others reviewing the same paper. In fact, the identity of each referee is unknown to everyone except the editor and the journal’s editorial staff. The recently launched journal eLife, edited by Randy Schekman, besides publishing referees’ reports, also encourages discussion among referees and aims to provide a single consolidated report to the authors, which should make authors’ lives easier. It is understood that eLife also pays its referees, which is novel at least in biology – where scientists referee papers for little material advantage.
New journal models
Peer review in letter and spirit is not a bad thing. The best way – however riddled with flaws it may be – to evaluate something as complex and as specialised as modern science is by peer review. The question is whether peer review mediated by journals and their editors is the way it should be done. Do journals do a good job of finding appropriate peers to review papers? Are editors always best placed to decide the fate of a paper based on a severely limited sampling of peer reports?
A little over ten years ago, leading experts in bioinformatics and evolutionary biology, Eugene Koonin and David Lipman (also the Director of the National Center for Biotechnology Information at the National Institutes of Health, the leading repository of all public biological data) launched a journal called Biology Direct. This essentially took everything except copy-editing and publishing out of the journal’s control and placed the responsibility of getting a paper peer-reviewed and accepted for publication in the hands of the authors themselves.
For any paper to get accepted, its authors have to get three scientists in their field to agree to referee the paper. If the authors can do this, the journal has no problem publishing the paper, irrespective of the referees’ opinions. The catch, of course, is that all referee reports and author responses will be published, and if the authors do not feel comfortable with this, the paper is automatically rejected. The names of the referees will also be public, which is a disincentive for authors to seek undeserved pats on their backs from their best friends acting as referees. The journal has been motoring along rather nicely and, consistent with the scientific profile of its patrons, has found some traction among bioinformaticians and evolutionary biologists.
The American Society for Microbiology partly replicated the Biology Direct model and launched a new journal called mSphereDirect (within the ambit of mSphere, which follows the traditional peer-review model) just a couple of weeks ago. Here again, authors choose their referees, get in touch with them, get the paper reviewed and address referee comments to the best of their abilities. Then they submit the paper along with all relevant refereeing documents to the journal.
The journal believes that editors do have a role to play in ensuring the publication of quality science. Therefore, it deviates from the Biology Direct model by placing the responsibility of accepting a paper for publication – taking into account all referee reports and the authors’ responses to these reports – in the hands of the editor(s). We await the publication of the first papers in this journal. It should however be noted that this model is not new to this publisher, which has for a while been offering author-directed peer review as a privilege to certain leading scientists elected as Fellows of the American Academy of Microbiology, one of the manifestations of a ‘VIP culture’ in science. mSphereDirect should be seen more as an effort towards the democratisation of a privilege.
Need for journal-mediated peer review
The internet has transformed our lives, and has also transformed the way science is published and reviewed. Most modern journals allow readers to comment on articles online; never mind that there are very few takers. We often find scientists expressing their 140-character-long opinions on papers they like (and dislike) on social media. In fact, many journals display Altmetric statistics, which show how often a paper has been mentioned on social media or on blogs. While these ensure that science gets reviewed broadly after publication, they lack depth.
Dedicated ventures for post-publication review include what is called the Faculty of 1000 (F1000), in which a group of scientists review, rate and recommend papers that have been published. Scientists whose papers have been recommended on this forum often highlight the recognition. However, this is still limited peer review – indeed more limited than traditional peer review, in the sense that most papers, irrespective of their quality, would never be read by the F1000 group. Therefore, not being discovered by F1000 is no indication of a paper’s quality or importance.
Similarly, a website called PubPeer publishes post-publication reviews of papers. It has been in the news because reviewers on this forum have detected malpractice, such as fraudulent image manipulation, in several high-profile papers. This has also brought lawsuits against the website. Referees are anonymous and this has been criticised by many.
Finally, do we need journal-mediated peer review at all? Most self-respecting scientists discuss their work with, and show their paper drafts to, their peers, and to the extent possible address concerns raised during this process. The only way to show that journal-mediated peer review improves science is by demonstrating that papers published in peer-reviewed journals are identifiably better than those that are not peer-reviewed. This is a tall order, and such evidence does not exist. The National Institutes of Health’s PubMed database indexes over 30,000 peer-reviewed biomedical journals – a large number, spanning all shades of grey in quality. Every issue of any journal of repute will carry papers of differing standards (judged subjectively, of course).
If a practising scientist were to be shown two scientific articles, one peer-reviewed and the other not, would she be able to identify which one was peer-reviewed and which one was not? This question is best exemplified by the following sentences drawn from Richard Smith’s article on peer review (referred to in Part I):
Robbie Fox, the great 20th century editor of the Lancet, who was no admirer of peer review, wondered whether anybody would notice if he were to swap the piles marked ‘publish’ and ‘reject’. He also joked that the Lancet had a system of throwing a pile of papers down the stairs and publishing those that reached the bottom.
When I was editor of the BMJ I was challenged by two of the cleverest researchers in Britain to publish an issue of the journal comprised only of papers that had failed peer review and see if anybody noticed. I wrote back ‘How do you know I haven’t already done it?’
So, can legitimate science be published and not be peer-reviewed? Biology took a leaf out of physics’ book when the journal Genome Biology launched a pre-peer-review article deposition service over 15 years ago. The idea behind these ‘preprint’ servers is to ensure that the results of scientific endeavours are not held hostage by delays inherent to journal-mediated peer review. Not only does this ensure rapid access to the fruits of science, it also enables scientists to take credit for their work before delays in journal peer review can allow competitors to “scoop” them. Genome Biology was ahead of its time for biology, and the effort failed to take off and was shut down. But preprint servers in biology have found a second wind in the last couple of years with the launch of bioRxiv (mirroring physics’ arXiv). Tens of papers are deposited here every day and, for many biologists, this has emerged as a major resource for finding scientific literature relevant to their interests.
Attitudes to publication in biology have undergone, and are undergoing, sea changes. Many biologists still put faith in the idea that publication in high-impact (as measured by contentious metrics) interdisciplinary journals is the high mark of success in science. Many others disagree, arguing that the intrinsic value of any scientific effort cannot be determined by where it gets published. Science and the efforts of its practitioners should be evaluated by what is reported – whether it be a well-defined ‘scientific story’ or a bunch of eclectic new data and observations that may come into their own at some point in the future – and not where it is published.
Aswin Sai Narain Seshasayee runs a laboratory researching bacterial biology at the National Centre for Biological Sciences, Bengaluru. Beyond science, his interests are in classical art music and history.