The following is the first of a two-part essay on publication practices in biology.
“In the first century after the publication of Vesalius’s anatomy and Copernicus’s cosmology (1543), a set of values [emerged to define] science: originality, priority, publication and … the ability to withstand hostile criticism.”
“[At The Frankfurt Book Fair (from 1564), which] accelerated the growth of an international trade in books, … , good facts drove out bad.”
“There can be no replication if there is not some form of publication, or, at least, communication.”
– David Wootton, The Invention of Science, 2016
Science as we know it today – as a practice that uses reason and experimentation not merely to fill gaps in received wisdom but to challenge it, and to make new discoveries that equal or approximate truth – has its origins in Renaissance Europe. This is not to deny that such efforts occurred before the observational and theoretical contributions of the Europeans of that age. Rather, it highlights the fact that a combination of factors, most notably the invention of the printing press and the wide availability of its products in Europe, led to the rapid dissemination of ideas. These ideas were endorsed and criticised in turn, powering a sudden outpouring of discoveries that often refuted biblical wisdom and Aristotelian philosophy.
The importance of publishing scientific experiments and discoveries is exemplified (as David Wootton notes in the book quoted above) by the discovery of the law of falling bodies by the astronomers Galileo and Thomas Harriot. Harriot “kept his results to himself”. Galileo, however, published his results in 1632, and this was transformative. Aristotelian philosophy had held that heavier objects fall faster than lighter ones, and that all objects fall at a constant speed. Galileo’s theory contested this: all objects accelerate as they fall, and a rock and a feather dropped simultaneously from the same height under ideal conditions (that is, absent complications such as air resistance, which affects feathers far more than rocks) would hit the ground at the same time.
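Galileo’s claim can be stated compactly in modern notation (which Galileo himself did not use): the distance an object falls from rest depends only on the time elapsed, not on the object’s mass.

```latex
% Uniformly accelerated fall from rest, in modern notation:
% g is the gravitational acceleration (about 9.8 m/s^2 near
% the Earth's surface) and t is the elapsed time.
\[
  d = \frac{1}{2}\, g\, t^{2}
\]
% The object's mass appears nowhere in the formula: in a
% vacuum, a rock and a feather dropped together fall the
% same distance in the same time.
```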
Many people attempted to replicate Galileo’s findings by performing these experiments (which are not easy, as anyone trying to drop a stone and a feather together to rediscover Galileo would realise), and over the centuries gravitation came to be recognised as a critical force that keeps the universe, and our world, intact.
In biology, Robert Hooke published Micrographia (1665), reporting the first observation of microscopic fungi. The Dutch lensmaker Antonie van Leeuwenhoek reported his microscopic observations of minute protists and bacteria in a series of nearly 200 letters to the Royal Society of London (the first dated 1673). Prior to publication, Leeuwenhoek showed his observations to a physician, Regnier de Graaf, who, writing to the editor of the Philosophical Transactions of the Royal Society (the world’s first scientific journal), endorsed Leeuwenhoek’s microscopes as instruments “which far surpass those which we have hitherto seen”. These ‘revelations’ were forgotten – only to be rediscovered in the 19th century in the context of the emerging cell theory of life forms, which underlies much of modern biology.
Science attempts to establish truth, and a central element in doing so is replication. The replication of a discovery is attempted not only by the scientist responsible for it but also by others interested in it. This practice leads either to the establishment of a finding or to its criticism and perhaps eventual rejection. Critical to this exercise is publication. Not only does publication enable the immediate dissemination of ideas; it also serves as a (mostly) indelible record of events, which may find traction years, or even centuries, after the demise of its original authors.
The publishing process
What does the publication of a research paper in biology involve today? How does it communicate results to the community of biologists and other scientists? What are its implications for the understanding of scientific results by the general, literate public?
Any self-respecting publication in biology would involve the following: a series of experiments or theoretical studies resulting in a new finding or discovery. These are written up by first explaining their context, then presenting the results and their likely place within the pre-existing literature, and finally speculating on what the findings might eventually lead to. Everyone has her own style of writing, and this is largely respected.
Unsurprisingly, this style has changed over the decades. The 19th-century Origin of Species, by Charles Darwin, is a book written in the florid style of a scientific memoir. The classical microbiology literature of the mid-twentieth century adopts a rather elaborate, descriptive style that should appeal to undergraduate students today (but is unfortunately only rarely used in colleges). Modern papers are dense, often packed with more information than is strictly necessary, and rich in jargon. A short 3-4 page modern paper in molecular biology can take far longer to comprehend than a 30-40 page mid-20th-century publication.
Once written, manuscripts are sent around to close colleagues for comments, which are considered and incorporated where appropriate. The manuscript is then submitted to a journal for consideration. One or more editors at the journal check whether the scientific area of the manuscript fits the journal’s scope. Some supposedly ‘high-impact’ journals make subjective calls on whether the manuscript describes a finding that is ‘important’ and of ‘sufficiently broad interest’. Many journals are commercial entities, and such ‘editorial decisions’ regarding importance and interest are often not a reflection of the quality of the science in question so much as a way of publishing work that fits the journal’s marketing profile.
Manuscripts that pass these requirements are sent to a few experts in the field. Each journal, and each editor within a journal, will have her own philosophy on how a manuscript should be reviewed, and this is reflected in the choice of referees. These referees are normally anonymous to the authors. Based on the referees’ comments, the editor decides the fate of the paper.
On rare occasions, a paper is accepted as is. Other papers might require textual revisions; still others, new experiments demanded by the referees. Yet others are rejected, and may be submitted and eventually accepted for publication at another journal. This process is called journal-initiated ‘peer-review’ and is considered the gold standard for evaluating the quality of a manuscript. Once accepted, the manuscript enters a production phase, where it is typeset, proofread and published. The whole process can take a while, sometimes over a year!
Getting a paper published in this manner is central to scientific careers. PhD students in India are expected to have published at least one such paper as the first named author before becoming eligible for their degrees. Promotions at higher levels are often determined by a scientist’s publication record. But whether the process is as important to the progress of science itself is unclear. Why might that be?
Among the many nuanced arguments against the present model of journal-mediated peer-review are limited sampling and inconsistency among referee reports. Often, a manuscript is looked at by only two or three referees. Is their verdict a good enough judgement of the work? It may be, to an extent – as long as the referee reports agree with one another and editorial judgements based on them are fair and justified. But this is not straightforward. Referee reports often conflict with one another, and the decision to publish or not is then the editor’s call, much like the ‘umpire’s call’ in cricket’s DRS. Here again, the editor’s worldview of what constitutes a valid publication can determine the paper’s fate.
Woe betide the author whose philosophy on these matters differs from the editor’s! Here are excerpts from two referee reports I received for one of my own papers:
Referee 1: “The paper is extremely well-written with good data and cogent arguments. I would recommend that it is accepted for publication without changes.”
Referee 2: “The manuscript not only lacks supportive functional data but critical controls … are also missing. In the present form, it is not suitable for Journal of XX, which requires technically clean experiments along with some additional functional data to substantiate the claims.”
Imagine the conflict the editor’s mind must have gone through before initiating the processes that eventually resulted in the paper being published. This is certainly not a one-off example.
Peer-review is not always free of bias. An author’s institutional affiliation can be a turning point in whether a referee likes a paper or not. For example, Richard Smith (a former editor of the British Medical Journal), in his review of peer-review, quotes a study that took 12 papers published in leading psychology journals by authors from reputed institutions and resubmitted them to the same journals after making a few cosmetic changes and switching the author affiliations to invented institutions such as the “Tri-Valley Center for Human Potential”.
For nine of these papers, the journals did not recognise that they had already published them; and of these nine, eight were rejected on technical grounds! Smith mentions that he himself wanted to reject a paper by Karl Popper, the great philosopher of science, but could not: “… the power of the name was too strong”. In this case, his original intuition was thankfully wrong.
More importantly, the process is time-consuming. Even with email speeding up communication among authors, editors and peer-reviewers, it can be a year before a submitted paper is finally accepted for publication in a journal of ‘repute’. In the modern day, when technology has accelerated research manyfold, an important manuscript lying in cold storage for a year can be a criminal waste of time and taxpayers’ money. Such delays can also play havoc with the careers of unlucky, over-running, poorly paid yet highly talented PhD students.
At the end of the day, does peer-review help weed out bad science? It certainly produces more reliable reading material than a free-for-all in which everyone lays bare her deepest thoughts in writing for all to see. Nevertheless, a cursory glance at the website Retraction Watch will show that many fraudulent or innocently wrong papers make it past journal peer-review, even in major journals. And this list represents only a fraction of the research that would not have passed a (probably non-existent) foolproof system of evaluating science. In a systematic study, the British Medical Journal deliberately inserted errors into manuscripts, which were then sent out to many referees. To quote Richard Smith again, “nobody ever spotted all of the errors”.
Thus, journal-mediated, limited peer-review may be better than nothing – but it is time-consuming, inconsistent and, ironically for science, has little – if any – evidence supporting its contribution to selecting good science and eliminating bad. It had a role to play when journals were all in print and competing for subscription real estate, but today it may be little more than a vestige of the print era.
Part II will take a look at new, internet-savvy ways in which biological research is being reviewed and published.
Aswin Sai Narain Seshasayee runs a laboratory researching bacterial biology at the National Centre for Biological Sciences, Bengaluru. Beyond science, his interests are in classical art music and history.