“I believe that both researchers and the public are most affected by non-open science practices and irreproducible research.” These are the words of Joshua Wallach, who evaluates research practices at the Yale School of Public Health, Connecticut.
In the last few years, there has been a great push towards making science more open and accessible. One reason is that openness can serve as a check: many people, scientists included, believe that researchers who have to make their raw data publicly available will scrutinise their results more carefully. This could make their work better than it would otherwise have been.
To find out whether the scientific literature showed signs of such openness, Wallach and two of his peers recently reviewed hundreds of papers published in two different periods. The results were promising. “We were very pleased to see some improvements in terms of transparency and reproducibility from 2000-2014 to 2015-2017,” Wallach told The Wire.
Their findings were published on November 20, 2018.
John Ioannidis, who teaches medicine and statistics at Stanford University, California, has been a vocal proponent of making scientific research more credible. Among other things, he has asked scientists to share their data with their peers to ensure their results are reproducible.
In a famous essay he published in 2005, Ioannidis reasoned that most published research findings were false. One reason, he wrote, was the lack of independent replication studies addressing the same research question, leading to prevailing biases in certain areas of study and conflicts of interest. He interpreted this as a waste of scientific efforts. A 2014 analysis he coauthored estimated that 85% of the biomedical literature was afflicted this way.
Wallach was joined by Ioannidis in the new study. Its third author is Kevin Boyack, who works at SciTech Strategies, New Mexico, a private company specialising in scientometrics.
To understand how reproducible or transparent the published biomedical literature was, the authors banked on an older study Wallach and Ioannidis had participated in. It had reviewed 441 papers published between 2000 and 2014. They had found that none of these provided raw data directly. Only one study provided the complete study protocol. Finally, most papers had nothing to say about sources of funds or conflicts of interest.
Using these papers as a baseline, the trio examined whether biomedical papers published between 2015 and 2017 were more transparent and/or reproducible. They checked for funding sources, conflicts of interest, availability of raw data, use of study protocols and results that had undergone independent verification. Wallach admitted that these parameters were not perfect indicators, but he also said they provided a good sense of whether the papers intended to be open.
Of the 150 or so papers the authors assessed, nearly a third did not report funding sources or conflicts of interest. Of the rest, a little more than half reported that there were no conflicts of interest and about 7% indicated conflicts of interest. The authors considered this to be an improvement over papers published in 2000-2014. In that set, about half the papers did not mention any sources of funds and 70% did not include statements about conflict of interest.
Another third of the papers published in 2015-2017 also included supplementary materials such as experimental methods, videos and images, although none enabled a replication of the complete protocol. About 20% of the articles pointed to publicly available data, compared to only 2% of studies in the older cohort. Only about 5% were replication studies, trying to validate previous results, compared to about 2% of the papers published in 2000-2014.
These results are not particularly surprising, according to Marcus Munafo, a biological psychologist at the University of Bristol, UK. He studies the robustness and reproducibility of scientific research. “I think there is a general trend towards greater transparency, and a greater focus on transparency, but things change slowly,” he said.
“However, it’s worth bearing in mind that there are trade-offs – we could ensure reproducibility by taking fewer risks, and that wouldn’t be optimal,” he added. “So we need to get the balance right: ensuring a degree of high-risk/high-return work, but not without a degree of effort put into replication, confirmation and so on.”
Many efforts are being made to improve scientific research’s transparency and reproducibility. These, at least in part, could be driving improvements seen in the new study. Different stakeholders – individual researchers, journals, funders and the public – have a role to play here.
At the level of individual researchers, initiatives like the Open Science Framework encourage data-sharing and opening up study plans. At a larger level, some journals and funding agencies have started requiring scientists who submit papers to also share their raw data and disclose their funding sources and any conflicts of interest.
Another initiative, called Registered Reports, peer-reviews studies before the results are known, to eliminate biases and poor research practices and to encourage better analysis of the experimental data itself.
“I personally think we’re in an exciting period where scientists are reflecting on how science is done (and using scientific approaches to understand this), and experimenting with new approaches,” said Munafo. “Ultimately, however, what we need is a cultural change where we better align incentives so that what is good for scientists is also good for science.”
Lakshmi Supriya is a freelance science writer based in Bengaluru.