
How Cut-Throat Competition Forces Scientists to Act Against the Collective

The paucity of three types of resources in the hierarchy of scientific research has made the competition for grants and rewards cut-throat and the research enterprise flawed.

Brian Keating, an astrophysicist who led the infamous cosmic inflation announcement in 2014, thinks this is how science works: “… you put out a result, and other scientists work to test the result”. However, his own story shows that this is a cute ideal that’s often unreasonable to expect on the ground: scientists are often not putting out results and expecting others to test them as much as rushing to announce results to scoop others, even if they don’t yet have enough data to support their claims.

In fact, many supposed truisms about science are put to the test every day, but we choose to ignore the results because it would be easier if science were “self-correcting” and “objective”. The alternative is to confront the very real possibility that science’s autocorrect works across decades, not years, and that the truths it uncovers are objective only insofar as the scientists pursuing them are not preoccupied with what will get them published, famous, well-funded and rewarded.

This malaise is not specific to Keating and his team; it applies to all scientists because these ambitions are motivated by a flawed administration of science, increasingly emulated around the world as states rush to increase their scientific “output”. And it is when you compare the methods of this administration to what people think science is that you realise you’re practically harbouring a cognitive dissonance.

Now, Keating was working with a single team of scientists (which he was leading) on a single instrument, the BICEP2 telescope near the South Pole. Many major discoveries of this century, on the other hand, are expected to come out of ‘Big Science’ collaborations: teams of people working together to study a natural phenomenon and obtain a common result, and other teams working to replicate that result. The Large Hadron Collider (LHC) is a famous example. The collective faculties of its 3,000+ scientists and engineers are necessary to operate the machine and its detectors, and to analyse the data and present meaningful results.

However, the LHC has always been an ‘easy’ example, one that lets the person quoting it off from having to grapple with the numerous other collaborations that don’t work the way the LHC’s does. Every time scientists at CERN, the European laboratory that hosts the LHC, write a paper based on LHC data, the names of all the people involved in the author’s experiment are listed as coauthors. For example, in 2015, the CMS and ATLAS experiments at the LHC jointly published a paper, presenting a more precise measurement of the Higgs boson’s mass, with 5,154 authors.

The members of the Planck space telescope team, on the other hand, had refused to share data with the BICEP2 team, for whatever reason. Keating had reasoned, “Either [Planck] didn’t have the data we wanted, or they did have it and they were going to scoop us.” The Planck team would release the relevant parts of the data later that year, but in the meantime, Keating et al courted public adulation in an effort to cement their candidacy for a Nobel Prize.

Two sources of conflict

This ‘pursuit of the scoop’ is fascinating because it describes a vector of action within scientific research that most of us typically don’t account for, yet one that seems to influence the outcomes and communication of research in significant ways. Researchers believe the Nobel Prize is the highest honour, but the system is not efficient enough to recognise their every effort in its right context, so they decide their only option is to take risks and cut ahead.

Last week, science journalist Jennifer Ouellette described another arena where a quarrel for credit had been unravelling the same way it did with the BICEP2 experiment.

In August 2017, the twin Laser Interferometer Gravitational-wave Observatories (LIGO) and 70 telescopes around the globe tracked a neutron-star merger. It was the world’s first demonstration of multi-messenger astronomy, where multiple instruments study the same phenomenon in multiple channels (electromagnetic and gravitational) to understand its evolution through different laws of physics. The results of the studies were announced by the LIGO Scientific Collaboration in October 2017 with much fanfare (warranted because neutron-star mergers are spectacular in many ways).

Between August and October, however, the portion of the astronomy community caught up in the analysis and follow-up observations was going nuts. The merger had been ‘observed’ through three events that were detected thus: gamma rays by space telescopes, gravitational waves by LIGO and the kilonova explosion by ground telescopes. LIGO has had a habit of checking its observations repeatedly before making an announcement to the public – while, according to Ouellette, astronomers have gone the other way, having had no reason to wait before being able to claim a discovery with sufficient confidence. This was one source of conflict.

The other source was the familiar one of primacy. All members of the collaboration had been keenly aware that Kip Thorne, Barry Barish and Rainer Weiss had received the Nobel Prize for physics in 2017 following LIGO’s first announcement of the detection of gravitational waves in 2016. And the members wanted to make sure their contributions to the final announcement were properly acknowledged so they would remain in contention for future rewards, of which there were potentially many.

Ouellette writes,

According to [Josh Simon, an astronomer in Chile], things got messy after he and his colleagues spotted the kilonova and identified the host galaxy. Five other teams detected the event in their images within the next hour, and it wasn’t clear whether those teams spotted the kilonova before or after Carnegie’s announcement. This in turn sparked a lively debate about how much credit the subsequent teams should receive. …

The debate over credit extended to who should be listed as authors on the primary omnibus paper describing the discovery. LIGO made a good-faith effort to be as inclusive as possible, but hackles were raised over how the collaboration defined what constituted a “unique” contribution or discovery. In the end, the omnibus paper had two tiers of co-authors. The first included the six groups deemed “the discoverers,” with the second tier comprised of those who did the follow-up work and analysis. Even so, “there were a lot of people in that second category who thought they should have been in the first category because they did make a first or unique contribution,” [LIGO spokesperson David Reitze] says.

We have frequently derided Indian ministers for being so obsessed with the Nobel Prize – it is a silly obsession – but the LIGO and BICEP2 tales demonstrate that as a feature it is not unique to India or China. Everyone wants a Nobel Prize.

The Nobel intent

However, there are two different cultures at work here, even though the followers of both kneel at the same altar. In India, for example, ministers dream of Indian scientists winning a Nobel Prize, but their actions haven’t always been consistent with their desires. In the US, for another, the infrastructure for good research is already in place, but unless operational expenditure is increased, the community will undeservingly suffer the effects of overcrowding. As Ouellette said,

… astronomers tend to cluster in smaller, independent groups, and they are fiercely competitive, vying both for limited funding and for precious time on the world’s limited number of telescopes. Being first to report a breakthrough observation is hugely important to most astronomers. (emphasis added)

Such spending is also tied to the US’s view of itself as the world’s “leader” in scientific research. American scientists have won the most Nobel Prizes in the last century – but that was a century when the US was truly the research leader. It has dropped the ball of late, and the effects will surely show up in the Nobel Prize count a few decades from now.

Of course, in both cases it must be acknowledged that the Nobel Prize symbolises power. For the individual, it is power in the form of acknowledgment of work. For the institute, it is power in the form of access to funds. For the country, it is power in the form of prestige. It hasn’t mattered if the way the prize is awarded is flawed; the cultural and historical cachet it still carries is astounding, prompting two weighty collaborations to almost unravel in its pursuit. We must acknowledge that this is how science works. There are likely other team efforts worldwide where individual desires have superseded community goals.

This also does not make LIGO’s way of doing things better – or even the LHC’s, for that matter. A habit of giving everyone on the experiment credit means a person who actually did the work relevant to a paper’s results gets the same amount of credit as a fresh PhD student who didn’t. Peter Coles, a theoretical cosmologist at Cardiff University, has called this “absurd”. Panjab University used this flaw to its advantage to climb global university rankings because its scientists had been listed as coauthors on numerous papers published by LHC experiments.

It is clear that the ultimate fix to this problem will have to ensure that all work is properly acknowledged and, if necessary, rewarded. David L. Clements, an observational astrophysicist at Imperial College London, commented on Coles’s post, “More permanent contracts, a less publication-fixated funding environment, and more money in the field, reducing the level of cut-throat competition, would help, but can you realistically see any of that happening?” It is becoming clear that the essential animus of a hyper-competitive environment is rooted in the scarcity of three types of resources at certain levels in the hierarchy of the scientific enterprise: evaluators, methods to ensure fairer evaluations, and acknowledgments (assuming we already have the resources to conduct research).

The more evaluators there are, the more evaluations can happen (vertically, horizontally or both). They must not rely completely, or even over-rely, on reductive proxies like the impact factor or the h-index to judge candidates’ performance. Instead, they must be able to afford (and not merely be blindly expected to perform) qualitative assessments, such as speaking to a candidate’s supervisor to understand her contributions better and assessing her work by actually reading her papers. Once evaluations are complete, deserving candidates must be rewarded, with suitable rewards made available in a timely manner. Without these measures, participants in a competition will have few reasons to believe it will be fair or empathetic.

A recent Twitter conversation between Mukund Thattai, a biologist at the National Centre for Biological Sciences, Bengaluru, and Shailja Gupta, an adviser and scientist at the Department of Biotechnology, teased out the nuances of this issue. In particular, it highlighted the need for a two-way collaboration between the scientific community and the Government of India.
