I have often asked my students, “To what extent does government funding contribute to research, leading to the development of new medicines?” Then I give them four choices: 1%, 10%, 20% and 100%.
At a recent conference, an expert extolling the virtues of intellectual property, including how it promotes innovation, concluded his lecture saying, “A patent is like your house” – something you own, protect and don’t just give away.
When I was a student many years ago, I learnt about patent protection – especially how it rewards the innovator who comes up with a novel idea. However, our teachers, as well as other resources, including the text of the TRIPS agreement, did little to distinguish between intellectual property rights and monopoly rights.
It was around this time that the world was embarking on a computer and smartphone revolution, with CEOs explaining in videos how their smartness and creativity added value to the market, and how – along the way – they had become the wealthiest people on the planet.
In medicine, pharmaceutical companies are among the most vociferous supporters of the patent system. They claim that a patent protects their right to profits not simply because they came up with a new idea but because they undertook lengthy, expensive and risky research to bring life-saving medicines to the market. This is only fair.
But it is not the complete story.
Take a closer look at computers and smartphones. Many of the technologies used in these products actually originated in the Pentagon or in studies funded by the US Department of Defense. However, from the 1950s, when computer engineering began to gather pace and profits were in the offing, what did the state get?
In 1951, the Eckert-Mauchly Computer Corporation in Philadelphia became the first company to market a computer commercially. But more refinements were required, which scientists at the Institute for Advanced Study in Princeton, New Jersey, achieved when they built the IAS computer in 1952. Based on the IAS machine, IBM came up with the first mass-produced computer, and Apple and Microsoft later followed suit.
The history of other critical components of the IT revolution, many of which were also used to make computers, follows a similar path. High-density disks, RAM, Wi-Fi, touchscreens, lithium-ion batteries, etc. were funded by the state but accrued wealth to private entities.
The pharmaceuticals industry has been no different. Modern drug development requires scientists to first identify a target in the body, search for chemicals that can interact suitably with that target, test them in silico, in vitro, in animals and finally in humans, and then apply for the national drug regulator’s approval.
Almost all medicines marketed today are company products. According to one estimate, companies spend around $2.5 billion to create a single drug.
Over the years, various experts have examined the contribution of public funding to each of the steps of drug development. One analysis suggests that between 1990 and 2007, 9.3% of new medicines were first patented by public-sector institutions, and subsequently licensed to pharmaceutical companies.
Another study showed that nearly 50% of all medicines approved between 1988 and 2007 “were associated with a patent that cited prior art generated in the public sector”.
In the most recent dataset, published last year in the Proceedings of the National Academy of Sciences (PNAS), scientists took a closer look at funding by the US National Institutes of Health (NIH) for 210 new drugs approved by the US Food and Drug Administration between 2010 and 2016. In their words, “NIH funding contributed to every one” of the new medicines. So there’s the correct answer to my multiple-choice question: 100%.
The scientists concluded: “NIH contribution to research associated with new drug approvals is greater than previously appreciated and highlights the risk of reducing federal funding for basic biomedical research.”
Companies spend 10-12 years conducting research and tests to develop a new drug. The scientists behind the PNAS study discovered that the early research that led to the creation and distribution of the medicines approved between 2010 and 2016 had begun in the 1960s – half a century before.
The common idea is that the pharmaceuticals industry incurs a huge cost by virtue of the inherent risk: out of 10,000 molecules tested, only one is eventually approved. So the one approved molecule effectively has to pay for itself plus all the dead-end molecules, leading to figures like $2.5 billion (Rs 17,815 crore).
However, we are almost never taught that before pharmaceutical companies can build on an existing knowledge base to develop new drugs, many researchers spend decades, and hundreds of millions of dollars of taxpayers’ money, conducting the crucial initial research. This early part is riskier and more protracted, and its chance of success is smaller than one in 10,000.
So, a long period of publicly funded research followed by commercialisation, after the hard part is over. Does it sound familiar?
Many universities around the world include intellectual property clauses in their contracts that entitle them to a substantial portion of the revenue generated by patents if the underlying work was produced using university funds (including salaries). In such cases, there is a mechanism by which the state, in the form of the public university, profits from its employees’ inventions. However, research institutions still earn far less from patents than business enterprises do.
Some experts would also argue that without patents, companies would have no incentive to develop newer and better medicines and medical equipment. This is a fair argument – except that it reserves no share for the state, which enabled the research in the first place.
Why is the state so excluded?
Samir Malhotra works at the Postgraduate Institute of Medical Education and Research, Chandigarh.