Seven IITs have announced plans to boycott the THE World University Rankings this year ostensibly because they are not convinced “about the parameters and transparency” of the ranking agency’s process, according to a joint statement issued by the directors of the IITs at Mumbai, Delhi, Guwahati, Kanpur, Kharagpur, Madras (Chennai) and Roorkee.
However, the seven institutes – proclaiming themselves to be “leading IITs” – are only creating a smokescreen, seeking cover for their poor performance in the global rankings. Nothing demonstrates the hypocrisy of their protestations more than the fact that they have embraced the QS World University Rankings, in which they perform marginally better, even though the QS methodology is at least as objectionable as THE’s, if not more so.
Almost everyone who understands how educational rankings work also recognises that they are imperfect and increasingly little more than marketing ploys for the rankers and the universities, rather than genuine measures of institutions’ relative performance. Every ranking methodology is flawed, and the seven IITs would be justified in rejecting them all, as indeed some highly reputed global institutions do. But cosying up to the QS rankings while rejecting THE’s indicates that their real gripe is their performance, not the methodology against which they inveigh.
Consider four specific complaints they have raised against THE:
- It does not share its data, so it cannot be cross-checked or verified;
- Information about its teaching and reputation surveys is inaccessible and it is not transparent about respondents and their geographies;
- Unlike QS, THE doesn’t seek any inputs from institutions about the respondents to its surveys; and
- Perception about a university accounts for 33% of the total weight.
On their face, these are all legitimate and plausible criticisms of THE’s methods. However, they apply equally, if not more, to QS’s methods. The QS rankings’ data is not publicly available either, and so cannot be “cross-checked or verified”. And if THE allocates 33% of the score to reputation surveys, as these institutes point out, QS allocates even more – 50% – to the same category.
Worse, the only metric QS uses for teaching quality, which accounts for 20% of the overall score and 100% of the teaching performance score, is the student-faculty ratio. That’s right: an institution’s teaching performance in QS is judged solely on the basis of the number of students per faculty member.
If you have a ratio of 4.2 students per faculty member, as the University of Copenhagen does, you receive a 100% teaching score. If you fall to 10 students per faculty member – the situation at IIT Bombay – you get 46%. Drop to 13.4, like IIT Delhi, and you dip to 23%. If your student-to-faculty ratio is 17.7, as it is at IIT Roorkee, you fall under 12%. But do any of these IITs genuinely believe that IIT Bombay’s teaching performance is nearly four times better than IIT Roorkee’s simply because the former happens to have fewer students per faculty member?
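To make that comparison concrete, here is a back-of-the-envelope sketch using only the ratios and scores quoted above. QS does not publish the normalisation formula that converts a ratio into a score, so the pairs below are simply the reported figures, not a reconstruction of its method.

```python
# Student-to-faculty ratios and QS teaching scores quoted above.
# QS's actual normalisation is not published; these are just the reported pairs.
reported = {
    "University of Copenhagen": (4.2, 100.0),
    "IIT Bombay": (10.0, 46.0),
    "IIT Delhi": (13.4, 23.0),
    "IIT Roorkee": (17.7, 12.0),  # "under 12%" in the published tables
}

bombay_score = reported["IIT Bombay"][1]
roorkee_score = reported["IIT Roorkee"][1]

# The entire gap in "teaching performance" follows from the ratio alone.
print(f"IIT Bombay's teaching score is about {bombay_score / roorkee_score:.1f}x IIT Roorkee's,")
print("driven solely by the difference in students per faculty member.")
```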
Indeed, is even the number of academic staff that the institutes claim to employ accurate? IIT Roorkee reported 444 academic staff; IIT Bombay reported more than double that, at 996. Manipal University is rated the highest in India on teaching performance, above all the IITs and IISc, at 53%, solely because its student-to-faculty ratio happens to be 9.3. Does that make sense?
Incredibly, the ratio itself is likely a function of how the institutes reported – or misreported – their own data, because QS permits a much looser definition of who counts as academic staff. Apart from traditional faculty members, it counts tutors, research fellows, postdoctoral researchers and anyone else who teaches for at least three months in a year.
By contrast, in THE’s ranking methodology, the student-faculty ratio makes up only 4.5% of the total score and under one-sixth of the teaching quality measure (4.5 of the 30 points allotted to teaching), which includes other, more credible dimensions like the proportion of doctoral students, the number of PhDs awarded per faculty member and institutional income. These are all far better determinants of quality than the ratio of students to (loosely defined) faculty members. To its credit, THE restricts the definition of academic staff rigorously to full-time or full-time-equivalent teaching staff only.
But by far the most disagreeable QS practice is allowing academic institutions to seed the lists from which QS conducts its reputation surveys. That is, institutes are permitted to submit 300 academicians’ and 100 employers’ names to QS to conduct its academic and employer reputation surveys, which account for fully 50% of the ranking. Imagine that: You get to nominate who should rank your institution.
It’s a remarkable marketing ploy for QS, which receives hundreds of thousands of nominations from academic institutions as a result. In fact, it has built up a bank of almost two million names for its lists. Nominees remain eligible for three years, as do their votes in any given year, so an institution can rack up a potential bank of almost 1,200 votes.
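Spelled out, the arithmetic behind that figure – assuming nothing beyond the numbers cited above – looks like this:

```python
# Potential reputation-survey vote bank per institution, using the figures cited above.
academic_nominees_per_year = 300   # names an institution may submit for the academic survey
employer_nominees_per_year = 100   # names it may submit for the employer survey
years_eligible = 3                 # a nominee, and their vote, stays valid for three years

potential_votes = (academic_nominees_per_year + employer_nominees_per_year) * years_eligible
print(potential_votes)  # 1200 - the "almost 1,200 votes" referred to above
```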
But QS is just as opaque about the number of votes individual universities receive and about how those votes translate into reputation scores – scores that constitute 50% of an institution’s overall rating.
Internationally reputed universities like Harvard, Cambridge and the Massachusetts Institute of Technology no doubt receive thousands of nominations from the 95,000 nominators over the three years for which nominations are counted. But beyond the top 100 or so universities, reputation scores may be separated by as few as a dozen votes, and lower down the tail by as few as one or two. We will never really know, because QS is not transparent about the reputation votes that institutions receive.
What we do know is that relatively obscure universities show a spike in rankings when they undertake promotional campaigns to raise their profile in academic circles, including by sponsoring QS events. The institutions then nominate the attendees for inclusion in the QS voting roster and lobby them for their votes, both within QS-approved guidelines and outside them. These institutions are typically financially plush and marketing-savvy but academically flimsy, and they are stacking QS lists with tens of thousands of nominees who can skew results with just a handful of votes.
That is the type of “inputs from institutions about the respondents” that the seven IIT directors have now put themselves behind. They have also complained that THE’s rankings and the weights it assigns to various components are not objective and that it focuses on a very limited number of attributes. Absolutely – but this criticism applies equally to all ranking systems.
The painful truth is that university rankings, both global and national, are beauty pageants unworthy of academic institutions. Long ignored within the country, they have unfortunately become an obsession of the educational establishment, principally at the behest of policymakers. Each ranking tells us a little something selective, something different, and only about some segments of a university’s enterprise. None of them calls the full or best tune at the annual prom at which all universities are called upon to perform.
In 2013, a 300-page UNESCO report entitled ‘Rankings and Accountability in Higher Education: Uses and Misuses’ reviewed university rankings and cautioned:
“Obsessing about joining and climbing a league table or becoming ‘world-class’ ignores the greater role, purpose and mission of higher learning institutions” which are the “pursuits of educating and nurturing learners hungry for knowledge and skills; of contributing to the development of human and social capital; and of undertaking important research for sustainable futures.”
Instead of trying to camouflage their performance and tiptoe onto more favourable ground, the seven IITs should strive to educate themselves, policymakers and the people better on the methods and limitations of ranking systems. They should focus on what they ought to really be about: learning, human development and truth. Not a portion of the truth – all of it.
| QS World University Rankings | THE World University Rankings |
| --- | --- |
| Academic reputation (40%) – Based on a survey of educators, drawn in part from up to 300 nominees proposed by each institution as well as QS’s own lists | Teaching (30%) – Includes a reputation survey, the staff-to-student ratio, the proportion of doctoral students, doctorates awarded per academic staff and institutional income |
| Employer reputation (10%) – Based on a survey of employers, drawn largely from up to 100 nominees proposed by each institution as well as QS’s own lists | Research (30%) – Includes a reputation survey, research income and research productivity |
| Faculty-student ratio (20%) – Based on the student-to-faculty ratio, counting all professors, lecturers, tutors, research fellows, postdoctoral researchers, etc. | Citations (30%) – Citations, over six years, of papers published over a five-year period, normalised for subject areas and adjusted for countries |
| Citations per faculty member (20%) – Total citations in Elsevier’s Scopus database, over six years, of papers published by an institution over a five-year period, normalised for the number of faculty members and academic fields | International outlook (7.5%) – Includes the proportions of international staff and students and international research collaboration |
| International faculty ratio (5%) | Industry income (2.5%) – Adjusted for PPP, scaled for faculty size |
| International student ratio (5%) | |
Achal Mehra is a visiting professor of humanities at IIT Gandhinagar. The views expressed here are his own and don’t reflect those of IIT Gandhinagar.