There is a mind-boggling variety of ranking organisations – QS, THE, NIRF, NAAC, IoE, ARWU and seemingly every other conceivable acronym – compiling gigabytes of data to determine where universities stand, sit or crawl. The ranking parameters and criteria, including undergraduate teaching (especially in the Times Higher Education list), research (volume, citation index, income), employability, industry income (knowledge transfer) and internationalisation, have been endlessly debated. So what roles do these modalities of evaluation and classification play in shaping the way we have started to think about these institutions?
This is not a plea for a variant of jingoistic nationalism in thinking about higher education; rather, it is an evaluation of what we do, and are required to do, as we rush headlong into the halls of globalised ranking. Another question this shift into the rankings mode begs is: how best can we recalibrate ourselves to alleviate the tension between a quantitative ranking system and the social needs higher education has been designed for until now? Can the former enable the latter?
Utility and the higher-education hierarchy
Ranking introduces a set of notions about the utility of a higher education institution. One can think of this as introducing a tension between the ideal of educational values, as we have understood them – if not articulated them – for years, and market value. The former focuses on critical thinking, analytical abilities, social agendas and the inculcation of citizenship ideals that are unquantifiable and intangible because they manifest in our primary beneficiaries (or victims, depending on how we see higher education) only in the long term. The market-value scheme, which is industry-driven, orients the project of neoliberal higher education training towards developing particular skill-sets for the labour market.
This is not to say that there are segments of the population that want only to contemplate the absolute and live on love and fresh air, with no interest in jobs. But the skill-sets expected from a quality higher education programme, as it stands today, do not add up to a unidimensional product. This is changing with the neoliberal turn in higher education. The noted scholar Henry Giroux has this to say about the ‘attack’ on public institutions:
What we are witnessing is an attack on universities not because they are failing, but because they are public. This is not just an attack on political liberty but also an attack on dissent, critical education, and any public institution that might exercise a democratising influence on the nation. In this case the autonomy of institutions such as higher education, particularly public institutions are threatened as much by state politics as by corporate interests. How else to explain in neoliberal societies such as the U.S., U.K. and India the massive defunding of public institutions of higher education, the raising of tuition for students, and the closing of areas of study that do not translate immediately into profits for the corporate sector.
Ranking systems ensure that, globally, all universities seek to fit into a single model of the university, because all higher education institutions compete on more or less the same set of parameters, irrespective of where they are located and the local cultures and societies they were set up to serve. This eventually alienates the university from the immediate requirements of its locality, region and nation, as it strives to compete with very differently located (in terms of geography, demography and educational ecosystems) universities worldwide. If, for instance, a university set up to provide greater access to higher education for a particular region begins to shift its emphasis towards internationalisation and research (two key parameters in rankings), does it still serve its immediate populace through quality classroom teaching? Would this alienate our higher education institutions from our own ecosystems because we are trying to fit into a global one?
Research and teaching, or research versus teaching
Greater emphasis is laid on publications – and therefore a concomitant emphasis on research – but far less on teaching. Thus one of the most widely used ranking systems, the QS World University Rankings, has only one indicator connected to teaching: the faculty-student ratio, to which it assigns a 20% weightage. According to QS:
… teaching quality is typically cited by students as the metric of highest importance to them when comparing institutions using a ranking. It is notoriously difficult to measure, but we have determined that measuring teacher/student ratios is the most effective proxy metric for teaching quality.
A study published in March 2018 found the following:
A total of 24 ranking systems were identified and 13 eligible ranking systems were evaluated. Six of the 13 rankings are 100% focused on research performance. For those reporting weighting, 76% of the total ranks are attributed to research indicators, with 24% attributed to academic or teaching quality. Seven systems rely on reputation surveys and/or faculty and alumni awards.
The weightage given to research in most ranking mechanisms has resulted in what Pushkar brilliantly described in an article for The Wire as ‘pretend research’. This results in, and is in turn driven by, the massification of publication. As an ironic consequence, India is finally an academic capital… for predatory journals. Seeking to boost rankings, universities emphasise – and perhaps fund (we need exact data on how funding for research has changed since the quite literal ‘ranking business’ began) – research rather than teaching. Weird results have also been reported in the academic debate on rankings – such as attempts to inflate citations (20% weightage in QS) through the unethical practice of excessive self-citation.
Eventually, unless teaching becomes central to evaluative and ranking processes, the basic work of most universities in India – teaching – will collapse, if it hasn’t already. Teachers preparing for classes from Wikipedia (the source of choice for several colleagues in English is SparkNotes) is now a common feature, since student feedback on teaching quality is not factored into rankings or even into teacher evaluation. When ‘publish or perish’ becomes the motto, we could perhaps ask whether we publish perishable materials. (My senior colleague therefore asks that we distinguish between a ‘print-out’ and a ‘chapter’ in what faculty members write.)
Competition, standards and standardisation
An enhanced spirit of competition enters the system. Ranking introduces an element of competition between institutions, and parameters such as internationalisation imply that universities will have to compete for these resources. For example, since internationalisation – a key parameter – means attracting foreign students, it entails developing programmes that will attract those students, ironically in a context where most higher education institutions have refused to upgrade their syllabi or pedagogies for Indian students. My own institution has sought, at least in principle, to strike a balance between the race for global ranking and our immediate mandate – good-quality higher education for India – by thinking in terms of ‘national needs and global standards’. Nothing stops an institution from boosting its quality of teaching and research such that it impacts positively on our students’ futures.
Standards need not come from, or result in, standardisation. To adopt world-class standards within any domain of knowledge does not necessarily entail fitting into a global ranking mechanism. Updating and upgrading teaching materials, pedagogy and testing mechanisms – even research within the funding possibilities on offer – can still be world class. The humanities and social sciences, perpetually on the defensive in all evaluative mechanisms, are surely not quantifiable by the same indices (impact factor, h-index, etc.), but that does not mean we cannot publish in the world’s top-ranked journals.
Indeed, Indian faculty members, admittedly few in number, have done so in the past and continue to do so. They aspire to global standards but do not homogenise or standardise their work. To seek exclusion from rankings and global standards is simply to seek a state of (postcolonial) exception – although no one would refuse global funding for conference travel, fellowships or collaboration opportunities. In the latter cases, we don’t hear excuses that ‘we are different and need to be evaluated differently’, do we? What we have to do is ask whether standards equal standardisation, given the mandates of different universities across the country; but at no point is it wise to abandon all discussion of standards.
With ranking tied to the ‘graded autonomy’ the Indian state is now proposing for select institutions, new parameters come into play. The relative freedom the latter provides, at least in theory, can (or must?) be suitably leveraged to generate resources that will then subsidise a higher education institution’s social agenda and programmes. Cross-subsidy is an established mode of operation. For example, global publishers make enough money from their dictionaries and school textbooks to fund their higher education publishing, which has far lower sales.
One way to see these two – ranking and autonomy – working together is this: we raise standards to global levels to attract high-paying international students, whose fees in turn subsidise the ‘regular’ programmes of an institution – programmes running aground for lack of state-provided funds (to take one example). A two-tier system, therefore, seems inevitable in the current context.
The prestige economy
The higher education institution is no longer part of a utilitarian economy. Rather, rankings have ensured that we move into a ‘prestige economy’ model. Controversial ranking methods such as those employed by the IoE scheme are primarily, if not entirely, about this economy, in which what matters most is attracting better faculty and students and facilitating more funding and collaborations. The brand, as James Twitchell has argued, tells a story, and the successful brand is one that tells its story successfully. As opposed to a signature, a brand is iterable, repeating across forms and media; its instantaneous recognisability is what constitutes it. If the ‘regime of value’ (John Frow’s apposite phrase) is one defined by ranking and global participation in it, then the task set out for us is to participate in order to leverage rankings for our needs.
An insertion into the prestige economy enables an indigenous institution to punctuate the global flow of cultural and symbolic capital. When high-ranked, prestigious institutions in India, for instance, attract financial and academic inflows, they feed directly into the global knowledge economy. The geography of prestige, long located in first-world institutions, has been altered as universities from Singapore, China and Hong Kong climb the rankings. This does not mean national knowledge or cultural production is being devalued in favour of the global; rather, it can be seen as a postcolonial moment in which the formerly colonised interrupt global cultural hierarchies by appearing on these ranking lists.
It is the prestige economy that alters the demographics of incoming students, faculty and funding. To be associated with a high-ranked institution translates, in most cases, into improved employer perception for students and greater visibility, collaboration, funding and travel for faculty, among other benefits. Ranking, then, cannot be dismissed as a mere number. Over time, it can bring benefits for stakeholders as well. Elsewhere, Ellen Hazelkorn lists these benefits:
For students, they indicate the potential monetary or private benefits that university attainment might provide vis-à-vis future occupation and salary premium; for employers, they signal what can be expected from the graduates of a particular HEI; for government and policymakers they can suggest the level of quality and international standards, and their impact on national economic capacity and capability; and for HEIs they provide a means to benchmark their own performance. For the public, rankings provide valuable information about the performance and productivity of HEIs in a simple and easily understood way.
Both rankings and the newly proposed autonomy systems reorient higher education into accountability regimes that are ruthless, unrelenting and multilayered, as a study published in 2017 by the Centre for Global Higher Education at the UCL Institute of Education, London, made clear. As the world clamours for greater transparency, accountability and return on investment, the public institution – long never held to account, although its faculty has always demanded accountability from everybody else – faces the frightening prospect of having to fit into this new economic and accountability regime.
Unfortunately, this shift comes at a time when funding has decreased. A public university is, in the last instance, accountable to its public – but the fact that it is the state that creates these mechanisms of accountability is what generates anxiety in these institutions. At the same time, who else would do it? Accountability regimes, like the prestige economy, are here to stay – as are rankings.
Pramod K. Nayar teaches at the University of Hyderabad.