University Rankings and How They Affect Academic Responsibility in the Globalised Era

University rankings not only hold higher educational institutes accountable for how they utilise funding and academic autonomy, but also empower prospective students to make informed choices and thus, force HEIs to be more competitive.


Debates about academic accountability – especially in the rankings season – mushroom in both the global North and South. Academic accountability is, like public accountability, integral to the functioning, policy-making and public perception of higher educational institutions (HEIs). Given the comfortable service conditions in most HEIs, it is only natural that a crisis of accountability arises when the public and regulatory bodies want to know what exactly an institution is up to, especially when it slips in the rankings.

Undoubtedly, rankings (which originated as survey-based methodological studies of HEIs in the late 1950s and early 1960s) are a spectre that haunts HEIs. As indices of performance, rankings have been criticised for their numerous biases (towards the sciences, for instance) and for driving HEIs into a spiral of competition – Chinese universities seeking to hire Nobel laureates to bolster their rankings being a case in point. Yet they are embraced, reluctantly, as a form of peer review.

But there are other sides to the rankings debate as well.


Rankings as “global governance”

“Rankings and ratings have come to play an important role in assessing the provision of state services and the performance of elected officials, local governments, bureaucrats and governing institutions,” writes Alexander Cooley in his introduction to the 2015 volume Ranking the World: Grading States as a Tool of Global Governance.

Whether it is the index for corruption, academic freedom, the ease of doing business, transparency or any of the numerous United Nations indices (in 2013, according to one study cited by Cooley in his book, there were 95 such rankings), rankings have become a policy tool for international bodies. Note, for instance, indices on climate change and desertification, among others.

Normative criteria for conditions like “corrupt” states are the consequence of these ranking mechanisms, which then drive policy. Even when statistics are notoriously unreliable – for example, in totalitarian states or in states reluctant to document key parameters of poverty, gender equality or employment – international organisations’ rankings become crucial factors in how the world perceives that state and its people.

Complex structures like the social order and cultural practices – whether the “baksheesh” that shocked the colonising Europeans or the ambiguous role of “influence” – are not easily reducible to quantifiable data.

Standardisation, classification and regimentation are administrative practices that seek to reduce complex practices to manageable numbers, maps and tables, as political scientist-anthropologist James Scott argued in his 1998 book, Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed.

University rankings are a subset of a new form of global governance. Rankings, despite their best attempts to “localise”, often work as public accounting systems that reduce complex phenomena (such as the vision of an HEI embedded in the needs of its locality) to a larger quantification system.

Ranking and consumer culture: what do students want?

Commentators have argued that students are becoming increasingly like any other consumer group; they wish to know exactly what they can expect for the fees they pay and the time they spend in an HEI. While debates rage about the pros and cons of ranking, no country can any longer afford to ignore the system – which is precisely why countries have either accepted global ranking metrics like the Quacquarelli Symonds (QS) or Times Higher Education (THE) rankings or developed their own.

While HEIs do advertise – some very aggressively – their “products”, the assessment of the skills imparted and developed in their “clients” (the students) is not always easily available as data. 

Ellen Hazelkorn, director and dean of the Faculty of Applied Arts, Dublin Institute of Technology, notes that rankings began as a consumer information tool. They were meant to provide potential students and parents with the data necessary to compare institutions (which is not easily obtainable from universities themselves) and make a choice. 

A largely unanswered query from students is: which HEI is best suited for my requirements? If we shift the debate away from “the best university/college” to “the university/college which best fits my requirements to study X or Y discipline,” we have moved away from an abstract rating of “good places to study” to “good places for me to study X”. 

This tempers the branding model of ranking large institutions by offering a narrow-spectrum view of discipline-wise choices for the consumer-student. Subject rankings have gained ground for precisely this reason. 

While this may seem anathema to old-school views that do not wish to equate education with other consumables, highly respected commentators such as Philip Altbach (founding director of the Centre for International Higher Education at Boston College), Jamil Salmi (tertiary education coordinator at the World Bank), Hazelkorn and others have argued that rankings serve a useful role because they highlight the key aspects of an HEI’s academic achievement, and should thus be seen as empowering students.


Like any consumer company and brand, HEIs are now forced to prove their worth by competing with one another, and ranking is a manifestation of this competitiveness.

For the government and regulators, the rankings provide an index of how (a) funding and (b) academic autonomy have been utilised by the HEI – with disastrous consequences, including naming-and-shaming, for those HEIs that underperform despite (a) and (b). These bodies wish to know which of the universities they fund contribute, in real terms, to innovation, the amelioration of social conditions and economic growth.

Indeed, if we extend the consumer rights model, publishing rankings could even be accompanied by publishing the output of all faculty in an HEI, so that the public knows exactly what any single faculty member has done (or not done) in a year. The taxpayer can then assess what professors with large pay packets and extended vacations contribute to the intellectual climate of the nation by way of publications, policy, applied research, skill development and the like.

As for students, a detailed account of what they expect from HEIs has been developed by the Organisation for Economic Co-operation and Development (OECD) and may be found in Volume 3 of its massive Assessment of Higher Education Learning Outcomes (AHELO) Feasibility Report of 2013 (Annexure F, page 100).

This AHELO report includes responses such as “hold professors accountable”, “internationally competitive education”, “benchmarks with other students/institutions”, “I want to know how competitive I am on a global scale”, “help/validate the worth of my degree”, etc.  

Interestingly, alert to faculty disputes (the “contest of the faculties”), one fascinating student response in the document reads: “Constructive competition between faculties”. One assumes this translates as: “if Professor X has published in a journal with an impact factor of 2, Professor Y will seek to publish in a journal with an impact factor of at least 2.25”, instead of the claim that “my essay, with numerous errors of fact, language and argument, published by paying a lot of money to a predatory journal, is as good as yours, which went through three rounds of revision and review to finally appear in a journal indexed in the subject’s definitive indices.”

One notes that the AHELO report, while proposing new, broader parameters, does not shy away from acknowledging the inevitability of competition (for funds, quality and outcomes) in student evaluations of faculty, institutions and policies.

Ranking and self-assessment

Dissatisfied with the pressure brought on by the rankings system, the American Association of State Colleges and Universities (AASCU) and the Association of Public and Land-grant Universities (APLU) initiated a Voluntary System of Accountability (VSA). The VSA was also a result of pressure on HEIs to assess student outcomes.

The VSA generates data on admissions/enrolment, financial support to students, completion rates (degrees awarded) and discipline-wise statistics for an HEI to assess its performance. When (or if) implemented rigorously and honestly, the VSA functions as a key measure of accountability – an academic audit of the HEI’s policies, pedagogies and power.

Self-assessment exercises, as we know, are subject to internal pressures to modify the parameters to suit specific individuals, disciplines and processes. The necessity of external monitoring and intervention stemmed from this recognition of opaque self-assessment exercises (which amount to saying, “If I don’t figure in your ranking, I shall design my own and rank myself high”).

This runs into a double bind because it refuses comparative scrutiny, leaves the consumer unable to assess the qualities, outcomes and privileges of different institutions, and caters to an extremely narrow, narcissistic mode of self-assessment.

In AHELO, the OECD also proposes a self-assessment model that pays heed to the more basic concerns of an HEI, focusing on additional layers of skill and outcome assessment.

Starting with generic skills, it moves on to a more intensive focus on discipline-specific skills. It is also more attuned to contexts such as total enrolment, the male-female ratio, and educational practices and quality – including student-faculty interaction and the level of academic rigour – as well as “psycho-social and cultural attributes”. These last include society’s expectations of HEIs and students’ career expectations.

Finally, there is the “value-added component” of higher education. Ben Wildavsky describes AHELO’s emphasis on value-addition as follows:

“When a top student enters a university and exits with similar levels of accomplishment, how much has the institution really done with the ‘raw material’ that walked through its doors? By contrast, when a student enters with a B average and leaves campus with an A average, a case can be made that the university has performed a more valuable pedagogical role.”

The focus is clearly on transformative, on-the-ground skill development beyond the immediate goal of imparting generic knowledge. HEIs need to ask this very question: are we imparting generic, disciplinary or value-added knowledge?

From ranked HEIs to flagship HEIs

The World Class Universities (WCU) tag sits heavily on those who have acquired it and seems an improbable goal to reach for those who haven’t. Linked to internationalisation and its less polite, cruder version, globalisation, the WCU has reigned supreme as an aspirational model for many since at least the late 1980s.


Perhaps as a response to the criticism that such WCUs often renege on local and national concerns and requirements, commentators have proposed a new model: Flagship Universities. 

John Aubrey Douglass, in The New Flagship University, calls for greater attention to national requirements and needs. He writes:

“The network of universities that are truly leaders in their national systems of higher education is to more overtly shape and articulate their own missions, build their internal processes aimed toward excellence in all of their endeavours and, ultimately, to meaningfully increase their role in the societies that gave them life and purpose.”

He identifies four characteristics of such Flagship Universities:

“…the expanse of programs and activities related to their “core” mission of teaching and learning and research [mission differentiation, expansion of programs]; old and new notions of public service and approaches to regional and national economic development; and governance, management, and internally derived accountability practices that form a foundation for the New Flagship model.”

Douglass does not quite see how the competition for bettering one’s quality through a comparison with the best in the field could also translate into better services for the students entering the HEI. This is where even the Flagship model needs some reworking, for which some clues may be found in the National Education Policy (NEP).


The NEP, in its very second line, notes that “quality education” would enable us to occupy the “global stage”, and cites the United Nations’ – not local – Sustainable Development Goals (SDGs) as a point of departure to “reconfigure” the education system to enable us to meet these goals.

Later, the NEP also calls for attention to “local and global needs of the country”. It seeks the making of a “truly global citizen” through “Global Citizenship Education”. At one point, in its section on school education, it calls for the use of “global best practices for standard setting”. 

For multidisciplinary education and research universities (MERU), the NEP calls for “the highest global standards”. In its recognition of the need to give a global competitive edge to the students, the policy (which is not without its problems and critiques) offers us a way of thinking about Flagship Universities as well. 

Elsewhere, for Institutions of Eminence, the government has mandated that such HEIs “shall maintain a record of research publications at the mean rate of at least one per faculty member each year in reputed peer-reviewed international journals, based on publication made by [the] top 100 global universities in these journals.” This clearly indicates that benchmarking has to be done not with peers within the locality, region or nation, but with the global best.

Flagship universities that merely seek to look inwards for goals and processes are, in all likelihood, working solely with local pressures and outcomes and do not provide their students with a competitive edge in the global playing field. To look beyond one’s borders augurs well because comparative scrutiny can enable modifications that are essential for the stakeholders to find employment, funding and opportunities in any part of the world. 

A basic fact that should be a part of the debate here is that disciplines – except for local language ones, perhaps – are global in nature. Theory, experimental data, literature on the subject are all transnational in origins, deployment, critique and knowledge-production. Even those who wish to stay local in these disciplines fall back on theories that emanate from the global market of ideas, even as they argue against “theory” as a “Western monster”. Those who resist theory, as John Kenneth Galbraith once said, are simply in the grips of an older theory. 

Thus, to deny the student teaching, research and quality inputs which are competitively arrived at is to deny them a world. 

Flagship Universities, if they develop the four characteristics in comparison with the world’s best, are undoubtedly going to serve the national needs better. In other words, since no nation is, metaphorically speaking, an island, no student should be limited to an island existence.

Pramod K. Nayar teaches at the University of Hyderabad.