Government's Ranking Framework for Higher Education Is Biased Towards Larger Institutions

There is a critical need to revise the methodology of the National Institutional Ranking Framework, which currently ranks better-funded higher education institutions above high-performing ones simply because the latter are small.


Institutes of higher education in India – specifically colleges and universities – have recently been under scrutiny for their quality of teaching, research and policy impacts. The first government effort at quality assessment in the education sector at the national level was initiated with the creation of the National Assessment and Accreditation Council (NAAC – for colleges and universities) and the National Board of Accreditation (NBA – for technical and professional institutions) in 1994.

The NAAC currently uses seven criteria to evaluate and grade higher education institutions. While there have been some non-government efforts, such as India Today's and The Week's rankings of colleges and universities, none has enjoyed the near-instant prominence of the Ministry of Human Resource Development (MHRD) initiative, the National Institutional Ranking Framework (NIRF), started in 2016.

The reasons for NIRF's wide acceptance are two-fold. First, it is run directly by the MHRD, so public higher education institutions can ill afford not to participate, as their financial grants may get linked to the exercise; for private institutions, it is a good occasion to showcase themselves and attract the students they need to remain financially viable. Second, it is the most transparent ranking of higher education institutions currently being undertaken in India. The NIRF uses a five-criteria mechanism as opposed to NAAC's seven, but the two share a fair amount of common ground.

The NAAC's methodological problem is that evaluation is not uniform: even though the parameters are uniform, results have been unpredictable because a different visiting team assesses each institution, leading to much subjectivity and criticism. Media reports suggest that some rethinking is under way at NAAC and that its approach may be overhauled in the near future. With the advent of the NIRF, both NAAC and NBA may have to reinvent themselves or find themselves relegated to history.

The NIRF 2017 has made it mandatory for all participating institutions to upload the data (submitted to the NIRF) on their websites, besides making the same available on NIRF's website in summary form. This transparency has added to NIRF's credibility. Most of the information is numerical, which allows easy computation once a methodology has been chosen. The 2017 exercise also shows an improvement in methods over 2016 – a good sign that there is a learning process in the system.

NIRF 2017, however, still suffers from a methodological bias that results in larger higher education institutions occupying more positions in the top 100. In this group, the average faculty size is 606, with a minimum of 38 (Jawaharlal Nehru Centre for Advanced Scientific Research, rank four) and a maximum of 2,893 (S.R.M Institute of Science and Technology, rank 34). A simple plot of NIRF ranks (for the top 100 institutions) against annual institutional expenditure (log value) shows a clear downward trend: better-funded institutions have better ranks. In statistical terms, there is a significant correlation (0.5) between an institution's rank and its annual expenditure.

Plot of NIRF rank with respect to their total annual expenditure (log value)
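The rank–expenditure relationship described above can be checked with a few lines of code. The sketch below uses made-up figures purely for illustration – the actual NIRF 2017 data are published on the participating institutions' websites – and computes the Pearson correlation between rank and log expenditure:

```python
import math

# Illustrative (made-up) sample: (NIRF rank, annual expenditure).
# These are NOT the actual NIRF 2017 figures.
data = [(1, 500), (10, 320), (25, 150), (40, 90), (60, 60), (90, 25)]

ranks = [r for r, _ in data]
log_exp = [math.log(e) for _, e in data]

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson(ranks, log_exp)
# Negative here: a better (numerically lower) rank goes with higher spending,
# which is the downward trend the plot shows.
print(round(r, 2))
```

Note that the sign convention matters: because rank 1 is "best", the trend the article describes appears as a negative correlation between rank number and log expenditure.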

One of the five criteria used by NIRF 2017 for rank calculation is "perception" by peers, with a weight of 10%. It is possible that the size of the institution influences ranking outcomes through perception, apart from through the other four criteria used by NIRF. A bigger institution has more students and faculty, who generate more aggregate research. Therefore, simply by force of numbers, larger institutions are more "visible" – but not necessarily more efficient academic performers.

If this flaw in the methodology continues, small higher education institutions will have very little chance of being in the top 100 – not because they are laggards but because of their size. This is evidenced by the fact that only eight institutions with fewer than 150 faculty members, and nine with fewer than 300 students, feature in the top 100 NIRF 2017 ranks.

If we use a per capita measure of a verifiable academic output – say, research – the rankings change quite significantly. One could argue that research is not the only thing that should be considered in academic review. However, it is the only third-party-sourced data we could rely on for ranking, everything else (except perception) being self-reported by the institutions.

The average number of publications per teacher (the publication rate) is a widely used measure of productivity. If it were used as the ranking measure, smaller higher education institutions would stand a fair chance of being placed correctly relative to large ones, as evidenced by the downward-sloping curve of rank difference mapped against faculty size (see Graph 2). Among the top ten institutions, JNCASR would displace the Indian Institute of Science from number one, and JNU would drop to rank 34. Only six institutions in the top 30 (by publication-rate rank) would have a faculty size above the average of 606.
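The mechanics of the re-ranking can be sketched in a few lines. The institutions, ranks and publication counts below are hypothetical, chosen only to show how a per-teacher measure moves a small institution up and leaves a large one unchanged:

```python
# Hypothetical example: name -> (faculty size, NIRF-style rank, total publications).
# These are NOT real NIRF figures.
institutions = {
    "Small Institute A": (40, 30, 200),
    "Mid University B": (300, 12, 900),
    "Large University C": (1200, 3, 2400),
}

# Publications per teacher: the per capita measure proposed in the text.
rates = {name: pubs / faculty
         for name, (faculty, _, pubs) in institutions.items()}

# Re-rank by publication rate (higher rate = better, i.e. lower, rank).
rate_rank = {name: i + 1
             for i, name in enumerate(sorted(rates, key=rates.get, reverse=True))}

for name, (faculty, nirf_rank, _) in institutions.items():
    # Positive difference = the institution gains places under the per capita measure.
    diff = nirf_rank - rate_rank[name]
    print(f"{name}: faculty {faculty}, rank change {diff:+d}")
```

In this toy data the small institution's rate (5 papers per teacher) beats the large institution's (2 per teacher), so it jumps from rank 30 to rank 1 – the pattern the rank-difference graph below captures for the real data.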

Difference in rank plotted with the size of the faculty (on the horizontal axis)

The graph plots observations with faculty size on the horizontal axis and the difference in rank (between the NIRF 2017 rank and the rank obtained from the average publication rate per teacher) on the vertical axis. Institutions above the horizontal line are gainers in rank. The blue vertical line separates the higher education institutions below the average faculty size of 606 from their larger counterparts. The downward-sloping fitted curve suggests the rank improvement that smaller institutions would make if NIRF used per capita, verifiable performance rather than the current methodology.

If the NIRF ranking methodology is used to determine the funding of higher education institutions in future, it will lead to an inefficient and inequitable distribution of resources. It will make richer institutions richer and poorer ones poorer, and in the process more productive institutions will lose out to inefficient ones. This calls for a critical evaluation of the NIRF methodology, and for a revision that removes the current bias, which ranks better-funded institutions high and penalises high-performing ones just because they are small.

P. Mukhopadhyay, P.K. Sudarsan and M.P. Tapaswi are with Goa University.