
The UGC’s Idea of Measuring Faculty Productivity Isn’t Nuanced Enough

In a system that awards specific points to some research activities, it is not the activity itself but the value it adds to the discourse that should make it worthy of recognition.

A classroom in Durgapur. Credit: shankaronline/Flickr, CC BY 2.0


There is no generally accepted definition of faculty productivity. Defining it as the number of classes or courses taught, the number of credit hours generated or the number of students taught is really defining teaching workload, which some equate with faculty productivity. Katrina A. Meyer, affiliated with the University of Memphis, Tennessee, argued in 1998 that “workload traditionally captures how time is spent, while productivity is a measure of what is produced with that time.” Though positions on an accepted standard for measuring faculty productivity in higher education remain contested, the prevalent method is quantitative: productivity is the ratio of output to input, with output being the number of units produced. Output is measured by graduation rates, the number of papers published, PhD degrees awarded and research projects completed. Inputs are usually the cost of university education over a given period.
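To make the quantitative view concrete, the ratio described above can be sketched as follows; the output categories are the ones listed in this paragraph, while their combination into a single figure is an illustration rather than any prescribed formula:

productivity = output / input = (graduates + papers published + PhDs awarded + projects completed) / (cost of university education over the period)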

For some scholars, one or two features suffice to measure faculty productivity. For Robert Blackburn, of the University of Michigan, Ann Arbor, a faculty member’s level of motivation to teach, undertake research and perform service was a key facet. Blackburn, however, did not devise a method to calculate the level of motivation. For some others, the supervision of doctoral students determined faculty productivity, and another set of academicians contended that the number of publications reflected faculty productivity effectively.

Another group, closer to the contemporary understanding of faculty productivity measurement, lists broader criteria on the basis of which a faculty member can be assessed. A series of individual and institutional attributes is ascertained to evaluate productivity, along with a ratio of outputs to inputs, or benefits to costs. For Mary Frank Fox, of the Georgia Institute of Technology, a range of factors was integral to calculating productivity: age and gender; rank, years in higher education, quality of graduate training, hours spent on research each week and extramural funds received; and institutional characteristics.

However, the most significant contribution to this discourse has come from James Fairweather and Andrea Beach, of Western Michigan University, who point to the futility of portraying an average research university because of the variance across disciplines. Given such multi-faceted approaches that examine different expectations, tasks and cultures, scholars recommend that a broader evaluation standard be adopted.

In India, the University Grants Commission (UGC), an organ of the Ministry of Human Resource Development, sought to usher in institutional reform. The underlying objective was to regulate the promotion of incumbent teachers as well as the recruitment of new teachers based on academic performance indicators (API) under the Performance Based Appraisal System (PBAS). Introduced in 2010 and subsequently amended in 2013, the system compartmentalises academic output into three categories for calculating the API. According to the gazette notification, the first category focuses on teaching, learning and evaluation-related activities; the second includes academic administration and co-curricular activities; and the third covers research output. These categories facilitate the recruitment and promotion of faculty under the Career Advancement Scheme (CAS). In addition, there is a structured point-based system that, in a sense, ranks faculty members to ensure systemised promotions, as the sketch below illustrates.
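Here is a minimal sketch, in Python, of how a category-wise, point-based appraisal of this kind tallies a score. Every activity name and point value below is hypothetical, chosen only to mirror the three-category structure; none of it is drawn from the UGC’s actual rubric.

```python
# Hypothetical, illustrative point table: three categories mirroring
# the API's structure (teaching, administration, research). None of
# these values are the UGC's actual figures.
CATEGORY_POINTS = {
    "I: teaching-learning and evaluation": {"lecture_hour": 1},
    "II: administration and co-curricular": {"exam_duty": 5},
    "III: research output": {"journal_article": 15, "phd_awarded": 10},
}

def api_score(activities: dict) -> int:
    """Tally a score: each reported activity count multiplied by its
    point value, summed across the three categories."""
    total = 0
    for points in CATEGORY_POINTS.values():
        for activity, value in points.items():
            total += value * activities.get(activity, 0)
    return total

# A year's (hypothetical) activities: 90 lecture hours, two papers,
# one PhD supervised to completion.
print(api_score({"lecture_hour": 90, "journal_article": 2, "phd_awarded": 1}))
# -> 90*1 + 2*15 + 1*10 = 130
```

The design choice worth noting is that every activity must fit a predefined slot: anything outside the table earns nothing, which is the nub of the criticism that follows.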

Indian academicians have critiqued the PBAS as an overbearing evaluation scheme that interferes with the autonomy an academician ought to enjoy and has the potential to discourage long-term engagement in fundamental and path-breaking research. With the advent of the API, the UGC sought to streamline and standardise the myriad objectives and missions of different universities. Critics have also noted that although API-PBAS factors in differences between academic disciplines, it is inadequate because it does not capture the asymmetry in publication opportunities across fields.

Moreover, the PBAS fails to take cognisance of research and academic pursuits undertaken by faculty members that may not fit its categories but still contribute substantially to the existing discourse. The broader discontent with the PBAS has been that it assumes a linear relationship between time spent and research output. Deeming, on a subjective basis, that only the quantified activities generate knowledge is simplistic and demeaning to other original, critical and potentially valuable activities that further the existing academic discourse.

“The introduction of academic performance indicators (API) by the University Grants Commission (UGC), lack of clarity in identifying and evaluating journals, the focus on ‘quantity’ over ‘quality’, unhealthy competition between peers, and overall, a favourable non-scientific publishing environment have led Indian researchers to publish in mediocre journals wherein most manuscripts are published without any peer review. Perhaps it is also the fear of peer review that has nourished predatory journals, making India one of the world’s largest base for predatory open-access publishing,” notes a September 2014 editorial in the journal Current Science.

According to Jeffrey Beall, a librarian at the University of Colorado, Denver, the number of predatory publishers rose from 18 in 2011 to nearly 700 in 2015, and the number of standalone fake journals shot up from 126 in 2013 to 507 in 2015.

Assessing a scholar almost solely on the basis of her publications in prestigious journals is arguably flawed. These days, many faculty members choose to shape public debates and policies not through such publications but through alternative media of communication. Access to these journals is prohibitively expensive for practitioners, and their sheer volume and incomprehensible jargon further prevent lay persons from reading them. While these prestigious journals identify themselves as exclusive, they do little to further debates or resolve problems.

This is not to say that we do not require purely theoretical discussions, for which popular media might not offer a platform. But it is imperative that measuring standards account for a scholar’s presence in popular media. Access to televised debates and brief articles in newspapers has become simple in this era of information technology. Scholars increasingly prefer these media to express their views, and in recent times interaction between reputed experts and lay individuals has increased tremendously.

In response to another set of criticisms, it is important to bring activities that faculty members undertake beyond the classroom, such as course design, class preparation, devising assessments, and conducting and assessing examinations, within the scope of measurement. These activities are inherent to a faculty member’s professional requirements. In a sense, they also demonstrate the faculty member’s motivation towards the area. Therefore, one way to respond to scholars who advocate a more holistic measuring mechanism involving the evaluation of qualitative characteristics is to calculate the time spent on these activities.

Creative endeavours undertaken by faculty members, critical engagement with subject material and other such diverse pursuits cannot be easily quantified. However, in a system that awards specific points to some research activities, it is not the activity itself but the value it adds to the discourse that should make it worthy of recognition. Similarly, measuring systems have to account for, say, the impact on the discourse when faculty members invite reputed scholars in a particular field of study to engage with students in a seminar.

The argument that such tasks are hard to quantify can, on one hand, be accepted. But if a system is introduced specifically to measure faculty productivity, it must be fine-tuned to the extent that it acknowledges the varied nuances that flow from diverse faculty research interests.

Arjun Joshi is a final year B.A. LL.B. student of Jindal Global Law School. He would like to thank Prof. Anamika Srivastava for her support.