There is a lot of talk about academic freedom. The index that measures it – oddly, those ardently opposed to any index or metric cite this one merrily – is believed to demonstrate the incremental loss of academic freedom. Faculty, students and researchers are up in arms against states trying to determine what may be taught, researched and published. Reports of faculty being harassed for holding views deemed unacceptable by the ruling powers are distressing at the very least, and drive the quest for academic freedom: the freedom to think, teach and write.
All of this is, of course, necessary, especially if we were to look at its obverse: academic accountability. Because with freedom comes accountability, and those – such as academics – who holler themselves hoarse seeking accountability from everyone and anyone cannot refuse to have a similar accountability requirement placed upon them. In the words of Lawrence Martin, anthropologist and owner of Academic Analytics,
“The research mission of universities should not be exempt from assessment and accountability because it is both expensive and vital to national competitiveness.”
Accountability is the obligation to report how resources have been spent, and to what end, without their misuse in predatory or illicit ways. Many accountability regimes are determined by national contexts: a single national quality assessment mode in the Netherlands, for instance, as opposed to the greater variety of procedures in the US, as Jeroen Huisman and Jan Currie noted in an early essay in Higher Education.
Roger King, writing about accountability in higher education institutions (HEIs), while admitting that we all now reside in an ‘evaluative state’, identifies various types of accountability principles and processes, involving the market, the state and professional self-governing forms. If we elaborate these, we can layer the accountability levels as follows: accountability to the state (which includes the funding and regulatory bodies), the peer group (professional colleagues and the academic profession at large), the public (which consists of the taxpayers from whom funding for HEIs originates, as well as the students) and finally the market or industry. Accountability may be financial, social or academic, and frequently all three.
The state seeks accountability in terms of compliance with its rules and even its norms – and this is the most commonly employed mode of ensuring accountability. But there are other ways, too, through which accountability is translated into a mode of action. As studies show, the most common mode of assessing accountability is performance measurement.
Performance measurement as accountability
In the evaluative state, an assessment of performance is measured across parameters as diverse as inclusivity and diversity in enrolment, publications, funds generated, employment and collaborations. Harvard scholar Robert Behn, writing in a provocatively titled piece, ‘Why Measure Performance?’, cited Joseph Wholey and Kathryn Newcomer:
the current focus on performance measurement at all levels of government and in nonprofit organizations reflects citizen demands for evidence of program effectiveness that have been made around the world.
Program effectiveness and control of the program via funding and hiring policies are demands not of the state alone but of the public, from which (theoretically) the authority of the state emanates.
Performance measurement is managerial, and standards of measurement depend on what the management is looking for (Behn lists the following purposes: evaluate, control, budget, motivate, promote, celebrate, learn, and improve). Thus, before measuring the performance of any organisation, the manager needs to know what the organisation is supposed to achieve. That is, unless the state and its regulatory bodies determine a clear mission, vision and strategies directed at a set of goals, measuring performance towards those goals is impossible.
For HEIs, then, should the focus be on inclusive and mass education, foundational research, application-driven research, teaching, or industry-oriented training? Should we direct our energies towards specific domains of study, employability or critical thinking? Would the abstract “pursuit of knowledge” be a better option than the pursuit of manufacturing a product? In which areas should performance be optimum, high or negligible? Which parameters of performance should be rewarded, and which overlooked? Which would be an index of “good performance”: the production of a large number of degrees or the production of high-end research? (And are these mutually exclusive?)
Until such time as we have determined areas of focus and areas of interest – and they are not interchangeable – setting up performance indicators would be untenable. When policy documents issued by nations worldwide try to balance different goals, the measurement of performance in the pursuit of those goals becomes difficult, even messy. Then again, the funding agencies and the state periodically revise their agenda for HEIs, as commentators have shown, and this makes setting long-term goals a dicey proposition because, in a few years, the state will insist on performance in another set of domains!
To convince the state and its tax-paying public that the HEI is doing a good job requires the codification of performance, notes Behn, and hence the immersion in metrics and standards set by different ranking or rating agencies. With an overwhelming emphasis on performance accountability, when any funding or privileges are bestowed by the regulatory bodies or the state, they come with special tags. Roger King puts it this way:
In return for public funding and delegated freedoms, in some jurisdictions universities are required to promote social and ‘fair’ access in their admissions. More broadly, they are accountable and funded according to performance outputs and other evaluations.
King makes a valid point: performance outputs do determine funding everywhere, and so it should be. Rankings make this performance visible, and embarrass or annoy those who do not figure in them, leading to protests about the ‘black box’ of parameters, the neglect of other parameters, etc.
Peer and public accountability
More iffy is the accountability to the peer group and the regulatory systems of the HEI itself. It is often argued that, rather than state regulation, the profession must regulate itself, proscribing fraud, negligence and malpractices. That is, it is the HEI and the profession that need to prescribe norms of acceptable behaviour.
How accountable is a researcher, teacher or faculty member to his or her profession and its requirements? To this question no one – not even the best commentators, or organizations such as the American Association of University Professors (AAUP) – has clear answers. But what they all agree on is this: no faculty member or researcher is a totally free agent, free to slander, misrepresent, bully like a demagogue, or cheat the stakeholders or the profession itself. As the AAUP’s oft-cited statement puts it, ‘scholars and educational officers … should remember that the public may judge their profession and their institution by their utterances’.
This means that any HEI, regulatory body, member of the profession or member of the public can at any point demand accountability for the funding received as salary, project grants and other incentives by any faculty member or researcher. As the AAUP and other commentators have repeatedly endorsed, no faculty member has the right to (a) deny scrutiny of his or her work in the name of academic freedom, (b) make claims on behalf of the institution unless authorised, (c) seek to undermine institutional regulations, or (d) coax students and listeners, in unacceptable ways, into a particular way of thinking. Why, in other words, would you be afraid to have your work – whether classroom teaching, publications or assessment modes – scrutinised and assessed for fraud, plagiarism and quality? (In the COVID era, unlike the face-to-face classroom environment, the online mode records everything in the class – there is now evidence with which to measure this performance.)
Publication in a journal is one of the oldest methods of disseminating knowledge among peers, and remains the cornerstone of output measurement even today. Faculty in research universities are expected to produce at least one publication a year in an indexed journal (indexed in curated abstracting and indexing databases) – and this is measured against the number of lecture hours per faculty member and the salary from state or other funds, in order to ascertain the “costs” of research and teaching. It is because of the low output from teachers, despite the national investment in them, that proposals to divide the teaching community into “teachers only” and “researchers only” are mooted occasionally.
Whether this assessment and scrutiny of output should be quantitative (machine-and-metric driven) or qualitative is an altogether different issue, as nations across the world have discovered, but no academic system can be devoid of a rigorous process of scrutinising the output, precisely for the reasons outlined: the output is accountable for the funds and privileges that have made it possible. Research that is manipulated in terms of data, plagiarised or misrepresented would then be a clear violation of the principles of academic accountability. (India has the largest number of academics who publish in predatory journals – we reached this milestone three years ago.)
Accountability here is the indispensable openness to scrutiny and examination by the public and/or its approved bodies and authorities – an openness academics demand of everyone else. Whether this necessary openness implies a willingness to be ranked in competition for funding and other opportunities, with other HEIs, is a further point in the debate.
Hierarchizing performance accountability
That said, the performance accountability regime, as argued elsewhere, needs to begin at the very top, with Selection Committee members, senior professors, Vice Chancellors, Deans and Heads, so that the greater the salary, benefits and other privileges one accrues, the greater should be the performance accountability. So: what research output has come from our professors? (And to say, as Indian academics do, that “I have served on 72 Committees” is perhaps not a measure of academic performance but rather of an amenable and easily persuaded nature!)
The competition between HEIs produces its own dynamics of accountability as every HEI, seeking high-enrolment, international collaboration, global reach, rankings, funds from assorted agencies, publication and patent triumphs, seeks to better itself in relation to other HEIs. This is a hierarchisation of performance accountability.
Benchmarking performance through systems such as QS, NIRF and NAAC, HEIs now function like any organisation faced with competitors, whether they choose to or not. In many cases, with the mushrooming of HEIs across the nation, enrolment in certain disciplines falls, and this raises debates about disciplinary viability, excess faculty strength and the like. Statistics of enrolment decline have caused major universities and colleges in the US, for instance, to worry about income and continuing funding.
Financial and academic accountability, when placed side by side, sometimes run into choppy waters. Studies such as those by Academic Analytics have found that “the top 20 percent of faculty produce over half of the scholarly output, while the bottom 20 percent contribute virtually nothing” (Lawrence Martin in Accountability in American Higher Education, 2010). Under such circumstances, would financial accountability for salaries, hierarchically set, be aligned with academic accountability for incentives to the top 20%? Is such a hierarchisation based on one performance parameter an index of the destruction of the academic idyll?
In order to fit in with the performance accountability regime, universities stick to the established routine (to return to King again). Hence they are averse to risk-taking and to trying new areas of inquiry in which performance and results may not be immediately forthcoming. This hampers frontier work, risky experimental pedagogies and research, or “enterprise and innovation”, as King puts it. How do an HEI and its regulatory bodies evolve performance accountability measures for frontier fields and innovation? With rankings now devoted to innovation even in India, the creation of Innovation Councils in HEIs, and the bestowing of a degree of autonomy on select institutions in the form of Graded Autonomy, Institutions of National Importance and the Institution of Eminence status, there has been a shift towards enabling HEIs to explore change (albeit, it must be said, the work culture is so happy with the status quo that change is actively resisted, except the annual change in the form of increments).
Accountability, transparency, datafication
Some universities have listed their accountability principles in public documents, covering everything from their spending to enrolment, performance evaluations, the career outcomes of their graduates, and the evaluation of new academic programs and academic units. These are attempts to be responsible for the funds they receive and the trust their patrons place in them. The National Education Policy 2020 invokes “accountability” as a key feature, going so far as to state:
“excellence will be further incentivized through appropriate rewards, promotions, recognitions and movement into institutional leadership. Faculty not delivering on basic norms will be held accountable.”
The University Social Responsibility Network (USRN) is another instance of the global emphasis on academic accountability. Several HEIs have adopted, as a consequence, the UN’s SDGs as a part of their ‘challenge-led’ research so as to demonstrate their responsibility and accountability for the funding and privileges they receive.
Admittedly, identifying poor performance, or even malpractice, and making it visible is ignominious and damages the institution and the individual. However, in the age of the evaluative state, all forms of predation – sexual, financial, social – are made public, as a naming-and-shaming mode but also as part of the drive towards a more accountable society and as a cautionary tale.
Demands for such naming-and-shaming have rightly come from academic activists across the world: but then, what stops HEIs from publicising academic predations? Should the stakeholders, including funders and the state, not be made aware of the predations – from malpractices to harassment – by the salaried faculty?
Moving towards accountability then implies a shift towards greater transparency as a response to the generic question: what do faculty do? That said, the accumulation of data across various performance metrics, including predations, has made HEIs into datafied organisations as never before.
Performance accountability that has devolved into datafication systems is an attempt to erase human intervention. Such a move towards datafication, however, leads one to assume that we now possess clear insight into otherwise immeasurable qualities. In the opening pages of Technologies of Speculation, Sun-Ha Hong writes:
“The moral and political question, then, is not simply whether datafication delivers better knowledge but how it transforms what counts in our society: what counts for one’s guilt and innocence, as grounds for suspicion and surveillance, as standards for health and happiness.”
The above comment applies to performance accountability regimes as well. While we acknowledge that accountability cannot be done away with – and no HEI member can refuse scrutiny – what counts as goals for performance, and the data to be collected for them, must first be thrashed out. Until then, well, there is Retraction Watch, iThenticate and peer review.
Except for those who declare they are peerless.
Pramod K. Nayar teaches at the University of Hyderabad.