A smartphone app used by over 60 million people. Drones in the sky tracking people’s movement and checking their temperature. Facial recognition cameras reporting to the police on whether someone has broken quarantine.
These are some of the ways in which Central and state governments have put technology at the forefront of the efforts against the COVID-19 pandemic in India. It may appear intuitive and appealing, at this time of crisis, to turn to technologies like the internet and artificial intelligence, which have been widely adopted and seen such tremendous social and financial investment in recent years. However, in the haste to deploy these digital solutions, there has been little introspection on implementing the legal and technical frameworks which can ensure that these technologies help, rather than hinder, public health and social trust.
The manner in which the Aarogya Setu app is being deployed is symptomatic of this lack of introspection. Aarogya Setu was designed as a ‘digital contact tracing’ app which can inform users whether they are at risk of COVID-19 infection, helping people self-quarantine and approach public health authorities. However, daily reports are emerging of how this app, which was intended to be ‘consensual’ and voluntary, is now being mandated by the Central government for everyone from government employees to delivery workers and construction workers.
Reports have also emerged of the arbitrary arrest and quarantine of a woman in Mumbai, allegedly based on information gathered from Aarogya Setu. The Central government also has plans to use the application to determine people’s mobility, by issuing ‘e-passes’ on the app, and the CISF has suggested making the app mandatory for travelling in public transport like the Delhi Metro.
This use of digital technology disproportionately affects poor and marginalised communities. In a country where an estimated 65% of the population does not have internet access, let alone a smartphone and constant power supply, making a smartphone app the focal point for determining people’s livelihoods will leave out the millions who cannot rely on internet connectivity or power access. Moreover, ‘social distancing’ is an impossibility for the millions who depend on daily wages for their livelihoods, and enforcing it through surveillance and punitive measures like enforced quarantine will likely compound their difficulties.
In the absence of safeguards, these technologies often make decisions which are both incorrect and difficult to challenge or override. For example, if a person’s ‘health status’ is determined by Aarogya Setu instead of by clinical testing, an erroneous risk assessment can mistakenly subject people to limitations on their movement, possibly depriving them of daily wages, while leaving them with little prospect of understanding or overriding such a decision. This is apart from the fact that the efficacy of ‘digital contact tracing’ has been widely disputed the world over: countries with widespread smartphone adoption like Singapore and Taiwan have cautioned against relying heavily on digital contact tracing without backing it up with widespread testing and human contact tracing.
The same technologies now being encouraged to aid humanitarian efforts have historically been used by governments and corporations alike to enable undemocratic surveillance and control, in a manner which has left people with little control over their data and their lives in an increasingly ‘digital’ world. Incidents like the misuse of Aadhaar data by governments, or the misuse of Facebook data to influence voters, have created a serious crisis of trust in digital technologies.
This crisis of trust pervades and hampers our current ‘technology-first’ efforts to mitigate the pandemic. Lifting a nationwide lockdown and resuming a semblance of social life requires widespread trust and cooperation between and among individuals, communities and the government, particularly the public health system. If, instead, technologies are used to punish and stigmatise individuals, there can be no expectation of such cooperation.
Building and deploying technologies without transparency, or without involving communities in understanding their functioning and limitations, will deepen the crisis of trust between citizens and the government. Similarly, using these technologies to increase policing, surveillance and stigmatisation will mean that individuals may choose to hide their health status or travel history from health authorities, putting themselves and others at risk and ultimately hampering the collective efforts against the pandemic.
Mitigating this crisis of trust requires designing our legal and our technical systems in ways which prioritise democratic control and individual autonomy. Various legal systems are attempting to develop norms around the deployment of these technologies, focussing on building privacy and trust.
The European Union has encouraged transparent, voluntary, decentralised and privacy-preserving mechanisms like the open DP3T protocol, which ensures that the only data gathered by such apps is that which is strictly necessary for individuals to identify whether they have potentially come into contact with a COVID-positive individual, and which allows individuals to determine how to use such information, including whether to share it with public health authorities. The Government of Australia and independent lawyers in the UK have proposed temporary legislation to enhance transparency and trust in the use of COVID-19 surveillance technologies, such as legal mandates which ensure independent oversight of the technologies, that data is used only for public health purposes, and that the surveillance tools are dismantled once the pandemic is over.
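The privacy property that makes decentralised protocols like DP3T distinctive, that matching against infected users happens entirely on a person’s own device rather than on a government server, can be illustrated with a simplified sketch. The key derivation below is an illustrative placeholder, not the actual DP3T cryptography:

```python
import hashlib
import secrets

def ephemeral_ids(seed: bytes, n: int = 4) -> list[bytes]:
    """Derive rotating ephemeral IDs from a private daily seed.
    (Placeholder scheme; real DP3T uses a hash chain with AES-CTR.)"""
    return [hashlib.sha256(seed + bytes([i])).digest()[:16] for i in range(n)]

# Each phone keeps its seed secret and broadcasts only ephemeral IDs.
alice_seed = secrets.token_bytes(32)

# Bob's phone locally records the ephemeral IDs it overhears nearby.
bob_observed = set(ephemeral_ids(alice_seed))

# If Alice tests positive, she publishes only her seed; no location
# data or contact graph ever leaves anyone's device.
published_infected_seeds = [alice_seed]

# Risk matching runs locally on Bob's device, against published seeds.
at_risk = any(
    eid in bob_observed
    for seed in published_infected_seeds
    for eid in ephemeral_ids(seed)
)
print(at_risk)  # True: Bob's phone overheard an infected user's IDs
```

In this design the health authority learns nothing about who met whom; it only relays the seeds of users who voluntarily report a positive test, which is what makes the approach compatible with the voluntariness and data-minimisation norms discussed above.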
In India, at present, there is no framework which controls the use of surveillance or decision-making technologies in this context, particularly when deployed by the police or within government systems. Instead, apps like Aarogya Setu rely on privacy policies which, in the absence of a legal framework, have little legal authority and are difficult for an ordinary citizen to enforce. It is therefore imperative that governments at the state and Central level enact temporary legislation which governs and limits the deployment of these technologies. At the outset, any intervention based on digital surveillance must take into account the limitations of such technologies, and must be deployed strictly within public health systems rather than the security and policing apparatus. Legal frameworks establishing independent and routine audits of these technologies can ensure their transparency and efficacy. A legal framework must also incorporate norms of non-exclusion by ensuring that viable non-digital alternatives exist for any essential and pervasive digital intervention, including the identification of affected individuals for testing or medical intervention, or for controlling movement and access to government services.
Legal frameworks must prioritise transparency by establishing what information about individuals is sought to be collected, and establishing a legal obligation to only use such information within the public health system, which can safeguard against its function creep. Finally, the law must establish the temporality of these measures through ongoing parliamentary oversight or a ‘sunset’ provision, which ensures that the surveillance measures are not continued beyond the period of the pandemic.
Digital technologies have been a tremendous resource for society in this time of crisis – allowing communities to build solidarity and offer mutual aid, and allowing us to continue social ties in the midst of a pandemic and a lockdown. However, we must be vigilant against the misguided reliance on technologies which exclude and which punish, which will imperil not only our responses to the pandemic, but the democratic values we cherish and savour.
Divij Joshi is an independent legal researcher.