The 37th Dr. Ramanadham Memorial Meeting, organised by the PUDR, has brought us all together this evening. It is a pleasure to be in the company of Jinee Lokaneeta, whose book Truth Machines delves into the intersection of science, criminal investigation, and state violence. It poses troubling questions about how lie detectors and narco-analysis offer a pseudo-science that appeals to the biases of investigating officers, prosecutors and even our judges. My own work, first in private practice and then for broader policy change through the Internet Freedom Foundation, centres on the effects of technologies on human freedom and social justice.
Based on my working experience, I intend to explore the ways in which ‘Digital India’ has transformed the landscape of policing and criminal justice in Delhi. I believe I can speak with some confidence, if not authority, given that it is the city of my birth. Delhi has been my home for the past 38 years, a place that has given me tough love and always challenged me to grow and adapt. As I’ve grown older, I’ve witnessed firsthand the transformation of the city’s landscape, from the construction of new buildings and roads, to the changing state of parks, even how the Ashok, Gulmohar and Jamun trees that dot the streets have grown, shaped and wilted. One development that has caught my attention in recent years is the proliferation of CCTV cameras on our streets, often mounted on trees. Quite often I have noticed someone standing under their gaze, held in the hypnosis of a smartphone.
Understandably, many find such forms of surveillance reassuring: it offers comfort against the constant anxiety of an unsafe and aggressive city. Yet when I encounter everyday surveillance, try as I do to reconcile myself to it, I can’t help but feel uneasy. Our chief minister has stoked this populist sentiment with billboards announcing CCTV deployment. In December 2021, he announced the installation of 140,000 cameras, boasting that “[w]ith 4-megapixel resolution and night vision capabilities, these cameras are meant to increase the sense of security in the city. We are far ahead of London, New York, Singapore, and Paris, to the point that there is no comparison.” It leads me to question whether this massive deployment of digital technology truly makes us safer, or whether it simply gives us the illusion of safety. From a project development perspective, for a poor country like India, is this expense justified? Or, an inquiry more concerned with today’s meeting, does it offer us a chance at justice?
“What do judges know that we cannot tell a computer?”
I believe some answers may be found in trends that emerged two decades ago, at a time when “penetration of Broadband, Internet and Personal Computer (PC) in the country was 0.02%, 0.4% and 0.8% respectively at the end of December 2003.” At this juncture, the Justice Malimath Committee on Criminal Justice Reforms identified two major issues facing the criminal justice system: a high volume of pending criminal cases and long delays in their disposal, as well as a low rate of conviction for serious crimes. It advocated for the use of technology, noting, “If the existing challenges of crime are to be met effectively, not only the mindset of investigators needs a change but they have to be trained in advanced technology”. Its other recommendations included that the police “develop and share intelligence tools and databases, which would help investigation and prosecution of cases.” Its concluding recommendation on the last page ends by stating, “Society changes, so do its values. Crimes are increasing especially with changes in technology.”
A few years later, the Madhava Menon Committee’s report on a Draft National Policy on Criminal Justice provides a clearer view of the anticipated impact of digital technologies in policing. The report contains a separate chapter on “electronic surveillance and use of scientific evidence” and notes, “Developments in Science and Technology (S&T) have both positive and negative implications for the crime and justice scenarios. S&T can help solve more efficiently the problems and challenges of crime particularly those perpetrated with technological tools and devices.” This framing of technology as both an opportunity and a risk highlights the role of the state, including the police and judicial authorities, as the primary institutions but also the stakeholders in this issue. It also implies a bias in favour of technology, a form of determinism in the sense that the adoption of technology is assumed to impart objectivity, efficiency and effectiveness to the processes of criminal investigation and trial. I believe such a framing lacks foresight on how technology may itself be used to deepen police abuses and erode democratic freedoms. I wonder if these committees headed by legal luminaries might have benefited from a provocation by one of the founders of the field of Artificial Intelligence, which captures the logical conclusion of a technology fetish. As John McCarthy provocatively inquired, “What do judges know that we cannot tell a computer?”
Policy documents do not turn by themselves into practised realities. Digital technologies for policing could not be provisioned without their development and availability. Answering, and at times prompting, this need, private industry stepped up during the period. Manisha Sethi reports this in rich detail, describing the surveillance technologies offered by national and international vendors at the Homeland Security Fair organised in Greater Noida. She further notes that the principal response of the FICCI Task Force to the tragic Mumbai terror attacks in 2008 was “invasive technology”. As per FICCI and Ernst & Young, the foundation for smart and safe cities is “totalising surveillance technology including centralised data system, GIS-based analysis and reporting, and the ubiquitous CCTV camera.” Today, this technology determinism has only accelerated, with phrases such as “predictive policing” entering the policy lexicon. But before we step into these magical wares, I believe it is important to consider the use of digital technologies more widely from the site of violence, for it closely intersects with policing responses.
More toxic than the Ghazipur landfill
Here, the lawlessness that occurred in North East Delhi between February 23 and 26, 2020, serves as a case study of the role of digital policing and criminal justice. As per a Right to Information (RTI) request answered by the Delhi Police, it resulted in 53 deaths and 581 injuries, with 754 FIRs and 1,369 arrests. The high teledensity in Delhi, with an estimated 2.8 connections per person, meant that many residents were likely using social media platforms such as Facebook and WhatsApp at the time of the incident. It is important to understand their role in the cycle of hate speech and provocation that led to the events of February. A fact-finding report by the Constitutional Conduct Group concluded that “the use of social media platforms are often part and parcel of episodes of violence, whether through spreading false rumours, circulating offensive inciting tropes, or facilitating the conduct of violent acts.”
I posit that social media serves as the central hub, a backbone, for a ubiquitous digital media ecology that shapes the experiences of many individuals in Delhi. To fully comprehend the cycle of hate speech and provocation that culminated in the events of February, it is crucial to grasp this broader digital media ecosystem, which is rapid, ever-present, and influences our thoughts. It is driven by constant connectivity and algorithms that hack away at our cognitive biases, creating factions and tribalistic tendencies to serve a reality closer to our imagination. Here, powerful tools of both speech and censorship, legal and technical, constantly curate the information we consume. It cocoons our understanding. Such is its hypnotism, and such the forced need for social participation, that even those who clearly see its harms cannot help themselves. They may refer to it as a “hell site” but each day they will choose to be its resident. Somewhat similar to an urban waste landfill like Ghazipur. Constantly ablaze and releasing a toxic fume. Yet, we cannot help but refresh our feeds and touch the smartphone every few minutes. Almost like a nervous tic.
The flood of digital media we encounter is, paradoxically, also rationed as per qualitative assessments. Here, especially in post-colonial societies such as India, censorship continues to control the medium. This can take the form of absolute measures that cause complete information blackouts. These are commonly referred to as internet shutdowns and are the most severe form of internet censorship. They are practised frequently in India, not only in border states but even in the national capital. For instance, while the state did not shut down the internet during the North East Delhi riots, the Delhi Police did shut down the internet during the farmer protests on Republic Day. Evidence is often disregarded here, and contrary to their intent, internet shutdowns may be counter-productive during periods of mass violence, as research suggests that “violent mobilisation seems to grow in intensity during blackouts.” Hence, it may be important to ask: who do such blackouts protect? Beyond their questionable utility, legality is absent. Transparency measures directed by the Supreme Court in Anuradha Bhasin vs Union of India, requiring internet suspension orders to be published and indexed, remain unfulfilled. The Parliamentary Standing Committee on IT recorded that the government reportedly imposed 518 internet shutdowns in India between January 2012 and March 2021, the highest number of internet blockings in the world. However, there is no mechanism to verify this claim, as both the Department of Telecommunications and the Ministry of Home Affairs do not maintain records of internet shutdown orders issued by the states. It leads me to wonder: is the statistic too embarrassing to be officially counted?
Another form of online censorship is the blocking of web content, which becomes inaccessible to internet users in India. These directions are often made in secret, and entire accounts are blocked rather than individual pieces of content, which prevents any semblance of natural justice. Just take the instance of Twitter, where the Government directed the blocking of only 8 URLs in 2014, but in 2020 and 2021 directed the blocking of 2,731 and 2,851 accounts respectively. These blocking regulations were first challenged before the High Court of Delhi and are now pending before the High Court of Karnataka. Website blocking is routinely exercised as a policing power for reasons extraneous to the reasonable restrictions under the Constitution of India. For instance, the Delhi Police has sent notices under laws such as the Unlawful Activities Prevention Act (UAPA) that caused domain registrars to block the websites of young environmental campaigners. The crime of many of these teenagers was to facilitate emails to the Ministry of Environment & Forests against the dilution of environmental impact assessment norms. After lawyers at IFF provided them representation, these notices were recognised as erroneous and the references to the UAPA were attributed to a typographical error.
Is it possible to stop and think before we like, share and subscribe?
The third is probably the most voluminous and significant form of online censorship. It is directly implemented by social media and messaging platforms such as Facebook, Instagram, Twitter and YouTube. When they permit, prefer and prohibit content based on a labyrinth of backend choices in their business and platform policies, they determine what we get to perceive, think and react to. This is implemented through platform features that provide the architecture of our digital environments. Once within these worlds, our interactions are processed by artificial intelligence, utilising technologies such as Natural Language Processing (NLP) models that may conduct content moderation. However, I believe what is more important are human reviews and the ability for discretionary decision-making.
Much of the content moderation by online platforms may be argued to be desirable, for it improves our media engagements by showing us content relevant to our interests, or prevents prohibited speech such as child sexual abuse material and racial and religious threats. But this is an imperfect system, with wide discretion for content moderation and policy teams. In weak rule of law societies rife with communal tensions, it can make decision-making susceptible to extraneous influences. For instance, as per the Wall Street Journal‘s exposés of internal correspondence at Facebook, its India policy team prevented the application of its content moderation policies, citing staff safety and business impacts, even in instances of hate speech. It is not without reason that the Delhi government, through the Committee for Peace and Harmony, inquired into Facebook’s role in the North East Delhi violence. Its summons was disputed by the chief executive of Facebook before the Supreme Court, which upheld the committee’s power to direct attendance. However, after much legislative theatre, with its proceedings streamed live on YouTube and even on Facebook, little has been achieved. When they finally appeared, Facebook’s representatives were evasive, and the Committee has been inactive since 2021. Today, its chairperson is a Member of Parliament, and hence it will need to be reconstituted. Nothing tangible has been achieved in terms of actual transparency or accountability; the more probable outcome has been a clearer limitation on the powers of the Delhi assembly due to a Supreme Court judgment. We have largely witnessed political showboating and some level of public awareness due to press reports and live streams.
While the role of social media in fomenting civil division is globally recognised, it bears repetition that it also presented a valuable opportunity for volunteer groups to provide aid to those affected by the violence. I remember answering a late-night call from a young organiser on Twitter and joining a group of lawyers in visiting a makeshift relief centre near ITO. There, we coordinated the distribution of food and medical supplies to families living in areas under curfew. It was our hope that, as lawyers bearing parking stickers on our cars, we would be granted safe passage to deliver these supplies to the localities or, at the very least, to the local police stations. I mention this since it’s easy to fall into the trap of demonising social media, but as Facebook whistleblower Frances Haugen has reminded us, “a safer, more enjoyable social media is possible”. The advance of human societies is based on fraternity and kindness. However, the current incentives for social media companies are centred around power and profit. Despite this, I saw firsthand in 2020 how a rag-tag group of volunteers was able to harness the power of social media for the betterment of their fellow citizens, raising funds for victims of the violence. Unfortunately, even these efforts were not immune to the divisive forces of communalism, with fundraising divided along religious lines.
Public awareness brings me to the fourth important pillar that can manufacture distraction and division through information floods. As Zeynep Tufekci notes:
“In the twenty-first century and in the networked public sphere, it is more useful to think of attention as a resource allocated and acquired on local, national, and transnational scales, and censorship as a broad term for denial of attention through multiple means, including, but not limited to, the traditional definition of censorship as an effort to actively block information from getting out.”
Here certain platform features, like an automated news feed, or specifically the way Twitter Trends surface hashtags based on the number of users posting them, can lead to public discourse being manipulated. During a riot, trending topics that call for violence feed a media hysteria that is often manufactured through a mix of well-funded automated and human interventions, driven by political parties and their associates to cause social polarisation. Some of the tweets from these topics are pasted into online collaboration tools such as Google Docs, then sent as templates to groups made on messaging platforms such as WhatsApp and Telegram. This cross-platform flood invariably enters our smartphones and, through a persistent ping, draws us into a communal echo chamber. Here, even for passive participants, the majority of the citizenry that calls itself apolitical, there is little time to pause and consider. As Hannah Arendt put it, “any relentless activity allows responsibility to evaporate. There’s an English idiom, ‘Stop and think’. Nobody can think unless they stop….. It [responsibility] can only develop in the moment when a person reflects—not on himself, but on what he’s doing.”
In this relentless flood, the nature and scale of manufactured content are supported by structural forms of power: pulpits, legal disorders, private profit and technical measures. These quite often end up promoting a baser, tribal form of civic engagement that advocates social aggression, as noted by the fact-finding report of the Delhi Minorities Commission: “Many of the perpetrators live-streamed the attacks on social media, and uploaded videos of themselves committing violence. They were all doing it all in the open and proudly.”
For me, the fifth primary path by which the online information ecology is curated is through legal threats under civil and criminal laws. This is specifically addressed to those who believe that the harms of social media will be solved by a legislative tonic of harsher laws and penalties on people and even platforms. Just look at our existing experience. Today, complaints and cases are filed by private individuals, by semi-public personalities that act as nodes, and even by police departments. This may lead to prosecutions and the suppression of critical reporting. Here, a weak rule of law framework and substantive provisions such as sedition, or even obscenity and hate speech, provide enough grounds for police departments to demand censorship and effect arrests. Today, a large chilling effect hangs like the Delhi smog over human rights defenders and local journalists. They are often tagged on the social media handles of police departments by online trolls and even everyday private individuals who may or may not enjoy different forms of political patronage. Censorship is distributed in Digital India, and everyone has the right to feel offended. Hence, the internet speech and censorship apparatus not only violates natural justice and the public’s right to know, but, more importantly, during civil unrest it prevents accountability for state responses, documentation of violence and coordination of relief.
The internet is also a Borg-like medium that assimilates other forms of media. For instance, in the wider media ecology, slivers of propaganda often masquerade as news to amplify and deepen social divisions. Clippings of news anchors who are more akin to stage actors, mixed with gotcha memes and a musical score, flood social media. The shows may themselves misrepresent innocuous private conversations exchanged on encrypted platforms such as WhatsApp, which may even be part of case evidence. When confronted with the journalistic ethics involved, they preemptively invoke the defence of not shooting the messenger. Only, the messenger is no longer the carrier pigeon but a vulture. This completes one of many cycles in our media ecology. Many different forms of such cycles exist, and they buffer continuously within our smartphones.
Data maximisation and constitutional contraction
In this circular exchange, data packets are continuously transmitted and, just like a letter, each requires identification: who sends it and who receives it. Today, however, the more important feature of information exchange is who surveils, intercepts, stores and analyses it. The massive expansion of digital surveillance is based on the foundation, or to be cheeky, the “Aadhaar”, of databasing. State databases that store our personal details can exist in several forms. Without our consent, maybe even without our knowledge.
Experience has shown that in incidents of communal violence, the use of personal data, even data collected for a socially beneficial purpose, does lead to the targeting of specific groups and communities. Take the morbid example of the 1984 anti-Sikh pogrom in Delhi, in which lists of residents in localities were made using voting rolls for the Delhi Gurdwara elections. Today, technology and regulatory frameworks have failed to learn from the past; instead, to use the grammar of start-ups, they have scaled up and reduced friction for personal identification. Well-developed policy frameworks exist to create databases at the state and central levels in the absence of an enforceable data protection law. These databases are created for a wide range of functions: to provide welfare and state services, improve administrative efficiency, advance a security apparatus and extract economic value. Such imperatives may begin as distinct attempts to gather and build independent databases, but they eventually converge through the bundling and collation of personal data. Many of these databases talk to each other, that is, they share data through common identifiers such as names or the Aadhaar number, which policy imperatives often frame as an effort to break “data silos”. Often this happens in the absence of any law authorising the collection of personal data. Instead, non-statutory policy frameworks hold ambitions to achieve 360-degree surveillance of Indian citizens. Today, with the embrace of “data maximisation”, there is greater belief in computer programming languages than in constitutional precedent and institutions of governance. After all, we have been told that the government should behave more like a start-up to solve India’s “grand challenges”.
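Since this is a speech rather than a technical manual, a small illustration may help show how little engineering the breaking of “data silos” actually requires. The sketch below is entirely hypothetical: the database names, identifiers, fields and records are invented for illustration. It shows how two databases built for unrelated purposes, once they share a common identifier, can be collated into a single profile:

```python
# Hypothetical illustration: two independent databases that share a
# common identifier can be trivially linked into one profile.
# All names, IDs and fields below are invented.

vehicle_db = {  # e.g. a transport registry
    "ID-1001": {"name": "A. Kumar", "vehicle": "DL-8C-1234"},
    "ID-1002": {"name": "B. Singh", "vehicle": "DL-3C-5678"},
}

welfare_db = {  # e.g. a welfare or ration registry
    "ID-1001": {"address": "Street 4, Locality X", "household_size": 5},
}

def merge_profiles(*databases):
    """Collate records across databases keyed by the same identifier."""
    profiles = {}
    for db in databases:
        for uid, record in db.items():
            profiles.setdefault(uid, {}).update(record)
    return profiles

profiles = merge_profiles(vehicle_db, welfare_db)
# "ID-1001" now carries name, vehicle, address and household size:
# a fuller profile than either database held, assembled without any
# fresh data collection or consent.
```

The point of the sketch is that once a common identifier exists, no new surveillance apparatus is needed; collation is a few lines of code, which is precisely why the legal authorisation question matters more than the technical one.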
Sticking to our case study, let me explain this in the context of the violence in Delhi. One of the most prominent instances of the government selling citizens’ data to the private sector has been through the Bulk Data Sharing Policy of the Ministry of Road Transport and Highways. The policy gave licences for vehicular data held in the Vahan and Sarathi databases, which contain information on 25 crore vehicle registrations and about 15 crore driving licences. In July 2019, answering a question posed in the Upper House of Parliament, the Union Minister for Road Transport and Highways revealed that the government had earned Rs 65 crore by selling access to the Vahan and Sarathi databases to about 87 private and 32 government entities. Little heed was paid to caution or legality: the government did not consider whether such an action required a legislative basis to fulfil the requirement of legality, or whether the sale might constitute a commercial use of public data prohibited by the Puttaswamy II judgment. Should we be shocked, then, that on February 26, 2020, online reports first emerged that the Vahan database was being used to identify the religious identities of persons and their properties? This report was later confirmed by the Delhi Police. After activists and even my own colleagues wrote to the ministry, the Apex Committee met on June 4, 2020 via videoconference and decided to scrap the policy. The minutes of the meeting noted, “[T]here have been certain issues received in regard to the sharing of data in public, and whether bulk data shared with the stakeholders can be misused.”
Again, did we learn from this? I believe not. Last year, the Ministry for Electronics and IT released the India Data Accessibility and Use Policy, which, when originally put to public consultation, mentioned the sale of both personal and non-personal data. It was accompanied by a background note that uses a $1 trillion target to justify the free sharing of data within government and its enrichment, valuation and licensing to the private sector. What will be the form of this data? While the policy does not specify it, there is some indication in the National Economic Survey, 2019, which devoted an entire chapter to the economic potential of data and stated in its summary:
“Governments already hold a rich repository of administrative, survey, institutional and transactions data about citizens, but these data are scattered across numerous government bodies. Merging these distinct datasets would generate multiple benefits with the applications being limitless … The private sector may be granted access to select databases for commercial use … Given that the private sector has the potential to reap massive dividends from this data, it is only fair to charge them for its use.”
In this instance, doing the same thing and expecting different results is not a form of insanity, but a form of technology determinism. Today data sharing policies have been made for both state and the private sector by at least six state governments. I would like to re-emphasise that this is without any legislative authority and in the absence of a data protection law.
Data sharing frameworks and welfare databases such as Vahan and Sarathi play a visible but small part in the larger surveillance glacier. Let us begin with the National Intelligence Grid, better known as NATGRID, which will allow user agencies to access data gathered from various databases such as credit and debit cards, tax, telecom, immigration, airlines and railway tickets, passports and driving licences, among others. It is being developed as a measure to help security agencies such as the Central Bureau of Investigation and the Research & Analysis Wing in tackling crime and terror threats in the country. Then, for communications surveillance, there is the Centralised Monitoring System (CMS), an ambitious surveillance system that monitors text messages, social media engagement and phone calls on landlines and cell phones, among other communications. It has been set up by the Centre for Development of Telematics and is operated by the Telecom Enforcement Resource and Monitoring cells under the Department of Telecommunications. A truly Chinese form of surveillance is also being attempted with the National Automated Facial Recognition System (AFRS), being developed by the National Crime Records Bureau under the Ministry of Home Affairs. The project aims to develop a national database of photographs to be used in conjunction with a facial recognition technology system by Central and State security agencies. This is layered with information exchange frameworks such as the Crime and Criminal Tracking Network System (CCTNS), which aims to connect police stations across the country to increase ease of access to data related to FIR registration, investigation and chargesheets in all police stations.
What these projects have in common is not only a growing state expenditure for their creation and operation, but also the absence of any underlying legal framework. These surveillance projects cannot even be evaluated against the prongs of the Puttaswamy judgment, such as a proportionality analysis, for they lack a legislative basis. It’s a sardonic but sorrowful state of affairs that reminds me of the “roll safe” internet meme, in which a person holds a finger to their head while flashing a fool’s grin. It is disappointing that their unconstitutionality is confined to legal analysis in speeches such as this one, or in reports by researchers, rather than in the judgments of our courts. Given the political realities of today, it may deepen our despondence that even when a legal framework is eventually passed, it may permit uncanalised powers, as under the Criminal Procedure (Identification) Act, 2022 – laws that convert the de facto into the de jure and make things worse. Presently, proposed enactments such as the DNA Bill provide for mass databasing; the Digital Personal Data Protection Bill contains a compromised regulatory body and wide exemptions for public authorities; and the Telecom Bill increases executive powers and evades any parliamentary or judicial oversight of surveillance powers. But what does this mean in the context of the violence in North East Delhi?
Software does not view religion
The Union Home Minister, speaking during a short duration discussion in the Lok Sabha on March 11, 2020, stated, “[T]hrough face identification software, it starts the process of recognizing all the faces. This is software and it does not recognise religion and attire. We have put voter ID card, driving licenses and government data inside this software. Through it, more than 1,100 people have been arrested out of which 300 are from Uttar Pradesh.”
The statement is substantiated by the Delhi Police’s Annual Report for 2020, which takes the case of one “Amrudeen” and shows how a CCTV image was put through a facial recognition system and then matched against a driving licence and criminal record that contained his photograph. Further screenshots within the report state that a rioter was identified on the basis of the clothes he wore – a yellow-blue jacket. In their own words:
“945 CCTV footage and video recordings were obtained from multiple sources, including CCTV Cameras installed on the roads, video recordings from smart phones, video footage obtained from media houses and other sources were analyzed with the help of video analytic tools and facial recognition systems. The photographs were matched for multiple databases, which included Delhi Police criminal dossier photographs and other databases maintained with the government. This helped identify persons involved in riots, which proved helpful in taking legal action after corroboration with other supporting evidence. Delhi Police also extensively used Artificial Intelligence (AI) based technology for the enhancement of CCTV images for better identification of rioters. The e-Vahan database and the driving license databases were used for further identification.”
Given that the basis of identification is facial recognition software, an automated process, it becomes important to inquire into its reliability. There exists no public source code repository, independent audit report, or explanation of the AI tools. As per an RTI response, the Delhi Police answered that it considers an 80% match as positive for the purpose of identification. It is also important to note that such transparency was resisted until litigation before the Central Information Commission resulted in a reprimand and then disclosure. It is another matter that in 2018, the Delhi Police stated before the High Court of Delhi that the accuracy of the system was as low as 2% for identifying missing children, where it could not even distinguish boys from girls. These flawed, probabilistic digital systems are often used as the foundation for criminal investigations and can perpetuate social biases. The legal hazards become even more pronounced when digital tools have a high saturation in areas with larger minority populations, as historical data shaped by policing practices records these areas as having a higher incidence of crime. As researchers from the Criminal Justice and Police Accountability Project remind us, what we are witnessing today are settled habits and new tricks.
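To make the 80% figure concrete, here is a hypothetical sketch of the decision logic that threshold-based matching typically follows. All names and similarity scores are invented; real systems compute similarity between face “embeddings”, but the shape of the decision is the same: every database photograph receives a similarity score against the probe image, and anyone crossing the cutoff is flagged.

```python
# Hypothetical sketch of threshold-based face matching.
# Names and scores are invented for illustration only.

gallery_scores = {            # similarity of each database photo
    "person_A": 0.93,         # to the CCTV probe image (0 to 1)
    "person_B": 0.84,         # a look-alike, not the same person
    "person_C": 0.41,
}

THRESHOLD = 0.80              # the reported "80% match" cutoff

def flag_matches(scores, threshold):
    """Return everyone whose score crosses the cutoff, best first."""
    hits = [(name, s) for name, s in scores.items() if s >= threshold]
    return sorted(hits, key=lambda pair: pair[1], reverse=True)

matches = flag_matches(gallery_scores, THRESHOLD)
# Both person_A and person_B are flagged. The system cannot say
# which, if either, is the rioter; it only reports similarity.
```

The cutoff is a policy choice, not a scientific constant: lower it and the net of suspects widens; raise it and matches are missed. In neither case does a flag, by itself, amount to evidence of identity.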
But, as the Delhi Police has stated, facial recognition is only one among many pieces of “other supporting evidence”. Would it include videos from the drones that were used? Membership of WhatsApp groups and the chats there, which can be taken contextually? Google Maps data, and even conventional call data records? What was achieved through the combining of such granular digital evidence?
Let us refer to two pre-existing analyses. First, “according to Millennium Post‘s review of 100 bail orders of the total over 3,500 bails granted so far, the police had cited CCTV and video footage in at least 44 cases to back their allegations against the accused, of which in 32, the video footage did not stand up to basic judicial scrutiny and the accused were granted bail.”
Second, as per an Article 14 report, ‘In a series of recent cases, various Delhi courts have granted bail, while holding the police responsible for “vague evidence and general allegations,” a “shoddy probe”, “absolutely evasive” and “lackadaisical” attitude; the police have been accused by various courts of investigations that are “callous”, “casual” and “farcical”, “poor” or “painful to see”.’ Hence, all these advanced video recording and facial recognition technologies have provided neither truth nor justice, but have led to undertrials serving long periods of imprisonment.
Even during the conduct of these bail proceedings, technology has played a vital role in appealing to our biases. There have been allegations of selective, advance leaks of chargesheets and confessional statements that have fed a digital media cycle and thereby prejudiced court proceedings. The prosecution has often relied on detailed PowerPoint presentations, rather than credible evidence, to demonstrate the thoroughness of police investigations. After all, and I repeat, “What do judges know that we cannot tell a computer?”
Delhi is not the endgame
Deservedly, for all its privileges, much of India, including many who live in it, is rarely fond of Delhi. For most, it has a “bad reputation” – but to borrow from Taylor Swift, maybe it only has a “big reputation”. Yet it is not the only city in India impacted by a digitisation and policing project that is systemic and institutional. It cuts across the political spectrum, as technological determinism is deeply ingrained in our cultural and social values.
On a national level, the Women and Child Development Ministry recommended that Rs 2,919.55 crore from the Nirbhaya Fund, created in the aftermath of the horrific Delhi gangrape, be used for the Safe City Project proposed by the Ministry of Home Affairs for CCTVs. Further, the Standing Committee on Home Affairs, in its report on ‘Police – Training, Modernisation and Reforms’ dated February 10, 2022, advised that the “MHA may incentivise states to leverage technologies like artificial intelligence and big data for policing”. The proceedings of the committee make for interesting reading, with each city and state police boasting about its use of personal data lakes and artificial intelligence. The effects of such policy nudges are evident across the political spectrum. The Hyderabad Police has constructed a 20-storey command and control centre that houses a Chinese form of digital surveillance fed with data gathered from cordon searches. Even the Kolkata and Chennai Police departments are implementing facial recognition systems and populating them with images of persons they deem suspicious. This is in effect nothing more than a form of general warrant, prohibited by a substantive reading of the Code of Criminal Procedure and Supreme Court precedent.
The outcomes, if any, are frightening and clear to me. First, as Jinee’s work shows, our criminal justice system is drawn to pseudo-sciences like lie detectors and narco-analysis that confirm the biases of society and of the system itself. I fear that with digital databasing and identification technologies, the entire chain of evidence may be artificially generated. Each layer in this stack of probabilistic technology may serve an inductive form of reasoning that will be accepted by our courts. I hope to develop this idea further in future.
Second is the larger social consequence of digitisation. It inverts the already fraught relationship between the ordinary citizen and the police. The digitisation of our police services is occurring without its proper benefits being implemented. For instance, the Supreme Court in Shafhi Mohammad vs State of Himachal Pradesh and its progeny directed the installation of cameras at specific locations inside police stations, including the entrance/exit, visiting areas and the lockups. However, as per the Delhi Police, the CCTVs in Delhi do not have any audio recording facility, and as per the India Justice Report, one in three police stations is yet to get a single CCTV camera. In many ways, technology follows a path of power resembling a panopticon, where accountability and visibility fall on the subjects rather than on those who guard the watchtower. At the same time, evidence continues to be illegally gathered with little or no consequence. Hence, it should not come as a surprise that we today have allegations of the use of malware such as Pegasus and Netwire.
Finally, where does this leave us? Alarm? Lament? Resignation? As Andre Gide has been quoted by Kannabiran in The Wages of Impunity and in the preface of the Justice Malimath Committee’s report, “Everything has been said already; but as no one listens we must always begin again.” I believe many of the outcomes we have searched for in technology rest in the hard structural reforms that have not been carried out legislatively post-independence. India is still transitioning from a colonial state to a constitutional democracy, and in many ways is not yet one. We must study the expert reports that distil learnings from decades of policing in democratic India. There must be a focus on implementing Supreme Court judgments such as Prakash Singh for the constitution of accountability committees. There must also be a reckoning with how department budgets are allocated towards digital vapourware rather than towards hiring more, and better training, police personnel who work under terrible conditions.
This path is long, and the outcomes will come only through sustained advocacy. Even then, they will come only as a trickle. But I do maintain a sense of pragmatic optimism. Annual gatherings such as today’s give us a chance to reflect. They add to public awareness and, in some small measure, provide a convening to question the fetish for digitisation in policing. I believe we have the power to cause positive change once we know better. This too is a form of blind faith – a determinism about the human capacity to uplift itself. After all, it may be better to believe in the transformative power of human empathy than in any form of artificial intelligence. Maybe this is also why, despite it being a dystopian hellscape, I continue to love the city of Delhi.
This is the text of the speech delivered by Apar Gupta at the 37th Annual Dr. Ramanadham Memorial Meeting on January 14, 2023 at Gandhi Peace Foundation, New Delhi.
Apar Gupta is the executive director of the Internet Freedom Foundation.