Hours before May 26, 2021, the day on which the new Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules 2021 came into force, there was a slew of media reports and posts on social networking sites and messaging services that platforms such as Twitter, Instagram and Facebook would be banned the next day.
Aside from some Indian platforms like Koo, most large social networking sites had not complied with the requirements under the IT Rules, 2021 by that date. In the days since May 26, some platforms have publicly announced that they are taking steps to comply with the new rules, while at least one platform has challenged the constitutional validity of some of their provisions.
With much of India’s rebranded transactional foreign policy likely to be focussed on strengthening the US-India relationship for some time – at the time of writing, Indian foreign minister S. Jaishankar is in the US to discuss, among other things, vaccines for India – it is unlikely that any immediate bans will arise from this non-compliance by American Big Tech companies.
However, the IT Rules achieve their primary objective: to hand the Indian state a vital legal weapon in its gradually escalating battle with Big Tech companies. The speculation around the banning of platforms, and the ongoing showdown between Twitter and MEITY, are but important side plots to this larger story.
Taming Big Tech
Since the revelations about Cambridge Analytica’s use of Facebook to profile and manipulate users with political content, the Indian government has been engaged in a series of ad hoc communications with large Internet intermediaries. In July 2018, IT minister Ravi Shankar Prasad, in a speech in the Rajya Sabha, warned that social media platforms could not “evade their responsibility, accountability and larger commitment to ensure that their platforms were not misused on a large scale to spread incorrect facts projected as news and designed to instigate people to commit crime.”
More ominously, he said that if “they do not take adequate and prompt action, then the law of abetment also applies to them”. The minister was speaking in response to the rising incidence of mob lynchings in India, ostensibly occasioned by misinformation inciting violence spread on social media and messaging services. Comparing social media services to newspapers, Prasad further said that when there is provocative writing in a newspaper, the newspaper cannot claim that it is not responsible.
Since that speech, we have seen a variety of policy proposals that ostensibly seek to hold the unbridled power of Big Tech companies to account. These include the strict data localisation requirements under older versions of the data protection legislation, since significantly watered down; proposed requirements under the draft e-commerce policy for companies to give law enforcement agencies speedy access to data, as well as to share data for the development of industry; and proposals for greater access to non-personal data held by intermediaries. The IT Rules, 2021 are the latest addition to this mix of policies.
At the centre of these policy measures is the growing narrative of ‘data colonialism’. Users in the global South generate data, which platform companies analyse and process in their home jurisdictions, reaping its economic dividends while skirting regulatory scrutiny from the other states where they operate. This behavior of Big Tech companies has been likened to that of the private players that served as catalysts of colonialism in the past. However, the ‘data colonialism’ narrative invites suspicion when it is championed by Mukesh Ambani, the richest person in the country, and Nandan Nilekani, the influential tech czar.
On closer inspection, what these policies appear to do is wrest control away from Big Tech companies; but instead of exploring means to redistribute that power to users, they hand greater power to the state and to large local companies.
Such a dynamic is visible in the immediate case: the Indian state has contended that Twitter’s labelling of a BJP spokesperson’s tweet as ‘manipulated media’ would compromise an ongoing police investigation. Twitter’s label appears to rest on a fact-checking website whose investigation suggested that part of the tweet had been manipulated. The lack of transparency about the process Twitter followed to ensure the decision was in line with its community guidelines highlights one of the most important issues we face in content regulation.
Platforms have far too much power, and operate in a state of opacity that prevents complainants and respondents alike, as well as the general public, from understanding how and why they take decisions that have an impact on freedom of expression.
However, the government’s claims also contradict its own regulatory positions. First, while there are legal provisions that allow the government to issue requests for content takedown, there are no legal provisions under which it can seek removal of a label like ‘manipulated media’. Second, when the focus of regulatory efforts has been to impose greater obligations on platforms to regulate harmful speech, it makes little sense to claim that they must always wait for police investigations to conclude before responding to hate speech or misinformation.
The use of visits by the Delhi Police’s Special Cell to deliver notices at Twitter’s offices continues a trend of ad hoc regulatory action untempered by the need for proportionality. It is not uncommon or unreasonable for regulators to demonstrate their enforcement powers as a means of extracting compliance from large companies. However, such actions must clearly flow from the rule of law and procedural fairness, and, to be both fair and effective, should follow a regulatory pyramid of escalating sanctions rather than resorting to the most obvious forms of regulatory intimidation.
The unintended consequences of such rash measures are immense, and the stakeholders who bear their fallout most are ordinary citizens. In this case too, there has been little effort to strike at the root of the regulatory problem: the lack of transparency. The obvious result of such measures will be more risk-averse behavior from platforms seeking to avoid statutory liability and evade regulatory scrutiny, with an adverse impact on your free speech online and mine.
Amber Sinha is the executive director of the Centre for Internet and Society. The author is grateful to Gurshabad Grover for his feedback and editorial suggestions.