A few weeks ago, Bollywood actor Farhan Akhtar called out Twitter’s top management for not doing enough to punish those using its platform to make rape and death threats. Just ten days later, journalist Rana Ayyub wrote a bone-chilling account in the New York Times about the tsunami of violent threats, hate speech and rape threats that she experienced on social media.
The topic of hate speech online is not new – Indian journalists like Swati Chaturvedi, author of I am a Troll, have repeatedly pointed out the existence of organised and paid ‘troll armies’ that are used to attack journalists on social media platforms, using the most vile and disgusting language possible. The atmosphere of hate online ends up creating a sense of fear in the targets of such speech and, in some cases, has a chilling effect on those wanting to participate in online debates.
How do we solve this problem?
Issues like hate speech are complex and there is no silver bullet to cure the problem. In the immortal words of Taylor Swift – ‘Haters gonna hate’ – but that does not mean the law cannot do more to make it difficult for the ‘haters’ to spread their hate. Specifically, we need to debate the current law on intermediary liability, which shields social media platforms like Twitter and Facebook from any kind of liability for the acts of their users. The degree of immunity offered by the law directly impacts the willingness of social media platforms to invest in the resources required to police their platforms. Offer them enough immunity and they will not feel the need to invest in content moderation – take away the immunity and they will have no choice but to invest in better quality content moderation.
The evolution of intermediary liability
The traditional model in the ‘brick and mortar’ publishing world has always extended legal liability to the publisher, the editor, the printer and the writer. The rationale for this model was to penalise all persons presumed to have ‘knowledge’ of the unlawful content, either because of their role in the publication process or because they profit from the said publication. Thus, although only the writer and editor may actually read the content, the publisher and printer are presumed to have knowledge of it because they profit from the sale of the publication.
Internet platforms, also known as internet intermediaries, like Facebook and Twitter, have managed to negotiate different standards of legal liability because the internet is a radically different technology that allows users to publish instantaneously on platforms reaching millions of people. In their early, start-up days, if these platforms had been required to vet each piece of information generated by every user before it was published, they would have had to hire a huge staff, the cost of which would have made their businesses unviable.
The logical middle ground was to create new rules of legal liability for these intermediary platforms, exempting them from liability until such time as they had actual ‘knowledge’ of the illegal content on their platform, usually through a user alerting the platform. Once the intermediary had knowledge of the illegal content, legal liability would fasten onto it as it does onto traditional publishers, subject to a safe harbour period of a few hours, during which time it could remove the content and escape any liability.
The evolution of intermediary immunity in India
Like other countries, India too offered intermediaries limited immunity with the enactment of Section 79 of the Information Technology Act. The original Section 79, as enacted in the first version of the law in 2000, was rather generous to intermediaries, providing them immunity in cases where they had no knowledge of the content or where they had exercised due diligence.
In 2004, the MMS scandal involving the sale of a pornographic clip on Bazee.com led to the arrest of the CEO of the company and prompted a relook at the Information Technology Act. The government constituted a committee to re-examine the law. The committee, which included industry representatives from NASSCOM, recommended that intermediaries be offered immunity for the acts of their users unless there was evidence that the intermediaries had ‘abetted’ or ‘conspired’ with the user, or unless they had received actual knowledge or a government notification about illegal content. This recommendation was accepted by the government, which introduced an amendment bill in parliament in 2006. However, when the Bill went to the Parliamentary Standing Committee for review, it was opposed by the Central Bureau of Investigation (CBI).
In its final report, the Standing Committee recommended a legal requirement for intermediaries to exercise ‘due diligence’ in order to enjoy immunity. The government accepted the recommendation, thereby opening the door to uncertainty: not only did it retain the ‘actual knowledge’ requirement, it also required ‘due diligence’ on the part of the intermediary and, to make matters worse, did not define ‘due diligence’.
This had consequences because some judges subsequently interpreted the phrase ‘due diligence’ to mean pre-screening of all content by intermediaries. Apart from the ‘due diligence’ requirement, the provision also required intermediaries to observe guidelines prescribed by the Central government. As per these guidelines, if a user notified an intermediary like Twitter or Facebook about unlawful content on its website, the platform had 36 hours to remove the content. If the content was removed, the platform escaped liability from any legal proceedings. Alternatively, the platform could continue to host the content if it disagreed with the user’s claim that the content was illegal, in which case it risked being impleaded into future legal proceedings.
The Shreya Singhal judgment and its effect on intermediary liability
This model put in place by parliament changed with the Supreme Court’s judgment in the case of Shreya Singhal vs Union of India (2015). While that judgment is in the limelight primarily because of the infamous Section 66A of the IT Act, it also dealt with Section 79. The constitutionality of the provision had been challenged by the petitioners and, while the Supreme Court declined to strike down the provision, it did ‘read down’ the provision so that internet intermediaries were shielded from liability for failing to take down content unless a court order had been served on them.
By ‘reading down’ a provision, the court preserves the provision but essentially changes the manner in which it is understood. In this case, while a literal interpretation of the provision clearly required intermediaries to exercise due diligence and take down content once they had actual knowledge of it, the court ‘read down’ the provision to mean that intermediaries were required to take down content only when a court had ordered them to do so.
The demand for such added protection for the intermediary was fuelled by the concern that, left to their own judgment, online platforms like Facebook or Twitter would prefer to simply take down any content objected to by a user without actually assessing the matter on its merits. This, according to the critics of Section 79, would lead to the possibility of over-censorship by internet intermediaries. The argument is, however, fallacious because the same logic could extend to publishers and printers in the ‘brick and mortar’ world, yet we hold these publishers and printers to account on the simple logic that people profiting from an act have to bear the consequences of their profit-making venture. This principle of vicarious liability is the foundation of civil liability.
The problem with the Supreme Court’s ruling on Section 79 is twofold. First, the court does not really provide very cogent reasoning as to why it was reading down the provision. Section 79 never criminalised any speech; it only offered limited immunity to intermediaries for speech by their users. Yet the court does not seem to engage with this fact and embarks on the following consequentialist line of reasoning to justify its decision to ‘read down’ the provision:
“This is for the reason that otherwise it would be very difficult for intermediaries like Google, Facebook etc. to act when millions of requests are made and the intermediary is then to judge as to which of such requests are legitimate and which are not. We have been informed that in other countries worldwide this view has gained acceptance, Argentina being in the forefront.”
As is obvious from a reading of the above paragraph, there is no legal reason provided by the court. Inconvenience to Google and Facebook, or the legal position in other countries, is no reason to read down a law passed by a sovereign parliament. Any reason has to be grounded in the Indian constitution.
The second problem with requiring a prior court order for taking down content is that it is quite unreasonable to expect an ordinary citizen in India to file a lawsuit before a civil court every time they want illegal content removed. The cost and time implications of such litigation would deter most citizens from ever approaching a court, and their only other option is to use the tools provided by Facebook and Twitter to remove offensive content. But this means they are now at the mercy of Facebook and Twitter.
A 2016 judgment of the Hyderabad high court, dealing with a defamation case filed against Google for content hosted on its blogging platform, acknowledged the immunity afforded to intermediaries after the Shreya Singhal case but also noted that, due to the slow pace of the judicial system, “the present law under Information Technology Act is not able to provide such immediate reliefs to the person aggrieved by such defamatory or sexually explicit content or hate speeches etc.” The court recommended that the law be amended to better safeguard the public interest.
Notwithstanding the new position in law after Shreya Singhal, intermediaries like Twitter and Facebook do take down content without court orders when informed by users, but doing so is now optional and, as a result, simply isn’t a top priority. The current position of law allows these platforms enough leeway to hire fewer content moderators and invest less in systems and resources geared towards cleaning up illegal content, because if there is no legal liability, there is no incentive to police their platforms more thoroughly. The end result is simple: higher profits and a thriving culture of hate.
Restoring Section 79 and ensuring diversity of content moderators
If Indian users are feeling overwhelmed by the steady stream of online hate, to the extent that it is chilling their online speech, it may be time to restore Section 79 to its original position before it was read down by the Supreme Court in Shreya Singhal. The section as worded in the 2006 amendment bill – without the due diligence requirement inserted into the law after the parliamentary committee’s intervention – is an ideal provision of law worth considering.
The fear that these platforms may engage in over-censorship is valid. That said, these concerns can be tackled by hiring well-trained content moderators who are conversant with the politics, languages and sociology of a vast country like India. Hiring moderators from the upper-middle class, dominant-caste population that resides in urban India isn’t going to cut it this time around. Activists from across the world have been demanding that Facebook, in particular, develop region-specific moderation guidelines rather than follow a universal policy that doesn’t account for regional differences. Hiring better-trained moderators will cost money, and if Facebook and Twitter can’t afford the cost, they should be made to stand in the dock along with the purveyors of hate and rape threats who use their platforms.
Prashant Reddy T. is an assistant professor at the National Academy for Legal Studies and Research (NALSAR), Hyderabad where he teaches IP law and administrative law. He is also the co-author of Create, Copy, Disrupt: India’s Intellectual Property Dilemmas (OUP)