New Delhi: Anyone who has spent any time on Twitter knows how rampant abuse and trolling are on the platform, particularly towards women. In India, journalists Rana Ayyub, Barkha Dutt and others face a constant stream of abuse.
To get an idea of how bad the hate can get, consider that JNU student and activist Shehla Rashid deactivated her account in November because the “toxicity and negativity” had started affecting her mental health.
“After eight years of using Twitter, I’m deactivating my account. I’ve tried engaging, I’ve tried blocking/reporting but the amount of toxicity and negativity that exists out here is insane. I can’t deal with such hate, lies and manipulation. Thanks to all of you who’ve been supportive,” Rashid had said.
Rashid said that there is a “chalta hai” attitude in India and that abusive Twitter accounts are not suspended proactively.
Amnesty International has long called out Twitter for its lackadaisical attitude towards such abuse on the platform. Moreover, Amnesty considers abuse towards women on Twitter a human rights issue.
It has repeatedly called on Twitter to release “meaningful information about reports of violence and abuse against women, as well as other groups, on the platform, and how they respond to it”.
Also read: How Twitter Plans to Fight Against ‘Trolls’
In May 2018, Twitter CEO Jack Dorsey said: “We want to take the burden of the work off the people receiving the abuse or the harassment”. Past efforts to fight abuse “felt like Whac-A-Mole,” he added.
In a poll commissioned by Amnesty in 2017 and carried out in eight countries by Ipsos MORI, 23% of women surveyed across all countries said they had experienced online abuse or harassment.
But after making its protests vocal, joining a chorus of thousands of women who face online abuse, Amnesty came to realise that it would have to take it upon itself to try and make a change by making the data available. This week, it launched an interactive website detailing the results of a crowdsourced study into harassment against women on Twitter.
“We have built the world’s largest crowdsourced data set about online abuse against women,” Milena Marin, senior adviser for tactical research at Amnesty International, said in a statement. “We have the data to back up what women have long been telling us – that Twitter is a place where racism, misogyny and homophobia are allowed to flourish basically unchecked.”
The project was undertaken in partnership with Element AI, an artificial intelligence company.
In all, the study “surveyed millions of tweets received by 778 journalists and politicians from the UK and US throughout 2017 representing a variety of political views, and media spanning the ideological spectrum” – the journalists came from publications including the Daily Mail, Gal Dem, the Guardian, Pink News and the Sun in the UK, and Breitbart and the New York Times in the US.
The study found:
1. 7.1% of tweets sent to the women in the study were “problematic” or “abusive”. This amounts to 1.1 million tweets mentioning 778 women across the year, or one every 30 seconds.
2. Women of colour (black, Asian, Latinx and mixed-race women) were 34% more likely to be mentioned in abusive or problematic tweets than white women.
3. Black women were disproportionately targeted, being 84% more likely than white women to be mentioned in abusive or problematic tweets.
4. Online abuse targets women from across the political spectrum – politicians and journalists faced similar levels of online abuse, and liberals and conservatives alike, as well as left- and right-leaning media organisations, were targeted.
In March 2018, Amnesty had released a report titled ‘Toxic Twitter: Violence and abuse against women online’, which detailed the human rights abuses women face on Twitter.
The report also put forward solutions and steps that could be taken by Twitter to help fight the problem.
On the interactive website, Amnesty says:
The report found that as a company, Twitter is failing in its responsibility to respect women’s rights online by failing to adequately investigate and respond to reports of violence and abuse in a transparent manner, which leads many women to silence or censor themselves on the platform.
Vijaya Gadde, the legal, policy, trust and safety global lead at Twitter, said in a response to Amnesty that the company desires a healthy and transparent discourse.
“Twitter’s health is measured by how we help encourage more healthy debate, conversations, and critical thinking,” Gadde said. “Conversely, abuse, malicious automation, and manipulation detract from the health of Twitter. We are committed to holding ourselves publicly accountable towards progress in this regard.”
The Troll Patrol itself, which included more than 6,500 digital volunteers – aged between 18 and 70 – from around the world, analysed “288,000 unique tweets to create a labelled dataset of abusive or problematic content”. The identities of the people who sent those tweets were not revealed; the volunteers simply had to answer questions about whether the tweets were abusive or problematic in any way, and also “whether they revealed misogynistic, homophobic or racist abuse, or other types of violence”.

A screenshot of the questions asked of the Troll Patrol. Credit: Amnesty
Each tweet was analysed by multiple people. The volunteers were given definitions and examples of abusive and problematic content, as well as an online forum where they could freely discuss the tweets.
“We found that, although abuse is targeted at women across the political spectrum, women of colour were much more likely to be impacted and black women are disproportionately targeted. Twitter’s failure to crack down on this problem means it is contributing to the silencing of already marginalised voices,” Marin said.
Also read: Delhi Journalists Body Condemns Relentless Trolling of Rana Ayyub
“Amnesty International and Element AI’s experience using machine learning to detect online abuse against women highlights the risks of leaving it to algorithms to determine what constitutes abuse,” the report concludes.
On Tuesday, Amnesty and Element AI unveiled a machine-learning tool that aims to automatically identify abusive tweets. The researchers have said that it works well but is still far from fully accurate.
“It still achieves about a 50% accuracy level when compared to the judgement of our experts,” the report states, “meaning it identifies two in every 14 tweets as abusive or problematic, whereas our experts identified one in every 14 tweets as abusive or problematic”.