There is still a degree of uncertainty surrounding the outcome of the US elections, even as most signs point towards Democratic nominee Joe Biden becoming the next American president.
But even as the world continues to closely observe what unfolds over the next few days and weeks, the months leading up to election day hold some lessons that India’s information ecosystem should learn from – and explore further.
Even with substantial focus on social media and user-generated content platforms in the disinformation conversation, the mass media and its personalities still have a significant role to play.
In September, when The New York Times published an editorial about the platforms’ preparedness to deal with a scenario in which Donald Trump prematurely declared himself the winner, tech analyst Benedict Evans pointed out that it did not include any reference to how the mass media should report on the same statements.
Similarly, amid calls for Twitter to restrict its ‘trending’ section, others noted the inconsistency of wanting to limit Twitter’s features while mainstream media outlets continued to include ‘lies’ in their headlines (presumably in the form of a direct quote from a candidate).
In the lead-up to the elections, studies by the Berkman Klein Center and the Election Integrity Partnership contextualised the role of the mass-media model in amplifying disinformation narratives. Together, these studies highlighted how the political leadership benefited from the mass media propagating its disinformation simply by reporting on “newsworthy” statements, amplifying false claims by repeating them verbatim, and extending their lifecycle by giving them space in the public conversation.
On November 6 though, for the first time in four years, multiple news networks abruptly cut off the live feed of a Donald Trump press conference as he continued to make allegations of fraudulent voting.
In India, it is common to live-telecast events like election rallies and speeches, and to carry “direct quotes” as headlines, where politicians make claims ranging from minor embellishments of the truth to outright dangerous speech targeting minorities. However, there appears to be no established practice of either fact-checking such events or contextualising the statements made, whether in real time or in the aftermath.
This repetition, sometimes accompanied by co-opting the same terms, even using them as hashtags, can lead to the normalisation of hateful rhetoric and dog-whistles. Examples of this include the tendency to promote terms such as ‘Love Jihad’ and ‘UPSC Jihad’ – practices that are yet to be confirmed by strong empirical data – while reporting any related stories.
Setting the agenda
In October, when the New York Post ran the controversial Hunter Biden story amid doubts within its own newsroom, Fox News devoted over 400 segments and 25 hours to it in just nine days. A significant amount of coverage also focused on how Facebook and Twitter responded to it, and the fact that they restricted it allowed Trump supporters to play the aggrieved party, claiming the story was being censored.
Ultimately, it did dominate the news cycle during a crucial period leading up to the elections.
An egregious example of this ‘agenda-setting’ was witnessed recently in India, when a study indicated that some news anchors allocated over 65% of their debate time to the Sushant Singh Rajput case at the expense of discussions on the state of COVID-19, the economy and other issues.
Always expect disinformation
Stanford Internet Observatory’s Renée DiResta believes that we need “to adapt to a world of widespread disinformation”. This is true for the mass media most of all. Research from various institutions (Reuters, University of Michigan, Shorenstein Center) has indicated that a significant portion of disinformation is repurposed, out-of-context content rather than synthetic or manipulated content.
Therefore, the mass media needs to account for the possibility that even a small portion of its coverage, taken out of context, has the potential to sow confusion.
As an extreme example of this, consider how CNN was exploring “paths to the presidency” for both candidates. The top-right of the graphic contains the text “xyz-wins”. Given months of talk about premature claims of victory in the lead-up to the election, such phrasing is probably best avoided. TV news production teams need to work under the assumption that malicious actors looking for “cheapfakes” will exploit the smallest opportunity to misuse their coverage for disinformation.
User content and platforms
For many of Silicon Valley’s tech giants, and indeed all user-generated content platforms, the 2020 elections were a chance to exorcise the spectre of the 2016 US election. For now, let’s set aside whether this was out of genuine intent, perception management, response to pressure or a combination of all three.
The most notable shift was the willingness to enforce their own policies. Given the increasing interventionism in the COVID-19 era, it was always going to be untenable for them to step back. In the months since, they have taken action against conspiratorial groups such as the ‘Boogaloo Bois’ and QAnon. They have clarified and expanded policies around election integrity, tom-tommed their preparedness and, in the case of Facebook and Twitter, even flagged posts by Donald Trump on multiple occasions.
And while some observers noted that the day went by relatively uneventfully for social media platforms, it should be pointed out that many academics predicted that the peak of activity would come in the days following the election – and we simply don’t know enough yet.
On the eve of the election, Twitter took action against a tweet by Donald Trump in approximately 40-45 minutes. The Election Integrity Partnership determined that by then “it had already been retweeted 55K+ times and favorited 126K+ times.” If we assume that all this interaction came from his 87 million followers (which is a BIG if), that is the effect that, at most, only 0.2% of his follower base had in under an hour.
If this was the effect despite relatively quick action against an account considered a “superspreader” of disinformation – one that would presumably be subject to increased scrutiny – it is difficult to expect platforms to be able to act quickly against a broad range of dis-influencers.
On election day, Twitter took five minutes to flag another Donald Trump tweet. With the current posting workflow, it is hard to imagine a quicker turnaround than that. And yet, this mode of operation is not scalable. As members of the Election Integrity Partnership observed, users quickly resorted to sharing screenshots or copy-pasting the content to spread the message – in effect, sharding the message and creating a long tail of posts that platforms need to act against.
Research from organisations like Avaaz and the Tow Center exposes shortcomings in Facebook’s current mechanisms to identify or label posts containing false information that has already been debunked, even in English. Further data from the EIP indicates that these mechanisms are even less effective in non-English languages. While there are no platform-wise breakdowns for Indian-language content, information shared by Google in late 2019 suggested that most new users consumed non-English content. This makes the effectiveness of platform mechanisms across different languages an important consideration for India.
Whether effective or not, as platforms have adopted a more interventionist approach, disinformation has shifted. In addition to the predictable moves to closed-messaging apps like Telegram, or more “friendly” platforms such as Parler and Gab, there was also a shift to older modes of communication where there is little to no moderation, such as text messages, robocalls (automated calls) and email. Granted, some of these methods do not have the reach that platform affordances allow. Nevertheless, they possess a mobilising or intimidating quality that has the potential to be misused.
Twitter, Facebook and Instagram planned to employ “pre-bunks” in an attempt to contextualise potential disinformation before it took root. Research suggests that such a mechanism works better with logic-based rather than fact-based corrections.
As Joan Donovan asserts, disinformation is local. This is relevant because localised, specific false claims can effectively side-step pre-bunks, since verifying them requires additional local validation.
And while local and specific claims can be fact-checked, doing so requires more time and effort, during which the claims have the potential to spread widely, especially if superspreaders get involved. It is therefore important to have local authoritative information sources (people or resources) identified in advance and to build localised fact-checking capabilities.
A deeper understanding and more pressure is required
What we should be wary of is picking up research or lessons from a western context and force-fitting them to the Indian context. In a recent editorial, Irene Pasquetto suggested that we try to understand those spaces where disinformation has already had effects, rather than chasing a binary of whether it is harmful as a whole or not. For this, the information ecosystem in India needs more research and study in the local context, in multiple languages and across different modes (text, audio, video). That requires more sponsorship of such research, better systemic incentives for researchers and greater intent and collaboration from platforms. This is not to say that none of this exists – it is that we need more of it.
Another notable aspect has been the emergence of coalitions such as the Election Integrity Partnership and The Real Facebook Oversight Board. The two groups took different approaches to working with platforms (collaborative vs adversarial). Still, both were ultimately able to exert a degree of pressure on platforms – either directly via their research outputs or through the coverage their work and statements received. While it remains to be seen whether these coalitions can evolve from tactical to strategic goals, there is merit in exploring such coalitions locally, at least tactically to begin with.
Prateek Waghre is a research analyst at The Takshashila Institution. He writes MisDisMal-Information, a newsletter on the information ecosystem in India.