Beyond Twitter and Russia: How Do We Make Social Media Incorporated Work For Democracy?

Instead of attempting to regulate these companies through external mechanisms, we need to ride the wave of Big Tech awareness to demand transparency and accountability.

World history is a saga of the manipulation of man-made innovation by entrenched structures of power. From the printing press to the telegraph to cable television, ‘technologies of liberation’ have disrupted the status quo only to rapidly become weapons that aid the next cycle of societal oppression.

Social media platforms and the oligarchy of Big Tech now find themselves at this critical juncture, as the Economist’s recent cover story asked: ‘Do social media threaten democracy?’.

Big Tech and the 2016 US elections

Social media’s downward spiral, while in the offing for some time, became most apparent in the recent spurt in public awareness surrounding the role these platforms played in the alleged Russian interference in the 2016 US presidential elections. Questioning executives from the ‘Big Tech Triumvirate’ of Twitter, Google and Facebook during the hearing conducted by the Senate Select Committee on Intelligence, Chairman Richard Burr concluded that Russia used bots and ‘fake news’ to conduct an information operation intended to divide society on issues of race, immigration and gun control. Vice-chairman Mark Warner warned that 80,000 Russian-backed Facebook posts had reached 126 million Americans during the election. He further observed that 15% of all Twitter accounts are fake or automated, and Twitter itself confessed that up to 1.4 million tweets were generated by Russian-linked accounts in the build-up to the November election. All three companies being interrogated broadly accepted the complicity of social media platforms in Russia’s misinformation strategy and pledged to devise regulatory safeguards that could prevent similar abuse of these platforms in the future.

In the build-up to these hearings, Twitter made a public statement banning advertising from Russia Today (RT) and Sputnik, after revelations that these Russian news agencies had sought to interfere in the US elections on behalf of the Russian government. RT’s editor-in-chief, Margarita Simonyan, hit back by stating that Twitter had in fact approached the Russian news agency for investment in advertising around the US elections. She further stated that Twitter’s actions amounted to an assault on free speech and an open admission of the fact that Twitter is ‘under the control of the US security services’.

Simonyan’s comments raise an interesting question about the obligation of social media platforms to remain neutral on geopolitical issues. The argument that Big Tech companies most commonly fall back on is that their self-imposed business and technological limitations prevent their services from interfering at an individual level.

The back-and-forth between Louisiana’s junior senator John Kennedy and Facebook’s general counsel Colin Stretch during the recent Senate hearings is particularly illuminating in this regard. Kennedy and Stretch engaged in a brief debate over whether Facebook has the ability to look up data on any one individual.

As technology writer Ben Thompson notes, “what Kennedy surely realised – and what Stretch, apparently, did not – is that Facebook had already effectively answered Kennedy’s question: the very act of investigating the accounts used by Russian intelligence entailed doing the sort of sleuthing that Kennedy wanted Stretch to say was possible. Facebook dived deep into an account by choice, came to understand everything about it, and then shut it down and delivered the results to Congress. It follows that Facebook could – not would, but could – do that to Senator Graham or anyone else”.

Indeed, the conception of social media platforms as genuinely neutral forums has always been a myth. These platforms have ingrained in their very design the possibility of exploitation – either by the corporations that own them or by entities that have the power to manipulate them. The first step to establishing a coherent policy framework that regulates the information explosion these platforms enable, and harnesses their true potential, is to recognise this crucial fact.

Architecture of algorithmic control

In their iconic 1988 book Manufacturing Consent, Noam Chomsky and Edward Herman argue that the mass communication media function as effective political and ideological institutions through a dangerous concoction of reliance on market forces, internalised assumptions and self-censorship, all driven by the ‘large bureaucracies of the powerful’.

These actors – both the state and private players – gain access to the news through ownership, advertising and political power. The ‘manufacturing of consent’ is therefore the understanding that techniques of control, driven by existing structural inequalities in mass communication, enable the indoctrination of the masses by imposing illusions of the truth. Social media platforms function on the same basic principle. The modern social media age is driven by algorithms designed to serve those who can either pay for them or have the power to control them.

Indeed, attempts to regulate social media platforms externally fail to recognise that the misinformation on these forums is the consequence of structural problems in the architecture of the platforms themselves. The key problem lies in the algorithms that determine what an individual views – algorithms designed to bolster revenue over accuracy. We must remember that these giants are corporations, not altruistic missions.

As stated by Tom Wheeler, a visiting fellow at the Brookings Institution, the objective of these algorithms is to maximise user attention, which, in turn, expands the site’s chances of attracting profitable advertising. Thus, algorithms are designed to surface content the user likes. Bringing together users who like similar content is hardly the fabric of a truly pluralist democracy: the algorithm forces the individual into an artificially designed echo chamber, where their ideological development is a product of entrenched confirmation bias, not genuine engagement.

The evidence bears this out. The data analytics firm Gnip (which has since been bought by Twitter) found that of the 11.5 million tweets generated during the 2011-12 Israeli-Palestinian clashes, only 10% fostered any form of dialogue between opposing sides. Similar findings have been made regarding the fake-news-fuelled polarised discourse during the Basirhat riots in West Bengal earlier this year. The Wall Street Journal launched a project last year known as ‘Blue Feed, Red Feed’, which analysed the streams of news sources accessed by liberal and conservative users. Its conclusions point to two entirely alternate realities. The divergence is explained both by exposure to questionable content and by the algorithmic design that preserves an intellectual bubble.
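
To make the mechanism concrete, the following is a minimal sketch, in Python, of the kind of engagement-first ranking logic described above. It is an illustration under stated assumptions, not any platform’s actual code: every name, feature and weight here is hypothetical. Posts are scored by how closely they match what a user has already engaged with, so the feed converges on more of the same.

    # Hypothetical illustration of engagement-first feed ranking.
    # Not any platform's real code: the features and weights are invented.

    from dataclasses import dataclass

    @dataclass
    class Post:
        post_id: str
        topic: str      # e.g. "immigration", "gun control"
        stance: float   # -1.0 (one pole of the debate) to +1.0 (the other)

    @dataclass
    class UserProfile:
        liked_topics: dict   # topic -> number of past likes
        mean_stance: float   # average stance of posts the user has liked

    def engagement_score(user: UserProfile, post: Post) -> float:
        """Predict engagement: reward topics the user already likes and
        stances close to the user's own. Accuracy plays no part in the score."""
        topic_affinity = user.liked_topics.get(post.topic, 0)
        stance_proximity = 1.0 - abs(user.mean_stance - post.stance) / 2.0
        return topic_affinity * stance_proximity

    def rank_feed(user: UserProfile, candidates: list) -> list:
        """The feedback loop: the more one-sided the user's history,
        the more one-sided the feed becomes."""
        return sorted(candidates, key=lambda p: engagement_score(user, p),
                      reverse=True)

Note that nothing in the scoring function asks whether a post is true, or whether the user would benefit from seeing a countervailing view: a post from the opposing side scores near zero and is buried, which is precisely the echo-chamber dynamic described above.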

The latest export of the neo-liberal ethos

Much like the social media platforms, the search engine industry plays a crucial role in the preservation of transnational capitalism. Google no longer operates simply as an information retriever but takes on a more active role as a knowledge operator. Through similar algorithmic processes across its multiple products, Google manipulates the nature of the information we can access, filtering content on the basis of an individual’s tastes and preferences and rigging searches to prioritise its own products, or those of the highest bidder, over others.

Rather than realising the democratising vision of the Californian counterculture in which it has its origins, Big Tech, through its combined espousal of personal liberty and market deregulation, quickly veered away from any notion of neutrality. By convincing the state to privatise and deregulate the internet, which is intrinsically a public good, Big Tech merely became another episode in the global neo-liberal wildfire. Much like the architects of the neo-liberal project before them, the pioneers of Big Tech touted openness, connectivity and deregulation as their intrinsic assets, and got rich preaching these values without considering the social costs of their plunder.

Modes of regulation

This brings us to the question of the extent to which the state should intervene. Thus far, state intervention in the social media space has generally served the ends of the government in power, through the spread of fake news and vitriol and the suppression of genuine dissent. Turkish censorship of Facebook pages operated by the Kurdish minority, the clampdown on social media platforms in the aftermath of the Arab Spring and the censorship legitimised by the now-repealed draconian Section 66A of the Information Technology Act in India are cases in point. The state has a vested interest in using these platforms to win elections in democracies or to entrench authoritarianism in autocracies. Excessive state regulation of social media hampers the realisation of its true potential and becomes the modus operandi of oppressive censorship.

Unlike states, however, Big Tech runs on profits, which in turn depend, to a certain degree, on consumer acceptability. The recent spate of public awareness regarding the nefarious ways of the social network is therefore probably the greatest hope for its democratisation. This, combined with corporate and competition law developments in the European Union that prevent the further monopolisation of digital spaces by these corporate giants, is a positive development, as it compels them to compete in a slightly fairer marketplace and thus to take the consumer into cognisance when framing business strategy. By taking a pro-consumer stance against surveillance in the privacy landscape, and by indicating its willingness to prevent misuse of its platforms in the aftermath of the revelations of Russian election interference, Big Tech has postured that it could self-regulate its structures, if that would aid its profits.

In this context, the ambitious German gambit to regulate the spread of fake news and hate speech online possibly has the right intentions. It places an onus on social media platforms to remove, within 24 hours of receiving a user complaint, content that is ‘manifestly unlawful’, although the term ‘manifestly’ has not been defined in the law. Under the law, persistent failure to delete such content, or to respond to the complainant on how the complaint was handled, will result in massive fines. As the draft of the bill made its way through the German parliament, it came to include within its ambit content that posed a “great danger for the peaceful coexistence of a free, open and democratic society” and highlighted “experiences in the US election campaign,” including ‘fake news’, the dissemination of which may not necessarily be unlawful. Judicial review is mentioned only where the government imposes fines on the platforms themselves. The problem with this mechanism, as free speech activists and UN Special Rapporteur David Kaye have identified, is that for the platforms, the costs of under-censorship (massive fines) greatly outweigh the costs of over-censorship. Thus, rather than providing due process to the individual whose legal content gets taken down, platforms now have a financial incentive to take down posts without any due process at all.


Indian law on the matter is even more problematic, as it places excessive reliance on state institutions. Section 66A of the Information Technology Act, which made it a criminal offence to share content that has a ‘grossly offensive or menacing character’ or ‘is known to be untrue’, was struck down by the Supreme Court as unconstitutional. However, Section 69A, which bestows upon law enforcement agencies the “power to issue directions for blocking for public access of any information through any computer resource”, still exists. The court has read down the provision, stating that an intermediary would require a court or government order to pull down content. Given the government-corporate nexus that entrenches manufactured consent, this is hardly a fair or adequate mechanism.

Public Interest Algorithms

Instead of attempting to regulate these platforms externally, it is important that we ride this wave of awareness to demand transparency and accountability in the way they function, and to compel them to deliver socially optimal results.

Wael Ghonim and Jack Rashbass have argued that this must come about in three ways. First, platforms must publish all data related to public posts, so that the consumer is made aware of their reach, both geographic and demographic. Platforms must also disclose how certain stories started ‘trending’ and achieved ‘viral’ status.

The second prong, which most platforms agreed to at the hearings, is the publication of all data related to advertising, so that the consumer knows the bias involved.

Finally, all social media platforms have policies through which they regulate content, although the enforcement of this framework may be ad hoc. Facebook’s post entitled ‘Hard Questions: How We Counter Terrorism’ is a step in the right direction and broadly identifies its content regulation policy, although it does not elaborate on the parameters of censorship – clearly outlining why a specific post may be taken down – or on the procedural aspects, the nature of delegation and the supervisory mechanisms involved.

Even if these policies are implemented, it is not possible for humans to keep pace with the volume of social media posts or the speed with which existing algorithms make content distribution decisions. Therefore, Ghonim and Rashbass suggest that all platforms using algorithms for content distribution must be compelled to develop a standard ‘public interest algorithm’, or Application Programming Interface (API), which captures the relevant inputs and outputs of the algorithm being used by the platform. An API can do exactly what the content distribution algorithm does – monitor inputs and outputs at computer speed. By making this data public, for instance through an application linked to the platform itself, such an API would enable the public to access the information and thereby break down the illusions of manufactured consent.
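
As a rough sketch of how this idea could be realised – assuming, hypothetically, that a platform’s ranking step can be wrapped by an audit layer, and with every name below invented for illustration – a ‘public interest API’ might record each distribution decision’s inputs and outputs and publish the reach data that Ghonim and Rashbass call for:

    # Hypothetical sketch of a 'public interest API' audit layer.
    # Assumes the platform's ranking step can be wrapped; all names invented.

    import json
    import time

    AUDIT_LOG = "distribution_audit.jsonl"   # append-only public record

    def audited(rank_fn):
        """Wrap a content-distribution function so that every decision's
        inputs and outputs are recorded at the same speed the algorithm runs."""
        def wrapper(user, candidates):
            served = rank_fn(user, candidates)
            record = {
                "timestamp": time.time(),
                "user_region": user.get("region"),       # reach: geographic
                "user_age_band": user.get("age_band"),   # reach: demographic
                "candidate_ids": [p["id"] for p in candidates],
                "served_ids": [p["id"] for p in served[:10]],  # what was shown
            }
            with open(AUDIT_LOG, "a") as f:
                f.write(json.dumps(record) + "\n")
            return served
        return wrapper

    # Usage: distribution = audited(platform_rank_fn). The platform serves
    # feeds exactly as before, while the public log accumulates the data
    # needed to reconstruct how a story 'trended' and whom it reached.

The point of the design is that the audit runs at machine speed alongside the algorithm itself, rather than relying on human moderators after the fact, so the public record keeps pace with the feed.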

Harnessing platform potential

Despite the pessimism, social media has the capacity to collectivise, disrupt and aid in ways never imagined before. These effects have already been felt in various parts of the globe. Its role in disaster relief work, in social movements and as an avenue for genuine engagement must not, in principle at least, be underestimated. By recognising the inherent flaws in the structure of Big Tech, we lay the groundwork for creating regulatory mechanisms that make these platforms work towards a fairer global system. These mechanisms are predicated on the power and autonomy of the individual to reclaim digital spaces in the manner contemplated by the counterculture of the San Francisco Bay.

Social media platforms and Big Tech are here to stay. Through public engagement, their degeneration into mere tools of modern empire can be curbed. We have the opportunity now to reclaim and re-engineer one of mankind’s greatest inventions. Let’s not waste it.

Arindrajit Basu is pursuing a Masters in Public International Law at the University of Cambridge.