
WhatsApp Told India That Tracing Fake News Would Break Encryption. Is This True?

By using metadata and deploying human content moderation, WhatsApp could stop fake news, remove misinformation and even punish bad actors – all this without breaking end-to-end encryption.

WhatsApp claims that because it is an end-to-end encrypted network, there’s little it can do to stop the spread of fake news. Most recently, it has denied requests by the Indian government to build infrastructure that would make fake messages traceable, claiming that doing so would break end-to-end encryption.

The government has been considering regulations that would require social media platforms and instant messaging apps such as WhatsApp to manage fake news on their platforms themselves, including by tracing the origin of malicious messages.

There is enough evidence to show that by using metadata and deploying human content moderation, WhatsApp could slow down the spread of fake news, remove misinformation and even punish bad actors – all this without breaking end-to-end encryption. A detailed analysis of ours, published in the Columbia Journalism Review, examines this in depth. (Disclosure: one of the authors of the paper, Himanshu Gupta, worked with Tencent’s WeChat previously, which is a competitor to WhatsApp globally.)

Here, we provide a brief summary and the implications of our analysis for the Indian government-WhatsApp tussle.

In our CJR paper, we provide evidence that although WhatsApp can’t read the contents of messages, it reads and stores parts of the metadata of every message sent on its platform. Metadata here refers to details that do not pertain to the actual content of the message, such as the time it was sent and the sender’s IP address; these details are not end-to-end encrypted and remain available to WhatsApp.
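To make this split concrete, a server-side message record might look roughly like the sketch below. The field names and values are purely illustrative assumptions, not WhatsApp’s actual schema; the point is that the content is an opaque ciphertext while the surrounding metadata stays readable.

```python
# Hypothetical sketch of what a message record could look like on the server.
# The content ("ciphertext") is end-to-end encrypted and unreadable to WhatsApp;
# the remaining fields are metadata that stay visible. All names are illustrative.
message_on_server = {
    "ciphertext": b"\x8f\x1a...",          # message content: encrypted, opaque
    "sender_id": "user-123",               # visible metadata
    "recipient_id": "user-456",            # visible metadata
    "timestamp": "2019-02-12T10:15:00Z",   # time of send
    "sender_ip": "203.0.113.7",            # network-level metadata
}

# The server can read every field except the ciphertext's plaintext.
visible_fields = [k for k in message_on_server if k != "ciphertext"]
```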

Even though WhatsApp claims in its privacy policy that it deletes messages from its servers once they are delivered, it has admitted in a Delhi high court affidavit that it actually stores popular photos and videos on its servers for longer durations, to enable faster file transfers and a better user experience. Another experiment confirms that WhatsApp retains files on its servers long after they have been downloaded, or even deleted from the handset, by the original chat participants.

As detailed in WhatsApp’s encryption security paper, WhatsApp’s quick forwarding of attachments works through a combination of caching files on its servers and storing metadata for each of them.

As our paper notes:

“WhatsApp’s encryption security paper states that WhatsApp uniquely identifies each attachment with a cryptographic hash (a cryptographic text that is unique for each file) and whenever a downloaded attachment is being “forwarded,” WhatsApp checks if a file with the same cryptographic hash already exists on its server. In case the answer is yes, WhatsApp does not upload the file from the user’s phone to the server, and instead sends a copy of the file stored on its server directly to the final recipient. This implementation, while improving the user experience by improving the speed of the file transfer and saving Internet bandwidth of the end-user, also demonstrates that WhatsApp can point to specific files residing on its servers despite the end-to-end encryption. Hence, it has the capability to track a specific piece of content on its platform even if it does not know what is the actual content inside that message due to end-to-end encryption.”
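The deduplication mechanism described in the quote can be sketched in a few lines. This is not WhatsApp’s code; it is a minimal illustration, with hypothetical names, of how a server can recognise and serve a previously uploaded attachment by its cryptographic hash without ever decrypting it.

```python
import hashlib

# Minimal sketch of hash-based attachment deduplication, as described in
# WhatsApp's encryption security paper. Class and method names are hypothetical.
class AttachmentStore:
    def __init__(self):
        self._files = {}  # maps hash digest -> cached (still-encrypted) blob

    def upload_if_new(self, blob: bytes) -> str:
        """Upload the attachment only if its hash is unseen; return its digest."""
        digest = hashlib.sha256(blob).hexdigest()
        if digest not in self._files:
            self._files[digest] = blob  # first upload: cache a server-side copy
        # On a "forward", the digest already exists, so no re-upload is needed:
        # the server sends its cached copy straight to the final recipient.
        return digest

    def is_cached(self, digest: str) -> bool:
        """The server can point to a specific file without knowing its content."""
        return digest in self._files
```

Note that the server never needs the plaintext: the hash uniquely identifies the file, which is exactly why WhatsApp can track a specific piece of content while remaining unable to read it.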

With the above capabilities, WhatsApp can also potentially determine who sent a particular file to whom. In short, it can track a message’s journey on its platform (and thereby the spread of fake news) and identify the originator of that message.
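Given metadata records of who forwarded which file to whom, finding the originator reduces to finding the earliest sender of a given file digest. The sketch below assumes such records exist and uses hypothetical field names; nothing here is confirmed WhatsApp internals.

```python
# Illustrative sketch: identifying a file's originator from forwarding metadata.
# Each record is assumed to hold (sender, recipient, file digest, timestamp);
# these field names are hypothetical.
def find_originator(records, digest):
    """Return the earliest sender of the file identified by `digest`, or None."""
    matching = [r for r in records if r["digest"] == digest]
    if not matching:
        return None
    # The first person to send the file is its originator on the platform.
    return min(matching, key=lambda r: r["timestamp"])["sender"]
```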

However, just having files and their metadata on its servers isn’t enough for WhatsApp to identify fake news, as it does not actually know what the content inside those files and messages is. This is how things stand today.

To solve this, we suggest that WhatsApp set up a content moderation system, staffed by human moderators, to which users can “report” suspected messages. “Reporting” is simply “forwarding” the suspected fake message to WhatsApp’s moderation system, which converts the encrypted message into plain media or text that the moderation team can easily fact-check. If, after due fact-checking, the moderators ascertain that a particular message is indeed fake, then WhatsApp, which as detailed above can precisely identify that message’s metadata, can tag the message throughout its platform as “fake news”. Arguably, with some tweaks to its algorithm, WhatsApp could also identify the original sender of a fake news image or video, and potentially stop that content from spreading further on its network.
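The proposed flow, report, human fact-check, then platform-wide tag and block by digest, can be sketched as below. This is our reading of the proposal, not an existing WhatsApp feature, and every name is illustrative.

```python
# Hypothetical sketch of the proposed moderation flow: a user "report" forwards
# the plaintext to moderators; once a message is confirmed fake, its digest is
# blocklisted so further forwards of that cached file can be refused.
class ModerationQueue:
    def __init__(self):
        self.pending = []          # (digest, plaintext) pairs awaiting review
        self.fake_digests = set()  # digests confirmed fake by human moderators

    def report(self, digest: str, plaintext: str) -> None:
        """A user forwards a suspect message to the moderation team."""
        self.pending.append((digest, plaintext))

    def confirm_fake(self, digest: str) -> None:
        """A human moderator, after fact-checking, tags the content as fake."""
        self.fake_digests.add(digest)

    def allow_forward(self, digest: str) -> bool:
        """The server refuses to re-serve content already tagged as fake."""
        return digest not in self.fake_digests
```

Crucially, moderators only ever see messages that users explicitly reported; everything else stays end-to-end encrypted.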

Facebook, which owns WhatsApp, already performs fake news moderation on its own platform in the wake of several investigations into its role in US election meddling. It could use its existing infrastructure of 20,000 human content moderators, and build on its existing practice of using a network of accredited external fact-checkers, to manage content moderation on WhatsApp.

The tussle between moderating fake news and protecting privacy

Our paper demonstrates that WhatsApp can tag a reported message as “fake” after fact-checking it, “block” it from being forwarded further, and even potentially trace its origin. All of this can happen without breaking end-to-end encryption, which means no third party or government can, or should, ever snoop on any conversation.

In this design, even WhatsApp itself only gets a copy of messages that users have explicitly “reported”. From a privacy point of view, this is in effect similar to a user taking a screenshot of a message and sharing it outside WhatsApp.

Considering the severity of the fake news crisis, it is vital that WhatsApp not remain an uninvolved observer. To stop fake news, we recommend that WhatsApp build this infrastructure, but that the government have no access to it.

While WhatsApp has made a few technical tweaks over the past year to stop fake news, none of its efforts involve moderating actual content, and hence they will always fall short.

However, we do understand that a feature for “tracing the origin of fake news” could have problematic implications, at least hypothetically, and could open a can of worms if misused. A state dissident who has written a popular anti-establishment message on WhatsApp could potentially be traced, which would be tantamount to censorship.

We would therefore suggest that WhatsApp self-police fake news, and that the government stay out of it. In the absence of any material action from WhatsApp, the government is more likely to implement overreaching regulations, or even possibly block the platform itself, which would end up hurting both the platform and its users.

Himanshu Gupta is Head of Growth at Walnut, a financial technology startup in India owned by Capital Float. Harsh Taneja is Assistant Professor in the College of Media (Advertising) at the University of Illinois Urbana-Champaign.

Himanshu led India marketing and strategy for WeChat, Tencent’s hit messaging app in Asia, from 2013 to 2015. Harsh’s research focuses on how social, commercial and technological factors together shape digital media use.
