Tech

Cybersecurity, a Horse With No Name

When asked about the origins of America’s hit single ‘A Horse with No Name’ (1971), lyricist Dewey Bunnell said he wanted to capture the spirit of the hot, all-too-familiar dry desert around Arizona and California that he’d driven through as a kid. The horse was a vehicle that would carry him through the desert and away from the chaos of life.

Cybersecurity sounds like it could be that horse – IT infrastructure that can carry us safely through the desert of cyber-weaponry – except we’ve probably only just seen a foal. When software is weaponised and used in cyber-attacks, we’re confronted with a threat we haven’t fully understood and are in no real position to understand, let alone effectively defend against. At the same time, even in this inchoate form, cyber-weapons pose threats that we had better defend against or risk the shutdown of critical services. The only clear way forward seems to be survival on an ad hoc basis. Not surprisingly, the key to understanding cybersecurity’s various challenges for its innumerable stakeholders lies in knowing what a cyber-weapon, a peril of the desert, is.

We don’t know.

At least, not before one has been used. Since the early years of this decade, stakeholders in the debate on cybersecurity – from lawmakers and security analysts to academic researchers and the public – have been able to arrive at the contours of a consensus on how governments and civil society groups can engage and devise mechanisms to protect state assets from cyber-weapon assaults. But progress has been stalled by, among other things, the absence of a clear definition of what makes a cyber-weapon. The irony is that the definitional gap works like a zero-day hack turned on itself: the flaw in our defences is precisely that we don’t know what we’re defending against.

At the CyFy 2015 conference, organised by the Observer Research Foundation in New Delhi from October 14 to 16, the invited cyber-everything community debated its many preoccupations. It was apparent from the proceedings that the global response to the challenges posed by cybersecurity was in the form of same-old processes and protocols served up clothed in the improved efficiency that new technologies brought with them, even as the ‘way of doing business’ remained largely unchanged. Part of the reason for this slow and cautious pace of adoption is the incredible susceptibility of software to being turned into weapons of surveillance and intrusion.

Speaking at the conference, Christopher Painter, the US Department of State Coordinator for Cyber Issues, made a clever joke about how two Wi-Fi-enabled refrigerators being used to hack businesses brought a whole new meaning to the phrase “freezing your assets”. The joke is funny because it’s true. Cyber-weapons, in the form of weaponised software, are so dangerous because they’re very malleable: the costs of repurposing a useful computer program into a tool of mass virtual-destruction are orders of magnitude lower than those of turning reactor-grade uranium into weapons-grade uranium. In other words, software is easier to militarise and cyber-weapons are easier to create – and it’s because of this pliancy that defending against them involves as many stakeholders as it does.

The extant defence against such weapons has been either pronouncedly reactive, as with the Sony Pictures hack in November 2014, or quietly overgeneralised, as with China’s Great Firewall. This is mostly because we’re unable to define what makes such a weapon and then arrive at sufficient generalisations – which matter because they allow lawmakers as well as engineers to set precedents and work from them. For example, Article II of the Chemical Weapons Convention opens thus:

1. “Chemical Weapons” means the following, together or separately:

(a) Toxic chemicals and their precursors, except where intended for purposes not prohibited under this Convention, as long as the types and quantities are consistent with such purposes;

(b) Munitions and devices, specifically designed to cause death or other harm through the toxic properties of those toxic chemicals specified in subparagraph (a), which would be released as a result of the employment of such munitions and devices;

(c) Any equipment specifically designed for use directly in connection with the employment of munitions and devices specified in subparagraph (b).

This definition relies on the immutability of a chemical substance, or of an instrument designed to work with chemical substances. No equivalent exists for software, which can switch back and forth between harmless and harmful forms with relative ease. Chemical immutability allows inspectors to treat all substances fitting the CWC definition in a certain way, and to work together to identify and destroy chemical weapons. This isn’t possible with cyber-weapons because they tend to be dual-use, accidentally or by design – a case in point being the controversy surrounding the Wassenaar Arrangement. It’s not always possible to say whether attribute X (like a zero-day exploit) that weaponises program P (like an operating system) will also weaponise program Q, or whether attribute Y that weaponises Q will weaponise P – an infamous example being the Stuxnet worm, which selectively targeted Siemens SCADA systems to sabotage centrifuges at Iran’s uranium-enrichment facilities.
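To make the dual-use point concrete, consider a minimal, purely illustrative sketch in Python (the encrypt_file helper and the file name are hypothetical; the ‘cryptography’ library it uses is real). The same few lines of encryption logic can anchor a legitimate backup utility or the core of a ransomware payload – nothing in the code itself tells you which.

from pathlib import Path
from cryptography.fernet import Fernet  # third-party library: pip install cryptography

def encrypt_file(path: Path, key: bytes) -> None:
    """Encrypt a file in place with a symmetric key."""
    path.write_bytes(Fernet(key).encrypt(path.read_bytes()))

key = Fernet.generate_key()
# Benign use: an administrator encrypting 'backup.tar' (assumed to exist)
# before shipping it off-site.
encrypt_file(Path("backup.tar"), key)
# Hostile use: the very same call, pointed at a victim's documents with a
# key only the attacker holds, is the core of ransomware. Intent and
# context, not the code, distinguish the tool from the weapon.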

The inability to ensure that a piece of software deployed in a particular sector in a particular country is not weaponisable also highlights our dependence on conventional approaches to securing offensive material. To continue the analogy, the Organisation for the Prohibition of Chemical Weapons ensures that stockpiles are located, documented and destroyed before they can be used. But because we can’t ensure that all pieces of software have been secured before being deployed, we could pivot and look at downstream protection instead. In other words, would it be more useful in the short run to have a ‘code of conduct’ that dictates how the various affected parties ought to secure against, recuperate from and/or retaliate against cyber-attacks?

Such a code could be particularly useful for the private sector. Angela McKay, the Director of Cybersecurity Policy and Strategy at Microsoft, called the private sector the cybersecurity battlefield because it’s “often the target of attacks because they can stay below the level of conflict, the weapons that people are shooting at each other through, and is often what’s left to clean up” the mess. These expectations exist in India as well: aside from the alleged Pakistani and Chinese incursions into Indian cyberspace, the government also foists responsibilities on telecom service providers that can’t always be met. Vikram Tiwathia, the Deputy Director General of the Cellular Operators Association of India, expressed hope that government and private-sector interests would be better aligned over the infrastructural aspects of cybersecurity, and wished for better support.

So a ‘code of conduct’ for this sector would be a means to mitigate losses and guide the downstream response – rather than an attempt to tackle the problem upstream – and to build capabilities up from there. It could take the form of preparing companies to cope better with attacks: installing emergency back-ups, drawing up best practices for IT administrators to keep systems safe, launching probes to track down the probable sources and causes of an attack, and dictating how companies and the government can work together going forward.

Simultaneously, governments and civil society groups could focus on building incentives for collective action and the majoritarian heft needed for compliance-monitoring, especially because a code of conduct only warrants passive compliance and does nothing to preserve trust in the face of threats like proxy-territoriality. For example, if an independently acting group of hackers in China strikes a bank in the US, can the US blame the Chinese state for the attack? The onus is on the source nation to tame its domestic players and keep them from acting out – yet what can a government do, short of implementing absolute digital sovereignty and prompting a fragmentation of the World Wide Web? And if the US can’t hold China responsible, does that mean the American government can act unilaterally against the hackers by launching a retaliatory attack into Chinese cyberspace?

At the heart of the new cyber-weapons problem is the forced confrontation of our historically most effective forms of defence with their own limitations, some of which we didn’t know existed. These weapons are reshaping ideas of weapons-manufacturing, dual-use technology, culpability, territoriality, even policymaking, and proving it’s futile to treat them the way we treat conventional weapons (even WMDs). That is deeply troubling in many ways – one being that we might never come up with an integrated shield against this threat, even as it remains the necessary ‘dark side’ of the new digital world we’re exploring, with its millions of economic, social and political opportunities.

The Wire was an official media partner of ORF for the conference.