The US Department of State agreed on March 1 to renegotiate the terms of an international agreement that were found to severely impinge on software development, signalling a victory for cybersecurity researchers.
In 1996, 41 countries signed the Wassenaar Arrangement to regulate the trade of goods and technologies that could be put to both civilian and military use – the so-called ‘dual-use technologies’ – such as radar systems and fissile materials. In December 2013, the arrangement was amended to include ‘intrusion software’: digital tools that could be used for surveillance, in violation of a growing international consensus that privacy is a fundamental human right.
As a result, signatory states were forbidden from exporting digital tools that could be used for spying to non-signatory states. The export-control regime was well-intentioned: it limited the ability of organisations based in member states to support oppressive regimes worldwide. But researchers quickly realised that their ability to respond to coordinated security threats had also been compromised.
The core issue was encapsulated by Sergey Bratus, a computer scientist at Dartmouth College, New Hampshire, as an inability of the Wassenaar Arrangement to “describe a technical capability in an intent-neutral way”. In other words, the document didn’t define precisely what would or wouldn’t violate the export controls; the resulting vagueness often swept up legitimate and illegitimate activities together. Three illustrative examples follow.
First: Neither India nor Pakistan has access to advanced surveillance tools made in a country that’s part of the Wassenaar Arrangement, even if they can demonstrate a rightful need (such as for use against terrorists and insurgents). Yet in July 2015, it was revealed that the Indian and Pakistani governments had invited offers from a shady Italian company with known links to the governments of Saudi Arabia and Ethiopia; Italy is one of the signatories of the arrangement.
Second: Cyber-policy expert Katie Moussouris has argued against the arrangement treating exemptions for cybersecurity tools in the same spirit as exemptions for physical goods. An oft-cited example is the Heartbleed bug. In early 2014, it was found to affect hundreds of thousands of servers worldwide; its ‘effect’ was to force a vulnerable server to divulge the contents of encrypted messages it was relaying. A fix finally emerged in the form of a software patch that closed the vulnerability. However, had the patch been developed in a Wassenaar-compliant state such as the US, distributing it to systems around the world would’ve required repeated exemptions from the Department of State – delaying what should ideally have been a quick and effective response.
Third: Zero-day exploits are attacks that take advantage of previously unknown flaws in a program; they get their name from the fact that developers have had zero days to fix the flaw by the time it is exploited. As it happens, academic researchers as well as professional developers often use the same techniques to build zero-day exploits as they do to investigate how programs are threatened by vulnerabilities in the wild. This dual-use nature means a common tool of vulnerability research and innovation is subjected to the terms of the Wassenaar Arrangement, inhibiting its free use.
In the US, the Department of State had been engaged in a months-long standoff with the Department of Commerce and the Department of Homeland Security over how these complaints should be resolved – with its position being that it wouldn’t submit to renegotiating the text of the arrangement itself and that disputes would have to be resolved domestically. For cybersecurity researchers, resolving the standoff was the final hurdle before they could expect lawmakers to push for redrafting the amendments at a meeting of the signatories slated for December 2016.
The origins of the State Department’s relenting could be traced to May 2015, when the Commerce Department’s Bureau of Industry and Security invited comments on its draft rules for implementing the terms of the amendment. It received strong pushback from cybersecurity experts and Silicon Valley, many of whom were concerned that the rules were more vague than those being introduced in Europe. The Electronic Frontier Foundation even cautioned that they would violate the First Amendment.
Then, following multiple rounds of discussions and much lobbying, matters reached a head when Congressmen James Langevin (D-Rhode Island) and Michael McCaul (R-Texas), armed with a letter signed by 125 of their peers, petitioned National Security Adviser Susan Rice to intervene in the matter and help “greatly narrow the range of affected technologies”.
Finally, in January 2016, following a joint hearing before two House subcommittees, Representative John Ratcliffe (R-Texas) told FCW, “If we are to expect the cybersecurity provisions of this arrangement to be workable, we need to make sure our stated intentions and actions are not contradictory. If we can’t do that, I question why we as a country are agreeing to this updated arrangement.”
The Department of State’s reconsideration also comes just in time: proposals on implementing the arrangement were due this month.