Amidst Calls for a Ban, India Leads the Debate on Lethal Autonomous Weapons

At a decisive meeting on the future of LAWS, countries such as Pakistan and Cuba have called for a pre-emptive ban, while others like the US, Germany and Russia disagree.

Geneva: “There is no such thing as an ‘ethical robot’. There won’t be one tomorrow or next week or 50 years from now. Robots are not moral beings, and therefore, the ethical responsibility must always lie with humans,” said Margaret Boden, research professor of cognitive science at the University of Sussex.

Boden, who has been working at the intersection of computing and philosophy since the early 1960s, was speaking to the Group of Governmental Experts (GGE) gathered at the United Nations office in Geneva to discuss the future of lethal autonomous weapons (LAWS).

This week, academics, non-governmental organisations and representatives of over 80 governments gathered at Palais des Nations for a decisive meeting on the future of LAWS. Organised under the Convention on Certain Conventional Weapons (CCW), the meeting was chaired by Amandeep Gill, permanent representative of India to the Conference on Disarmament.

While no formal definition of LAWS exists, they are generally understood as weapons that, once activated, can select and engage targets without further human intervention. Nations are still divided on whether this includes existing semi-autonomous weapons or refers only to sophisticated AI weapons yet to be developed.

Amidst concerns about the increasing militarisation of cyberspace and the potential of technology to disrupt democratic decision-making, the conversation around LAWS is emblematic of a larger technology-driven insecurity plaguing nations.

For countries that are hard at work nurturing the integration of technology into their domestic economies, the weaponisation of artificial intelligence represents yet another chasm that will require significant resources and immense R&D to overcome. Countries that are relatively ahead in the game are concerned with retaining their strategic advantage while not inadvertently kick-starting another global arms race. A loose coalition of technologists, academics and non-governmental organisations, gathered under the ominous-sounding ‘Campaign to Stop Killer Robots’, has instead cited the inadequacy of protections under international humanitarian law and the trigger-happy tendencies of technologically advanced nations in calling for a pre-emptive ban on autonomous weapons.

The campaign has been joined by a slowly growing number of nations (22 at the time of going to print), including Pakistan, Cuba and the Holy See, in this call for a preventive prohibition. Some consider a ban premature, given an inadequate understanding of the technology. Many more, however, are convinced that the time for concrete measures has arrived. Brazil, in its initial interventions at the meeting, noted that enough critical mass now exists to conceive of either a legal or political instrument to regulate autonomous weapons.

The GGE is the first formal proceeding on LAWS, following three rounds of informal expert meetings on the issue. In those meetings (as at the GGE), a significant hindrance has been the lack of a working definition of what LAWS are.

Other countries, primarily ones that have developed and deployed weapons with semi-autonomous capabilities, have refused to endorse a ban. The US, which recently launched the ‘Sea Hunter’, an autonomous submarine-hunting vessel capable of operating at sea for months on its own, clarified that it will continue to promote innovation while keeping safety at the forefront. Similarly, Germany, which has been fielding the automated NBS Mantis gun for forward base protection, called a ban premature. Russia echoed this position, warning against alarmist approaches that were “cerebral and detached from reality”.

India, for its part, advised balancing the lethality of these weapons with military necessity – adopting a wait-and-watch approach to how the conversation evolves.

A picture of the proceedings at the Palais des Nations. Credit: Bedavyasa Mohanty

For the most part, conversations at the GGE revolved around the ethical, moral and legal principles associated with using LAWS. Whether these can be balanced with military necessity and security concerns remains undecided.

Many AI experts gathered at the meeting seemed to share the notion that the threat associated with uncontrollable LAWS is far more severe than the possible benefits of more accurate targeting that may reduce civilian casualties. One expert called LAWS the next weapons of mass destruction, owing to the ability of a single human operator to launch a disproportionately large number of lethal weapons. A video depicting autonomous, explosives-carrying microdrones wreaking havoc was screened at a side event organised by the Campaign to Stop Killer Robots.

The video, produced by Stuart Russell of the Future of Life Institute, has been criticised by others in the scientific community for sensationalism; screening it at a gathering whose mandate is to separate fact from apocalyptic fiction, they argue, is unhelpful.

Between these two ends of the spectrum, the CCW has managed to move the debate forward on issues relating to the use of autonomous weapons: there is now broad agreement that a minimum amount of human control must be retained and that the use of these systems must be governed by international humanitarian law (IHL).

Discussions on the question of human control, examined at length both at the GGE and in conversations leading up to it, have concluded that, at a bare minimum, humans must retain operational control over these weapons – for instance, the ability to cancel an attack upon realising that civilian lives may be endangered. However, the particulars remain elusive, owing to the lack of uniformity and specificity in the language used. While many countries agree on the need for ‘meaningful human control’, few have offered clarifications on what ‘meaningful control’ entails. In an attempt to demystify these understandings, the US has offered ‘appropriate level of human judgement over the use of force’ as a more accurate framing of the issue.

Nonetheless, many issues remain unresolved even at the conclusion of the GGE. Technical questions around the operational risks associated with LAWS remain unanswered. Will technologically sophisticated weapons be vulnerable to cyber-attacks that can hijack control? How will the deployment of LAWS change the strategic balance between nations? Are weapons review processes under Article 36 of Additional Protocol I to the Geneva Conventions adequate to ensure that LAWS comply with IHL? These and many other questions were highlighted in the Chair’s report, and remain to be resolved by the next iteration of the GGE in 2018.

An effective way of approaching these issues is perhaps to discern what the international community finds unacceptable about the use of LAWS. Moving beyond the legally mandated principles of distinction, proportionality and precaution, and identifying the extent to which the conduct of warfare can be devolved to machines, is central to making progress in these debates.

As Gill, the chair, put it, the distance between the attacker and the target has been increasing since the beginning of time. Have we finally arrived at a point where that distance is unacceptable?

Bedavyasa Mohanty is an Associate Fellow at the Observer Research Foundation, New Delhi. He attended the proceedings of the GGE as an independent observer and opinions expressed above are his own.