The explosion of the SpaceX Falcon 9 rocket on its Cape Canaveral launch pad on September 1 and the crash of a Tesla car operating on Autopilot in June are powerful reminders that while technology driven by artificial intelligence (AI) is advancing at a dizzying pace, regulators cannot afford to lag behind.
Although self-driving cars have not yet hit Indian roads, app-based taxi aggregators such as Uber and Ola do operate in the country. We have all experienced the many benefits of AI – email spam filtering, web page translation and Facebook’s automatic recognition of friends and family for tagging in photographs. A number of AI startups in India are helping people book cabs, order food, pay bills and recharge mobile phones. But while US federal regulators are readying draft regulations on driverless vehicles, Indian law has lagged behind, with the ethical and regulatory implications of AI still unaddressed.
AI has made great progress with developments in cloud-based infrastructure, which allow machines to learn from the data they analyse. An AI technique called ‘deep learning’ enables systems to learn by processing huge numbers of examples rather than by being explicitly programmed. However, as the Tesla crash showed – it occurred when a tractor-trailer turned left in front of the car and the car failed to apply the brakes – AI systems can malfunction, and human error can compound the result. Lawmakers in India will need to address the question of who is liable in such cases. For instance, the ‘autopilot’ feature in the ‘self-driving’ Tesla was reportedly intended to assist the driver, not replace him. If an accident occurs, is the driver solely liable, or does he share liability with the car manufacturer? Similarly, in a situation that has already arisen in India, is an app provider exempt from liability for an assault on a woman by a taxi driver on the grounds that it is only an ‘aggregator’? Will a company providing an AI chatbot that offers medical advice be exempt from liability for medical negligence?
In the common law world, there is already concern that AI systems will elude traditional legal concepts such as ‘foreseeability’ in tort law. For a person to be liable in negligence, the harm that occurred must have been “reasonably foreseeable”. AI systems, however, are designed to be creative and to keep learning from the data they analyse; they may therefore act in ways that are not reasonably foreseeable by their designers. While tort law concepts may need to evolve, an even greater problem in India is that tort law is practically unenforceable. Since civil suits lie pending for decades, there is little concern in civil society about liability for negligence. And since tort cases are rarely heard, there is hardly any case law either. There is no jurisprudence in India on product liability and no common law tort of wrongful death. Instead, case law has developed outside the civil courts, in the consumer fora, for the provision of defective goods and services under the Consumer Protection Act, 1986. Does this mean that India will set up separate tribunals for cases against chatbots? Indeed, the adjudicative machinery and the law that should have been in place by the industrial age are still not available in India, even as we leap ahead into the AI era. Tort law is not the only area of concern.
AI is expected to disrupt labour markets by automating many tasks. Unlike in the industrial era, it will not be low-skilled workers who lose their jobs but professionals who perform relatively routine activities. NASSCOM estimates that 10% of IT jobs will be lost to automation over the next ten years. Several jobs in the BPO/outsourcing sector that are routine or commoditised are likely to be replaced by automation. This will affect India in particular, since it is an outsourcing hub.
However, India’s labour laws are archaic and have remained relatively unchanged since the socialist era of the 1950s. Factories with over 100 “workmen” still cannot freely dismiss them or lay them off. It is arguable that even software engineers fall within the definition of “workmen” under the Industrial Disputes Act, 1947. Labour laws will therefore have to be amended for India’s workforce to make the transition to higher-value, non-routine functions.
At the same time, constant retraining will have to be provided for people in the technology sector to find new positions. While traditional areas of civil law, such as tort and labour law, have not been modernised to meet the demands of an industrial society, Indian law also falls short in the newer statutes passed ostensibly to keep pace with changes in technology.
Limits of legislation
The Information Technology Act, 2000 does not deal with telephony or the internet. The Act was intended primarily to grant digital signatures and electronic records the same legal status as physical signatures and paper records. The only legislation dealing with telephony is the Indian Telegraph Act, 1885. Although the internet is a late 20th century phenomenon, the relevant legislation remains a 19th century colonial-era statute.
The Telecom Regulatory Authority of India (TRAI) has had to step in to fill the legislative and regulatory void, as shown recently by its rejection of Facebook’s Free Basics scheme and its valiant upholding of the principle of net neutrality. This raises the question of whether fundamental internet concepts such as net neutrality should be embodied in legislation, instead of having to be articulated by the regulator each time they are challenged.
Data is at the core of AI; however, there isn’t any legislation in India focused on protecting privacy rights in data. In 2009, the IT Act was amended to include section 43A, which creates a private cause of action for compensation against a company that is negligent in implementing and maintaining reasonable security practices and procedures in handling sensitive personal data, thereby causing wrongful loss or wrongful gain. This provision, which created a statutory tort liability, was meant to assuage the concerns of foreign outsourcers of data after a spate of identity theft cases.
In addition to being limited to sensitive personal data, the provision is practically unenforceable because, as mentioned, tort cases take decades to be heard in India. Therefore, the likelihood of anyone successfully suing a company and obtaining compensation under section 43A within a reasonable period of time is remote.
Moreover, section 43A does not address issues such as whether people have privacy rights in data, what data can be used for and, notably, whether it can be sold or transferred.
The limitation of liability under the IT Act is also unsuited to the AI era. Section 79 of the IT Act embodies a concept imported from US law: that IT service providers are to be treated in the same manner as phone companies or the postal service – mere carriers of content that cannot be held liable for that content.
Accordingly, section 79 exempts intermediaries such as ISPs from liability for any third-party information, data or communication link made available or hosted by them, except in certain limited circumstances. However, this concept of treating ISPs like telephone companies is woefully inadequate even for the new app-based technologies, as demonstrated by Uber’s claim that it is only an aggregator and cannot be held liable for what occurs during a taxi ride.
India has earned a reputation as an international technology hub, with Bengaluru vying with Silicon Valley. It is time the law and the adjudicative machinery also made the transition into the age of AI.