How Indian Companies Frivolously Violate Basic Data Security Rules

Even though the provisions on information security and privacy in India’s 2011 IT rules are rudimentary, they are, more often than not, callously ignored.

Founders of technology startups focus on core service functionality and signing up users. There are many other aspects of operating a company, such as accounting, human resources and legal requirements, which they expect to get to later, when they have funding or are more established. Often, data security and privacy fall into the “later” column. However, startups that put off data security and user privacy are setting themselves up for a fall. Examples of information security getting short shrift are easy to come by, as a recent experience with one health startup in India showed.

The company, call it X, works with schools to provide health checkups for children and make their health records accessible online. A quick poke around the X website revealed several relatively simple steps that X could have taken to tighten up its data security.

Before getting into the details of X, it is worth noting that the negative effects of a security breach on a business can be significant: a loss of trust leading to fewer customers, a drop in stock market valuation, lawsuits and penalties. In May 2017, Target, a major US retail store operator, settled with multiple state governments to pay $18.5 million for a 2013 data breach in which 40 million customer credit card details were taken. This is on top of a $10 million settlement with customers in 2015. Target estimated that the total cost of the breach was $202 million. Furthermore, while the Target breach stands out in its magnitude, data breaches are more common than people realise. In healthcare alone, for example, there has been on average more than one breach a day in the US.

In addition to market risks, data security is increasingly a matter of law. Company X’s data practices appear to be in violation of several of India’s Information Technology (Reasonable security practices and procedures and sensitive personal data or information) Rules, 2011. These rules encapsulate many data protection principles that are common in data laws around the world. In India, as elsewhere, medical information is considered “sensitive personal information,” for which the rules and penalties are more stringent.

An ounce of prevention

Any company that collects data needs to begin by obtaining consent to do so, and the processes for this need to be baked in from the start. As people become more aware of how valuable their personal data is to businesses and others, and of their rights, a lack of consent could create legal headaches later. As it turns out, X does not explicitly obtain parental consent before collecting and storing medical information about children. This is a grievous violation of data collection norms, and it contravenes subsection 5 (1) of the IT Rules, which states that consent must be obtained.

Next, the ‘Terms and Conditions’ and ‘Privacy Policy’ pages matter. Whether people read them or not, they are a binding contract between the site’s users and the service provider, especially in places where the law takes consumer rights seriously. For example, when HBO Nordic was getting ready to launch, lawyers closely scrutinised the terms, which varied across countries. Startups may be able to find legal service providers who specialise in such matters. In the case of X, the Terms and Privacy pages contain dummy text copied from another, completely unrelated site, including that site’s name. Anyone who registers with X therefore has no idea what the terms of use are. When it comes to the law, this violates Section 4 of the IT Rules, which says a data collector must provide a policy for privacy and disclosure of information. It also violates subsection 5 (3), in that X does not disclose why the information is being collected, how it may be used, or details about who is collecting and retaining it.

Third, data privacy norms mandate letting people opt out or request that their information be deleted. This can mean an unsubscribe option or complete deletion of the records, but in the latter case the data really must be completely scrubbed. It is common for companies to let people unsubscribe from email distributions, only for the mails to start again later. X has no account deletion option on its site, and an email sent to a contact address requesting that all information about a child be deleted has gone unanswered for over two weeks. This violates subsection 5 (7), which says that a person must “also have an option to withdraw its consent given earlier.”

Finally, from the start companies must take reasonable steps to implement strong data encryption wherever appropriate, sensitise employees to potential threats and establish online habits that are good for security. Startups may seek an external expert to review all aspects of data security to spot potential weaknesses. X’s founder says the company has taken back-end precautions to protect data, but it does not use HTTPS on its website. This means that all data transferred to and from the site, including passwords, can be snooped on by anyone along the path, such as others on the same public WiFi. Moreover, upon registration the site sends a confirmation email containing the user’s password, which exposes it to snoopers. Taken together, these security flaws fail to live up to Section 8 of the IT Rules, which requires data collectors to put in place “reasonable security practices and procedures.”
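To illustrate the confirmation-email problem, here is a minimal sketch in Python of the safer pattern: emailing a random, single-use verification token instead of echoing the password back. The function names and URL are hypothetical, not taken from X’s actual system.

```python
import secrets

# Hypothetical sketch: a confirmation email should carry a random,
# single-use token, never the password the user just chose.

def make_verification_token() -> str:
    """Return an unguessable, URL-safe token for a confirmation link."""
    return secrets.token_urlsafe(32)  # roughly 256 bits of randomness

def confirmation_email(username: str, token: str) -> str:
    # The password never appears in the message body.
    return (
        f"Hello {username},\n"
        f"Please confirm your account by visiting:\n"
        f"https://example.invalid/confirm?token={token}\n"
    )

body = confirmation_email("parent01", make_verification_token())
```

Even if such an email is intercepted over an unencrypted connection, the attacker obtains a one-time token rather than a reusable credential.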

Startups may be tempted to put off these steps because enforcement is lax or the risk of a breach or an official audit seems low. However, there are still good reasons to take these preventive steps from the start.

A pound of cure

The first reason to maintain healthy data practices from the start is the commitment to security and service that companies owe to their customers. If a company’s stated mission is to help people live better lives, then the company should do right by its customers. X’s founder appeared surprised to learn that the text of the terms was not authentic, and that the website was not fully secure, so it seems that some of the lapses were an oversight during development.

If that reason sounds naïve and not “real-world” enough, a reason grounded in self-interest is that adding proper security later is more difficult and costly than doing it from the start. At some point, the company may lose the institutional memory of all the “ToDo” items in the code. Developers may move on or simply forget, and the vulnerabilities may sit there like a forgotten land mine. Take, for example, the recent breach at Zomato, in which 17 million customer records were taken. Not only was the breach preventable with better security practices, but the data was vulnerable because Zomato used questionable methods to store passwords. Despite several rounds of funding, Zomato didn’t address the issue until it blew up. The cost of rewriting code for a system with millions of users, and of getting all of them to change their passwords, is bound to be higher than when there were no users. Moreover, a breach can negatively impact a startup’s growth, to the point of putting it out of business.
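To make the password-storage point concrete, here is a minimal sketch, using only Python’s standard library, of the kind of salted, deliberately slow hashing that makes a stolen password database far less useful. The function names are illustrative assumptions, not Zomato’s code.

```python
import hashlib
import hmac
import os

# Illustrative only: store a per-user random salt plus a slow,
# salted hash (PBKDF2 here) instead of the password itself.

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # unique per user, stored alongside the hash
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison
```

Because every user gets a fresh salt, identical passwords produce different stored hashes, and the 200,000 iterations slow down offline guessing after a breach.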

Another reason has to do with potential acquisition and hidden liabilities. If the founders of X or any investors hope the company will some day be acquired or go public, these issues would come to light when a potential buyer does a data security audit. For a buyer from the US, the EU or any other jurisdiction where data protection laws are enforced more vigorously and civil liability is a possibility, every child’s record in the database without explicit consent is a potential civil or criminal liability. Any breach could open it to claims of negligence. It seems unlikely that any entity would want to buy into that. Even if the founders want to grow purely from revenue, any expansion into a jurisdiction that enforces data security could lead to liability.

Going back to the Target breach, the latest settlement also requires Target to take more steps to protect customer data, including appointing a person at an executive level to be responsible for information security and advising the company’s CEO and board. It must also store credit card data in an encrypted format and hire an outside party to conduct a security review. These remedial measures would have cost Target far less than $202 million, and perhaps even less than $18.5 million, had it taken them in the first place.

As Uber is finding out with respect to sexual harassment, not institutionalising acceptable norms of behaviour from the beginning can severely hurt a startup. Data security and privacy protections may appear to be less important aspects of getting an online business up and running. They need to be as integral and important a part of the development of the service as core service functions.

Sushil Kambampati (@SKisContent) is the founder of, a portal where anyone can suggest an RTI query simply and anonymously. He writes about online security and privacy.