Recently, with the advent of generative artificial intelligence (Gen AI), there has been an increase in digitally altered synthetic media known as deepfakes. These can take various forms, including audio, images and video. Through deepfakes, movie scenes, songs and popular media are being altered. This has led to a dangerous atmosphere in which Gen AI is being used to fabricate reality and mislead people.

Recently, the Delhi high court, in Sadhguru Jagadish Vasudev v Igor Isakov & Others (2025), protected the personality rights of the spiritual guru against AI deepfakes morphing and doctoring his voice and speeches. The court observed that the defendants had mala fide intent in using modern technology to modify his images, voice, likeness etc for commercial gain. The court also observed that, if not controlled, this kind of modification of media will spread like a pandemic with wide, uncontrollable repercussions, especially since it is being spread on social media.

Sadhguru may not be alone; several actors and popular media personalities have approached courts seeking protection of their personality rights, image rights and, most importantly, their privacy against the use of Gen AI. Most recently, the actor Dhanush voiced his concerns regarding the altering of a movie’s ending despite his objections, and hoped for stricter regulation.

It is also a matter of concern that deepfakes are being used to generate pornographic videos. Recently, the Delhi high court, by way of an injunction, issued a takedown order against the creation of pornographic content using the body of a social media influencer. The court, on examining the material, was constrained to note that the content was completely ‘appalling,’ ‘deplorable,’ ‘defamatory,’ and a ‘patent breach of the fundamental rights of the plaintiff.’ The court further asked for the basic subscriber details to be disclosed.
The high court has, in separate proceedings, constituted a committee on deepfakes, and a report has been sought.

The Union government has also tried to address the problem of deepfakes. It has highlighted that the existing regime under the Information Technology Act, 2000 (IT Act), the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (2021 IT Rules) and the Bharatiya Nyaya Sanhita can deal with deepfakes. It has also issued multiple advisories on the subject. Three of these advisories, addressed to intermediaries (companies offering services), deal with AI deepfakes.

The first advisory was issued on November 7, 2023. It, amongst other things, reportedly required intermediaries to identify misinformation and deepfakes to the extent that such information violates the rules and regulations. Further, as per the advisory, users must not host such information, content or deepfakes, and intermediaries must remove such content within 36 hours of reporting. Content which is on the face of it sexually explicit, or which impersonates (in electronic form) the complaining individual, must be removed within 24 hours of receipt of a complaint, as per Rule 3(2)(b) of the 2021 IT Rules. This could cover the takedown of sexually explicit deepfakes, including those which contain full or partial nudity or depict the individual in a sexual act or conduct. The Rule could further cover the takedown of deepfakes which impersonate an individual, provided the instance is reported by them to the company. It also covers the takedown of artificially morphed images. Once a complaint is received, the intermediary must take all reasonable and practicable measures to remove or disable access to the content which is hosted, stored, published or transmitted by it.
The second advisory was issued on December 26, 2023. In it, the government asked intermediaries to strictly comply with the provisions of the 2021 IT Rules. According to the advisory, intermediaries must inform users of the consequences of posting content prohibited by Rule 3(1)(b) of the 2021 IT Rules. They must also inform users of the penal provisions contained in the Indian Penal Code, 1860 and the IT Act at the time of first registration, at every instance of login, and while sharing and uploading content. The advisory also states that user agreements and terms of service must state that intermediaries have an obligation to report violations to law enforcement agencies. Intermediaries must also make it easy for users to report content. The government warned that non-compliance with the advisory would put an intermediary at risk of losing the safe harbour protections granted under sections 79(1) and 79(2)(c) of the IT Act.

The third advisory, dated March 15, 2024, requires the watermarking of deepfake content and making traceable any person who changes the marking of deepfake content once it is watermarked.

The advisory dated December 26, 2023 appears to require intermediaries to actively monitor deepfake content, report violations to law enforcement agencies and mark content as deepfake content. However, advisories do not have the same force as statutes, rules, notifications and regulations. This is why it is important that the government strengthens the regulation of deepfakes in India. The expectation that intermediaries will actively monitor and report violations of Rule 3(1)(b) is unrealistic considering the sheer volume of deepfakes being generated by users every second.
A more realistic approach (which is currently enforced through the 2021 IT Rules) is to require intermediaries to expeditiously deal with content reported by other users or concerned persons. Active monitoring of content could negatively impact the privacy users expect while communicating with Gen AI systems, and would make it difficult to enhance privacy when it comes to Gen AI chatbots. Given that how content created by people is perceived is subjective, such monitoring would require sifting through and classifying user content, which would prove too onerous and should not be the responsibility of an intermediary.

Recently, in response to unstarred question No. 708 on the regulation of generative AI tools, answered by the Minister of State for Electronics and Information Technology, Jitin Prasada, the government specified the provisions in existing law which govern artificial intelligence. According to the government, sections 66C, 66D, 66E, 67A and 67B of the IT Act can be invoked for AI misuse. The government has also cited sections 111, 318, 319, 353 and 356 of the Bharatiya Nyaya Sanhita, which criminalise offences such as continuing unlawful activity (including cyber-crime) and defamation. The government has also referred to the Digital Personal Data Protection Act, 2023 (which has not yet come into force), the 2021 IT Rules and a CERT-In advisory on steps to be taken to minimise the adversarial effects of AI.

Deepfakes also pose a potentially serious threat to intellectual property rights, since movies, songs, eBooks and audio-visual creations can easily be morphed or digitally altered. There can be instances where the altering of such media is done for comedic or satirical purposes. However, altering such media without the consent of the relevant persons poses serious ethical and legal risks.
Digitally altered media can prove beneficial for reducing the costs of making movies or content, making instructional videos etc, but ideally with consent obtained from the relevant persons.

The government has indicated that it is not going to adopt the heavy AI regulation model followed by the USA and the EU. It intends to encourage innovation, creativity and entrepreneurship while ensuring that no harm occurs to society at large. However, a separate law specifically dealing with AI deepfakes is the need of the hour. The current ad hoc regime relying on secondary laws, notifications and advisories is not enough to deal with the harms which Gen AI can cause to users.

Deepfakes can spread at alarming speed and can be difficult to control. They can easily be used to create social unrest, spread false propaganda, hate mongering and misinformation, build negative narratives around various issues, and lead to a proliferation of cyber frauds. Therefore, it is imperative that the government brings in a law which defines and regulates the bona fide use of deepfakes while penalising their misuse.

As per an Economic Times report, AI-powered ‘nudify’ Apps, which generate synthetic deepfake nude photos, are leading to increased cases of sextortion directed at minors. It also reported that Meta is filing a lawsuit against a nudify App for violating its policy on publishing ads on its platform. Apps specifically marketed and directed at generating unlawful content are a matter of great concern and require policy intervention.

Going deeper

The DPDP Act, the IT Act, the 2021 IT Rules and even the Bharatiya Nyaya Sanhita collectively contain provisions which can address the problem of AI-generated deepfakes transmitted electronically. Further, the advisories issued by the government also provide guidance on dealing with unlawful deepfake content.
If an AI-generated deepfake is obscene or pornographic or impersonates another person, it will be covered by section 66E of the IT Act, which punishes the violation of privacy where an individual intentionally or knowingly captures, publishes or transmits the image of a private area of any person without his or her consent, under circumstances violating their privacy. Section 67 deals with the punishment for publishing or transmitting obscene content in electronic form, section 67A with the punishment for publishing or transmitting material containing a sexually explicit act in electronic form, and section 67B with the punishment for publishing or transmitting material depicting children in sexually explicit acts etc in electronic form. All these provisions carry significant fines and jail time if enforced and implemented.

Rule 3(1)(b) of the 2021 IT Rules specifically requires platforms to inform users not to host, upload, store or share content which is obscene, pornographic or paedophilic, or which is invasive of a person’s privacy including bodily privacy, along with the consequences for non-compliance. Yet Gen AI is being used to generate content which can fall foul of these provisions.

Rule 4 of the 2021 IT Rules imposes additional due diligence obligations on significant social media intermediaries. As per a gazette notification dated February 25, 2021, a significant social media intermediary is one which has 50 lakh or more registered users in India. Consequently, the leading Gen AI companies may qualify as significant social media intermediaries and need to implement additional due diligence measures.
Rule 4(4) of the 2021 IT Rules requires significant social media intermediaries to implement technology-based measures, including automated tools or other mechanisms, to proactively identify information that depicts any act or simulation, explicit or implicit, of rape or child sexual abuse or conduct, or any information identical to that previously removed or disabled by the intermediary under Rule 3(1)(d). The intermediary must display a notice to any user attempting to access such information, stating that it has been identified as falling within the categories specified under the sub-rule. The provision also makes it clear that the measures taken must be proportionate, having regard to the freedom of speech and expression and the privacy of users on the platform, including interests protected through the appropriate use of technical measures.

Most Gen AI companies have formulated content and safety guidelines which bar content such as sexual content, content generated without consent, or that which promotes self-harm. If a user inputs something which triggers this content or safety policy, the Gen AI will not generate the output and will instead provide reasons why it is unable to do so. It is unclear whether such organisations immediately review content which triggers the content and safety policy and report it to law enforcement agencies. As it stands, while significant social media intermediaries are required to build tools to scan content on their systems, this does not conclusively indicate whether unlawful content is being automatically monitored and reported, or taken down only once a user, a court or a government agency reports it to the intermediary.
Given that users freely interact with Gen AI, it would be disproportionate to require intermediaries to automatically report any perceived violation of their content and safety guidelines to law enforcement. Clearly, there are some AI systems with fewer guardrails, which are purportedly being utilised to generate unlawful deepfakes and ‘not safe for work’ (NSFW) content. This can be resolved by having uniform statutory requirements for Gen AI, which then find their way into content and safety guidelines.

The DPDP Act protects the personal data of individuals and makes consent, or certain legitimate uses, the primary basis for processing data, as is evident from section 4 of the Act. Section 5 details how consent is to be obtained, while section 6 specifies the mandatory nature of the consent obtained from a user. Crucially, the DPDP Act does not apply to data made publicly available by a user. Section 3(c)(ii) of the DPDP Act states that the provisions of the Act will not apply where personal data is made, or is caused to be made, publicly available by the individual to whom the data relates, or by any other person who is obligated under any law in force in India to make that data publicly available. The illustration provided to this provision interestingly states that if an individual X, while blogging her views, has publicly made her personal data available on social media, then the provisions of the Act will not apply. This can seriously impact users: if their social media accounts are public rather than private, photos and videos uploaded to such accounts can end up being utilised by AI to generate digitally altered media of those persons.
The question which needs to be asked is whether those who use social media and have public accounts, such as influencers, celebrities, politicians and journalists, deserve lesser protection from Gen AI-created deepfakes than those who have private accounts and do not make their data publicly available.

The use of deepfakes raises complex subjective issues, such as what can be considered lawful and unlawful. However, some safeguards and guardrails have to be built into Gen AI systems to minimise the harm caused to all affected parties.

And so…

Currently, there is no specific law in India governing Gen AI. What India has is a patchwork of legislation. This is secondary legislation which is being utilised to deal with the harmful effects of Gen AI. However, given that the primary legislation, the IT Act, provides for blocking orders on only limited grounds, Gen AI Apps themselves may go unregulated and remain freely available on smartphone App stores. To be sure, outlawing deepfakes alone is not the answer, as it could impact even deepfakes which do not target specific individuals but are fictional or satirical.

Instead of advisories and a patchwork of legislation, what is required is a separate and distinct law on Gen AI, or specifically on unlawful deepfakes. The government could define in the law what a deepfake is and, at a minimum, what constitutes lawful and unlawful deepfakes, with illustrations. It could provide specific penalties for generating, publishing or transmitting unlawful deepfakes, such as those which blatantly violate copyright or are non-consensual. The penalties imposed need not be criminal; they could simply be monetary. The need of the hour is to provide clarity to users on what is permissible and impermissible use of Gen AI. The 2021 IT Rules do give guidance on what kind of content is impermissible. However, the consequence for non-compliance is termination of access to the service.
Currently, it is the IT Act and the Bharatiya Nyaya Sanhita which impose criminal penalties for unlawful deepfakes. The government must strike a careful balance and ensure that there is no blanket ban on deepfake content such as satire, comedy and fictional content. The regulation must be careful not to stifle the freedom of speech and expression, and should remain within the confines of reasonable restrictions. Outlawing deepfakes in their entirety is not a solution, because image generation is an integral feature of Gen AI systems. However, it is essential to protect users and citizens from the harmful effects of Gen AI and provide some deterrence against its misuse.

Raghav Tankha is a lawyer practising in Delhi. Views are personal.