LEGAL CHALLENGES OF DEEPFAKE TECHNOLOGY AND AI-GENERATED CONTENT IN INDIA

Author: Chhavi Das, ILS Law College

To the Point

Deepfakes and AI-generated content can be powerful tools for entertainment and creative exploration, but they are potent weapons as well. In India's vast online space, the existing legal architecture relies on scattered provisions (IPC, IT Act, election laws) that were not drafted to address AI-based impersonation. The government has begun to respond: in 2023-24 it issued advisories urging platforms to move quickly against deepfakes (e.g., takedowns within 36 hours), and the Election Commission directed parties to remove manipulated election content within hours. Any new law, though, would need to be narrowly tailored to survive constitutional review (e.g., Shreya Singhal on overbroad speech regulations). Comparative models (EU, US, China) focus on labelling and transparency for synthetic media. Overall, India requires a carefully considered strategy that integrates legal reform, technological solutions (filters, watermarks), and public education to address deepfake harms without excessively curbing lawful speech.

Use of Legal Jargon

Deepfakes are AI-generated or altered media that mimic how a real person looks or sounds, often making it hard to tell what is fake and what is real. They can convincingly impersonate public figures and private individuals alike.

AI-Generated Content: Any content (text, images, video, sound) generated independently by artificial intelligence programmes, without (or beyond) direct human creativity.

Mens Rea: The mental state showing that a person intended to commit a crime or knew their actions were wrongful. For deepfakes, proving mens rea might mean showing that the creator knew the content was false or defamatory.

Res Ipsa Loquitur: A Latin maxim meaning "the thing speaks for itself." Not strictly on point for deepfakes, but analogous to burden-shifting: once a deepfake is apparent, intent or negligence in creating it can sometimes be inferred.

Intermediary Liability: The legal accountability of online platforms (e.g., Facebook, YouTube) for user-generated content. Intermediaries forfeit safe-harbor protection under India's IT Act and Rules if they fail to remove illicit content upon notice.

Doctrine of Proportionality: A constitutional principle (applied in Article 19(2) analysis) requiring that any limitation on free speech be narrowly framed to achieve a legitimate state interest. Overbroad or vague prohibitions on speech (like Section 66A of the IT Act) have been invalidated on this basis, as in Shreya Singhal v. UOI.

The Proof

  • Draft Amendments to IT Rules (2025): The government has proposed new obligations for social media intermediaries in the context of deepfakes. The 2025 draft of the Intermediary Guidelines and Digital Media Ethics Code Rules would reportedly require platforms to deploy proactive AI filters and watermarking capabilities, and to remove identified deepfake material within 36 hours of being notified. This formalizes an earlier advisory approach; a November 2023 advisory, for example, already "mandated" intermediaries to take down malicious content within 36 hours. Non-compliant platforms risk losing safe-harbor immunity, exposing them to liability under the IPC. Critics warn that automated detection is imperfect and that expedited takedowns raise free-expression concerns. (A toy sketch of what watermark/metadata tagging might look like appears after this list.)
  • Election Commission Advisory (Apr. 2024): Ahead of the 2024 general elections, the Election Commission issued an advisory prohibiting political parties from using "deepfakes and other kinds of disinformation" in their campaigns. Parties were directed to "delete any deepfake audio or video within three hours of becoming aware of its presence". The advisory followed a Delhi High Court ruling and heated public debate over a number of viral doctored videos (see below). The ECI's move signals official acknowledgement that AI-manipulated content distorts democratic debate, but the mechanism relies on self-reporting by parties and lacks an independent enforcement arm.
  • Notable Cases and Incidents:
  1. Mandanna Deepfake (Dec. 2023): After a non-consensual deepfake clip of actress Rashmika Mandanna (and a companion case involving another actor) went viral, Delhi Police registered FIRs. The complaint invoked IPC Sections 465 and 469 (forgery) and IT Act Sections 66C and 66E (identity theft and violation of privacy). This was among the first uses of the forgery and identity provisions against a deepfake, and it shows that existing statutes, albeit generic, can reach at least some of the harms.
  2. Amit Shah Video (Apr. 2024): Delhi Police detained six individuals linked to the Congress party's social media department for distributing a doctored video of Home Minister Amit Shah. Shah's office complained that the video attributed to him statements "he never even thought of". The police invoked IPC Sections 469 and 500 (forgery and defamation) and IT Act provisions on cheating by personation (e.g., Section 66D). This shows that conventional IPC/IT sections on reputation and identity are being stretched to tackle deepfake-driven political misinformation.
  3. PIL for Deepfake Law (2023–25): Legal activists have begun pressing for a dedicated law. For instance, a PIL moved by lawyer Narendra Goswami sought a court-monitored expert committee to prepare model AI/deepfake regulations under the IT Act. In May 2025, the Supreme Court refused to entertain the PIL, noting that the Delhi High Court was already hearing related issues. The Court did not go into the merits but indicated that fresh rules could more effectively come through legislation than judicial decree. Meanwhile, a private petition in the Delhi High Court (Dr. Sachin Gupta v. Union of India, May 2024) pointed to the rising incidence of deepfakes but did not yield a precedent-setting ruling.
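
For illustration only: the draft Rules do not prescribe any particular tagging format, and real deployments would likely follow an industry provenance standard such as C2PA. The short Python sketch below shows one naive way a generator tool could embed an "AI-generated" label in a PNG's metadata and a platform could read it back; the field names (ai_generated, generator) are hypothetical, not drawn from the Rules.

    # Minimal sketch of metadata tagging for AI-generated images (PNG only).
    # Assumes the Pillow library (pip install Pillow); field names are illustrative.
    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    def tag_as_ai_generated(src_path: str, dst_path: str, generator: str) -> None:
        """Generator side: embed a provenance label in the PNG's text chunks."""
        img = Image.open(src_path)
        meta = PngInfo()
        meta.add_text("ai_generated", "true")   # hypothetical field name
        meta.add_text("generator", generator)   # e.g., the model or tool used
        img.save(dst_path, pnginfo=meta)

    def is_tagged_ai_content(path: str) -> bool:
        """Platform side: check whether the provenance label is present."""
        img = Image.open(path)
        return getattr(img, "text", {}).get("ai_generated") == "true"

Note that such plain metadata is trivially strippable on re-encoding or screenshotting, which is one reason critics doubt that tagging alone can police deepfakes.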

Abstract

India does not have a dedicated legal framework to control deepfakes and instead relies on scattered provisions of the IT Act, 2000 and the IPC covering impersonation, obscenity, defamation, and hate speech. These provisions, however, were written before the advent of AI and cannot handle complexities such as proving mens rea for viral misinformation or fixing intermediary liability. Consequently, numerous perpetrators go unpunished.

The 2025 draft IT Rules recognize these growing concerns, proposing proactive detection (e.g., watermarking, AI filters) and takedown within 36 hours. Such measures, however, must clear constitutional hurdles. In Shreya Singhal v. Union of India (2015), the Supreme Court invalidated vague speech laws, emphasizing that restrictions on speech protected by Article 19(1)(a) must be narrowly drawn under Article 19(2). Blanket prohibitions on AI-generated political content, for example, could stifle satire or commentary.

Privacy concerns also loom large. Justice K.S. Puttaswamy v. Union of India (2017) established privacy as a constitutional right. Victims of deepfake pornography or impersonation invoke this right, but current provisions such as Section 66E of the IT Act or Section 354C of the IPC offer only limited recourse.

Internationally, the EU, U.S., and China are implementing AI-specific legislation focused on watermarking, disclosure, and sanctions. India's 2025 proposals are positive but must take a proportionate approach: clear definitions, transparent safeguards, and rights-based protections. Legal reform coupled with technological tools and public awareness is the key to fighting back against deepfakes.

Case Laws

  1. Shreya Singhal v. Union of India (2015) 5 SCC 1: The Supreme Court struck down Section 66A of the IT Act in its entirety as unconstitutional, void for vagueness and overbreadth. Vague speech restrictions, the Court held, chill free expression and fail Article 19(2)'s test of reasonableness. Singhal is a touchstone: any regulation of deepfakes, however well-intentioned, must be narrowly defined and proportionate, or it risks violating Article 19(1)(a).
  2. Subramanian Swamy v. Union of India (2016) 7 SCC 221: The Court upheld criminal defamation (IPC Sections 499–500) as constitutionally valid, holding that reputation is a legitimate state interest under Article 21 and that defamation laws are not arbitrary per se. This balance suggests that deepfake harm to reputation can be addressed under existing defamation law without infringing free speech: as Swamy holds, limits on speech are valid when narrowly drawn to protect dignity.
  3. Justice K.S. Puttaswamy v. Union of India (2017) 10 SCC 1: A seminal ruling recognizing the right to privacy as inherent in Article 21. Under Puttaswamy, invasions of personal information or identity must satisfy strict proportionality. Applied to deepfakes, Puttaswamy supports claims against non-consensual sexualized AI content or identity theft: victims can assert that their dignity and privacy have been invaded, subjecting any law that permits, or fails to prohibit, such invasions to constitutional scrutiny.
  4. PIL on AI/Deepfakes (Narendra Kumar Goswami v. UOI, 2025): While not a published judgment, this ongoing litigation illustrates judicial caution. Goswami's petition sought a court-monitored committee to draft deepfake regulations (including watermarks and a 24-hour takedown mandate). In May 2025 the Supreme Court declined to entertain the PIL, noting that the Delhi High Court was already considering related orders. In other words, absent parliamentary intervention, the courts would rather leave policy-making (such as delineating deepfake offences) to the executive or legislature. Still, the hearing highlighted that deepfakes are high on the judiciary's agenda and that government assurances (through advisories and draft rules) will significantly shape legal outcomes.

Conclusion

Deepfakes are a double-edged sword: they enable creative media uses and satire, but they can also ruin reputations, inflame social tensions, and distort democracy. In India, the legal response has been almost entirely reactive. Authorities are deploying traditional laws (forgery, defamation, obscenity, etc.) against the most evident abuses, but no single statutory framework exists. The 2025 draft IT Rules (and current MeitY advisories) demonstrate heightened awareness, requiring rapid takedown of AI-altered content and the use of technical measures (such as watermarking and metadata tagging) to identify deepfakes. These plans follow global trends favouring transparency and proactive moderation. Any resolution, however, must be approached with caution: under Article 19(1)(a), excessively restrictive curbs on speech are impermissible.

Therefore, legislation targeting deepfakes needs to be precise: for example, distinguishing malicious political deepfakes from genuine parody, or criminalizing commercial deepfake scams without ensnaring harmless AI art. Similarly, privacy interests (recognised in Puttaswamy) mean one's likeness cannot be used without permission, though limited controls (such as requiring AI-generated news to be labelled as such) could be acceptable. Finally, defeating deepfakes will demand a multi-faceted strategy: legal reform to close loopholes and calibrate punishments, platform responsibility to flag and watermark fake media, technical solutions (digital signatures, AI flags; a toy illustration follows below), and media literacy so that citizens can spot fabrications. Only through an integration of law, technology, and education can India hope to preserve trust in its digital information ecosystem while safeguarding basic liberties. The challenge is substantial, but the interests at stake (privacy, reputation, and democratic integrity) are greater still.
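
As a minimal illustration of the "digital signatures" idea above (an assumption about mechanism, not a scheme any Indian rule prescribes): a publisher could sign a media file's bytes so that subsequent tampering is detectable. Real provenance systems (e.g., C2PA) use public-key certificates; the Python sketch below uses a shared-secret HMAC purely for brevity, and all names are hypothetical.

    # Toy tamper-evidence sketch using only Python's standard library.
    # A real system would use public-key signatures, not a shared key.
    import hashlib
    import hmac

    SHARED_KEY = b"publisher-registered-key"  # hypothetical pre-shared key

    def sign_media(media: bytes) -> str:
        """Publisher side: derive a provenance tag over the file's bytes."""
        return hmac.new(SHARED_KEY, media, hashlib.sha256).hexdigest()

    def verify_media(media: bytes, tag: str) -> bool:
        """Platform side: any edit to the bytes invalidates the tag."""
        expected = hmac.new(SHARED_KEY, media, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, tag)

A platform receiving an unsigned file, or one that fails verification, cannot conclude it is a deepfake, only that its provenance is unverified; this is why labelling regimes pair such checks with human review.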

FAQs

  1. Are deepfakes illegal in India? 

There is no Indian law that prohibits "deepfakes" by name, but offending material can be prosecuted under existing laws. Defamatory deepfakes fall under IPC Sections 499–500, non-consensual pornographic deepfakes under IT Act Sections 67/67A and IPC Section 292, and identity-based offences under IT Act Sections 66C–66D. Police have already invoked IPC Sections 465 and 469 and IT Act provisions in recent cases.

  2. What are the punishments for sharing deepfakes?

Penalties for deepfakes depend on the offence. Sexually explicit deepfakes can attract up to 5 years' imprisonment for a first offence under IT Act Section 67A, and up to 7 years for a subsequent offence. Obscene but non-sexual deepfakes fall under Section 67 (up to 3 years, then 5 years). Defamation (IPC Sections 499–500) carries up to 2 years. Forgery (IPC Sections 465 and 469) carries up to 2 and 3 years respectively. Cheating by personation using a computer resource (IT Act Section 66D) carries up to 3 years. Overall, offenders can face up to 7 years' imprisonment together with fines, depending on the charges.

  3. What are the newly proposed obligations for platforms?

The 2025 draft IT Rules would require social media platforms to proactively detect AI-generated content using automated tools and to delete deepfakes or disinformation within 36 hours of being notified. Mandatory watermarking or metadata tagging is also proposed. Non-compliance could cost platforms their safe-harbor immunity. These obligations build on the 2021 IT Rules and the 2023 advisories.
