Author: Vaishnavi M., a student of The Tamil Nadu Dr. Ambedkar Law University
To the Point
Deepfakes, synthetic media created using artificial intelligence to fabricate highly realistic images, audio, or video of individuals, pose serious threats to truth, personal autonomy, and public trust. These technologies are increasingly misused to spread misinformation, manipulate political narratives, commit financial fraud, and violate an individual's right to privacy and consent, particularly in cases involving non-consensual explicit content.
The explosive growth of generative AI tools has made the creation and distribution of deepfakes alarmingly accessible, outpacing the ability of India's current legal framework to respond effectively. While existing laws such as the Information Technology Act, 2000, the Indian Penal Code, 1860, and the Digital Personal Data Protection Act, 2023 provide fragmented remedies under provisions related to cybercrime, defamation, obscenity, and identity theft, none of these statutes explicitly addresses the unique nature of synthetic media or AI-generated deception.
This legislative gap has triggered urgent calls for clarity and reform. Legal scholars, policymakers, and digital rights advocates stress the need for a dedicated legal framework that defines deepfakes, criminalizes their malicious use, and safeguards victims’ rights. Key legal challenges include attribution of liability, standards of consent in manipulated content, jurisdiction over cross-border digital harms, and the responsibility of tech platforms in content moderation.
In this evolving legal landscape, India must strike a delicate balance between regulating harmful deepfake content and protecting freedom of expression and innovation. Proactive legislation, supported by technological safeguards and public awareness, is essential to ensure that the legal system keeps pace with the fast-changing realities of AI-generated media.
Use of Legal Jargon
The legal discourse around deepfakes often invokes terms such as mens rea and actus reus to assess the intent and the criminal act behind creating or sharing AI-generated fake content. In cases where reputational damage is caused without tangible loss, the maxim injuria sine damnum becomes relevant. Victims may also seek recourse through the right to be forgotten, especially when manipulated content violates informational privacy. Additionally, the principle of res ipsa loquitur may apply in situations where the deepfake’s very existence is evidence of wrongdoing. Public interest litigations (PILs) can be filed under Article 32 or 226 to highlight the systemic dangers posed by deepfakes. Moreover, when such content is inherently defamatory without needing proof of harm, it falls under the doctrine of defamation per se, making it easier for affected individuals to claim legal remedies.
The Proof
The sudden rise in deepfake usage for political manipulation, pornography, celebrity misinformation, and revenge porn underscores the urgent need to bring AI under a robust legal framework. A widely cited 2019 report by Deeptrace found that the number of deepfake videos online nearly doubled within months, with roughly 96% being pornographic in nature and disproportionately targeting women.
In India, several public figures, including actresses Rashmika Mandanna and Katrina Kaif, became victims of deepfakes, sparking national debate and government concern.
Abstract
This article delves into the legal implications of deepfakes in India. It explores existing statutory protections under the Information Technology Act, 2000; Indian Penal Code, 1860; and the proposed Digital India Act. The inadequacy of current frameworks in addressing AI-generated deception is analyzed, followed by landmark cases and proposed reforms. This legal commentary emphasizes the need for AI-specific regulations and a techno-legal approach balancing freedom of expression with privacy and dignity.
Case Laws
- Justice K.S. Puttaswamy v. Union of India (2017): This landmark judgment affirmed that the right to privacy is a fundamental right under Article 21 of the Constitution. Deepfakes, by intruding upon both informational and bodily privacy, pose a serious violation of this right.
- Shreya Singhal v. Union of India (2015): The Supreme Court struck down Section 66A of the IT Act, emphasizing the need to safeguard freedom of speech while permitting only constitutionally valid restrictions. This case is pivotal when considering how laws should address AI-driven content without stifling legitimate expression.
- Rituparna Chakraborty v. Union of India (2019): This case addressed the misuse of digital tools to create and share morphed images of women, establishing a precedent for legal accountability in cybercrimes, especially those targeting women's dignity and consent, issues central to deepfake misuse.
- State of West Bengal v. Committee for Protection of Democratic Rights (2010): The Court upheld the importance of conducting investigations into novel and complex crimes, including cyber offenses, under the umbrella of Article 21 protections. This reinforces the need for state responsibility in addressing digital harms like deepfakes.
- Khushbu v. Kanniamal (2010): In this defamation case, the Court emphasized the protection of individual reputation from unwarranted attacks. It is highly relevant to deepfakes, particularly those targeting public figures or celebrities, where reputational damage is significant.
Conclusion
Deepfakes, while a marvel of technological advancement, have opened Pandora’s box in terms of privacy violation, reputational harm, and misinformation. Indian law is currently ill-equipped to tackle the nuances of AI-generated deception. While some relief can be found under the IT Act and IPC, comprehensive legislation is imperative. India must take a preventive-corrective-punitive approach. Until then, judicial interpretation and executive vigilance remain our primary defense.
FAQs
1. Are deepfakes illegal in India?
Not explicitly. However, they may fall under existing laws such as the IPC (for forgery or defamation) and IT Act provisions on obscenity and privacy.
2. What action can a deepfake victim take?
A victim can file a complaint under IPC Sections 500 (defamation) or 509 (insulting a woman's modesty), or under IT Act Sections 66E (violation of privacy) and 67 (publishing obscene material), and approach the local cybercrime cell or the National Cyber Crime Reporting Portal.
3. Can social media platforms be held liable?
Yes, conditionally. Under the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, platforms are required to take down unlawful content upon notification. If they fail to act, they can lose the safe harbour protection available under Section 79 of the IT Act.
4. What is the government doing about deepfakes?
The Ministry of Electronics and IT (MeitY) has warned platforms and is working on new guidelines under the proposed Digital India Act.
5. Does India recognize the “right to be forgotten”?
Currently recognized by some High Courts, but not universally enforceable. The DPDP Act is expected to address this right more clearly.
