Regulating Deepfake Technology: The Legal Challenges and Way Forward


Author: Ritika Ranjan, USLLS


To the Point


Deepfake technology, driven by advanced artificial intelligence, enables the creation of hyper-realistic fake images, audio, and videos, offering remarkable creative possibilities but also posing serious legal and ethical challenges. While it can enhance storytelling, education, and entertainment, its misuse for misinformation, defamation, political manipulation, fraud, and non-consensual pornography raises grave concerns about privacy, consent, and security.
In India, the absence of a specific legal framework to regulate deepfakes leaves victims vulnerable and complicates enforcement. Existing provisions under the Information Technology Act, 2000, and the Indian Penal Code provide some remedies but are insufficient to address the unique harms caused by deepfakes. Issues such as violation of privacy rights, reputational damage, threats to democratic processes, and the delicate balance between freedom of expression and preventing harm highlight the urgent need for targeted legislation.
India should enact laws explicitly criminalizing malicious deepfake creation and distribution, strengthen privacy protections, hold platforms accountable, and encourage technological solutions for detection and prevention. Additionally, public awareness and digital literacy are essential to mitigate the impact of fake content. A balanced approach that safeguards individual rights without stifling innovation is crucial to harness the benefits of deepfake technology while protecting society from its risks.


Use of Legal Jargon


In discussing deepfake technology and its legal implications, it is essential to understand certain legal terms and doctrines that form the backbone of regulatory discourse.
Right to Privacy: The right to privacy has been recognized as a fundamental right under Article 21 of the Indian Constitution in the landmark judgment of Justice K.S. Puttaswamy (Retd.) v. Union of India. Deepfakes often involve the unauthorized use and manipulation of a person’s likeness or voice, amounting to a direct invasion of this right.
Defamation: Under Section 499 of the Indian Penal Code (IPC), since re-enacted in the Bharatiya Nyaya Sanhita, 2023, defamation covers imputations made with the intent or knowledge that they will harm a person’s reputation. Deepfakes can be used to create false narratives, thereby harming an individual’s social and professional standing.
Data Protection: India’s data protection regime is still taking shape; the Digital Personal Data Protection Act, 2023 has replaced the lapsed Personal Data Protection Bill, 2019 but is yet to be fully operationalized. The concept emphasizes safeguarding personal data from unauthorized processing. Deepfakes involve the misuse of biometric data (faces, voices), necessitating stricter data protection mechanisms.
Freedom of Speech and Expression: Protected under Article 19(1)(a) of the Indian Constitution, this right is not absolute and is subject to reasonable restrictions under Article 19(2), including defamation, public order, and decency. Deepfakes challenge the balance between creative expression and societal harm.
Mens Rea and Strict Liability: Criminal jurisprudence traditionally requires a guilty mind (mens rea) for liability. However, in cases involving deepfakes disseminated for harmful purposes, strict liability principles might be considered, especially when public interest or large-scale harm is involved.
Intermediary Liability: Under Section 79 of the IT Act, 2000, intermediaries (like social media platforms) have conditional safe harbor protections provided they act expeditiously to remove unlawful content upon notice. The rise of deepfakes has renewed discussions on imposing stricter obligations on intermediaries to detect and remove synthetic and harmful content proactively.
Tort of Misappropriation of Likeness: Although not codified explicitly in India, this common law tort protects an individual against unauthorized commercial or personal use of their identity or likeness, which is highly relevant to deepfakes.
Cyber Defamation and Cyber Crime: Deepfakes, when used maliciously online, can be prosecuted as cyber defamation or other cyber crimes under Sections 66C (identity theft), 66D (cheating by personation), and 67 (obscenity) of the IT Act.
Doctrine of Reasonable Expectation of Privacy: This doctrine implies that individuals have a right to expect privacy in their personal lives, including their images and videos. Deepfakes often violate this expectation.
Due Diligence Obligation: The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 require platforms to exercise due diligence in monitoring and controlling content. Failure to do so can cost them their safe harbor protection, making them liable for deepfake dissemination.
Together, these legal concepts provide a comprehensive framework for analyzing the threats posed by deepfakes and designing targeted legal interventions. They also highlight the urgent need to update and harmonize existing laws to respond to emerging technological challenges.

The Proof


A deepfake is a form of synthetic media where artificial intelligence, particularly neural networks like Generative Adversarial Networks (GANs), is used to superimpose or replace a person’s likeness in an image, audio, or video with that of another individual. While this technology initially emerged as a creative tool for entertainment and filmmaking, it has quickly become a double-edged sword, raising serious legal and ethical challenges around the world.
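For readers unfamiliar with the underlying mechanics, the sketch below illustrates the adversarial idea behind GANs in simplified form: a generator learns to produce synthetic images while a discriminator learns to tell them apart from real ones. This is a minimal, illustrative toy in PyTorch, not a description of any actual deepfake tool; the network sizes, image dimensions, and function names are assumptions chosen only for readability.
```python
# Minimal, illustrative GAN skeleton in PyTorch -- a toy sketch of the
# generator-vs-discriminator idea behind deepfakes, NOT a production system.
# Assumptions: torch is installed; images are 64x64 grayscale, flattened.
import torch
import torch.nn as nn

IMG_DIM = 64 * 64   # flattened image size (illustrative assumption)
NOISE_DIM = 100     # size of the random input vector to the generator

class Generator(nn.Module):
    """Maps random noise to a synthetic (fake) image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM, 256), nn.ReLU(),
            nn.Linear(256, IMG_DIM), nn.Tanh(),   # outputs in [-1, 1]
        )
    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """Scores how likely an image is to be real rather than generated."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1), nn.Sigmoid(),      # probability of "real"
        )
    def forward(self, x):
        return self.net(x)

def training_step(gen, disc, real_images, g_opt, d_opt):
    """One adversarial update: the discriminator learns to spot fakes,
    then the generator learns to fool the discriminator."""
    bce = nn.BCELoss()
    batch = real_images.size(0)
    fake_images = gen(torch.randn(batch, NOISE_DIM))

    # 1. Discriminator update: push real images toward 1, fakes toward 0.
    d_opt.zero_grad()
    d_loss = bce(disc(real_images), torch.ones(batch, 1)) + \
             bce(disc(fake_images.detach()), torch.zeros(batch, 1))
    d_loss.backward()
    d_opt.step()

    # 2. Generator update: try to make the discriminator output 1 for fakes.
    g_opt.zero_grad()
    g_loss = bce(disc(fake_images), torch.ones(batch, 1))
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()

if __name__ == "__main__":
    gen, disc = Generator(), Discriminator()
    g_opt = torch.optim.Adam(gen.parameters(), lr=2e-4)
    d_opt = torch.optim.Adam(disc.parameters(), lr=2e-4)
    dummy_real = torch.rand(8, IMG_DIM) * 2 - 1   # stand-in for a real batch
    print(training_step(gen, disc, dummy_real, g_opt, d_opt))
```
Real deepfake systems use far larger convolutional architectures and face-specific pipelines, but the adversarial training loop is the same in outline, which is why the resulting output can be so difficult to distinguish from genuine footage.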
Globally, the misuse of deepfakes has been evident for several years. In 2017, a Reddit user brought significant attention to deepfakes by creating pornographic videos featuring celebrities without their consent, triggering widespread ethical outrage and debates on digital consent and privacy violations. The threat of deepfakes to democratic institutions became even more apparent around the 2020 United States presidential election, when manipulated videos aimed at discrediting candidates and spreading false information circulated online, highlighting the potential of deepfakes to undermine electoral integrity and mislead voters at scale.
The situation in India mirrors global concerns but is compounded by the lack of clear legal measures. During the 2020 Delhi Assembly election campaign, deepfake videos of BJP leader Manoj Tiwari, altered so that he appeared to speak in different languages, were shared widely. Such manipulations can significantly distort public opinion and affect democratic processes. More recently, in November 2023, a morphed video of actress Rashmika Mandanna went viral, raising serious questions about the protection of individual privacy and consent in the digital age. These incidents highlight how deepfakes can be weaponized to target public figures, spread misinformation, and violate the personal dignity of individuals.
Despite these growing threats, India currently lacks a dedicated legal framework to address deepfakes. The Information Technology Act, 2000 (IT Act), which serves as the primary legislation governing cybercrimes and electronic commerce in India, does not explicitly refer to deepfakes. The Act covers offences such as hacking, data theft, and cheating by personation, and certain provisions, such as those relating to identity theft (Section 66C) and the publication of obscene or sexually explicit material (Sections 67 and 67A), can be applied in specific cases involving deepfake content. However, these are reactive and fragmented measures that fail to comprehensively tackle the creation and spread of synthetic media.
Additionally, the data protection framework offers limited help. The Personal Data Protection Bill, 2019 was withdrawn in 2022, and its successor, the Digital Personal Data Protection Act, 2023, though enacted, was still being operationalized as of 2025. While the Act protects personal data and regulates its processing, it does not directly address the challenges posed by deepfake technology or the manipulation of biometric and facial data. This regulatory vacuum leaves victims with limited remedies and allows perpetrators to exploit technological loopholes with relative impunity.
The lack of explicit legal provisions against deepfakes not only exposes individuals to potential harm but also threatens public trust in media, politics, and institutions. To counter these risks, India urgently needs to introduce targeted legislation that criminalizes malicious deepfake creation and dissemination, strengthens data protection laws to include biometric data misuse, and establishes strict obligations for social media platforms to detect and remove such content swiftly. A robust legal and technological response is essential to safeguard privacy, protect democratic processes, and maintain the integrity of public discourse in the digital era.


Abstract


Deepfake technology represents a fascinating yet deeply unsettling convergence of artificial intelligence and digital media. On one hand, it opens up exciting possibilities in fields such as filmmaking, advertising, education, and even virtual reality, allowing creators to generate lifelike visuals and immersive experiences that were unimaginable a few years ago. However, the darker side of deepfakes cannot be ignored. When misused, they pose serious threats to individual privacy, personal dignity, security, and even the foundation of democratic institutions. By allowing the creation of hyper-realistic fake videos and audio clips, deepfakes can be weaponized for misinformation, character assassination, election manipulation, financial fraud, and non-consensual explicit content.
In India, the current legal framework is not adequately equipped to address these emerging dangers. The Information Technology Act, 2000, and certain provisions of the Indian Penal Code offer some remedies against cybercrimes and defamation but do not specifically cover the complex nature of synthetic media manipulation. Victims of deepfake attacks often face significant challenges in seeking justice, while perpetrators exploit these legal gaps to spread harmful content with little accountability.
Drawing inspiration from global best practices — such as stricter laws in the European Union and targeted legislation in some U.S. states — India urgently needs to formulate comprehensive laws specifically aimed at regulating deepfake technology. These laws should explicitly criminalize the malicious creation and dissemination of deepfakes, impose stringent penalties on offenders, and establish clear responsibilities for social media and digital platforms to detect and remove such content promptly. At the same time, it is crucial to ensure that regulations do not stifle creative and legitimate uses of this technology. A balanced, forward-looking legal approach is essential to protect individual rights and public trust, while still encouraging technological innovation and creative freedom.


Case Laws


Though Indian courts have not yet laid down settled precedent specifically on deepfakes, related judgments provide interpretative guidance:
1. Justice K.S. Puttaswamy (Retd.) v. Union of India (2017) 10 SCC 1
The Supreme Court recognized the right to privacy as a fundamental right under Article 21. Deepfakes directly infringe upon an individual’s right to control personal data and image.
2. Shreya Singhal v. Union of India (2015) 5 SCC 1
This case struck down Section 66A of the IT Act for vagueness and overbreadth, while reading down the intermediary framework under Section 79 so that takedown obligations arise only upon a court order or government notification. It underlines the need for careful balancing between freedom of expression and reasonable restrictions.
3. R. Rajagopal v. State of Tamil Nadu (1994) 6 SCC 632
Recognized the right to prevent unauthorized use of one’s image and life story, supporting the principle of informed consent.
4. Khushboo v. Kanniammal (2010) 5 SCC 600
Addressed the tension between freedom of expression and reputation, relevant to deepfake defamation cases.
5. Avnish Bajaj v. State (NCT of Delhi) (2005) 3 Comp LJ 364 Del
The Baazee.com case emphasized intermediary liability for objectionable content, providing a foundation to discuss platforms’ obligations regarding deepfake content.


Conclusion


Deepfake technology represents one of the most intriguing yet dangerous advancements of our time. On the positive side, it unlocks creative possibilities in entertainment by enabling realistic visual effects and digital resurrection of historical figures. In education, deepfakes can bring historical events or complex scientific concepts to life in immersive ways. Additionally, they can improve accessibility, for instance, by generating personalized avatars for people with disabilities.
However, these benefits come with significant risks. Deepfakes can be weaponized to spread misinformation, manipulate elections, damage reputations, and violate privacy through non-consensual explicit content. In a country as diverse and populous as India, the potential for misuse is particularly high. The rapid spread of manipulated videos can easily incite violence, fuel communal tensions, or influence voter behavior, thereby threatening democratic integrity.
To address these challenges, India urgently needs to strengthen its legal and technological safeguards. Amending the Information Technology Act, 2000, or enacting a dedicated Deepfake Regulation Act would be a crucial step forward. Clearly defining what constitutes a deepfake offense and prescribing stringent penalties can deter potential offenders. Additionally, stricter due diligence obligations must be imposed on intermediaries like social media platforms, requiring them to actively detect, label, and remove deepfake content.
Investing in the development of advanced technological tools for deepfake detection and authentication is equally important. Public awareness campaigns can empower citizens to recognize and critically analyse synthetic media, reducing the likelihood of manipulation.
Given the borderless nature of the internet, international cooperation is essential for effective regulation and enforcement. A well-balanced framework that combines legal, technological, and educational measures will enable India to harness the transformative power of AI while safeguarding individual rights, public safety, and democratic values.

FAQs


Q1: Are deepfakes illegal in India?
There is, at present, no law in India that explicitly bans deepfakes. However, their misuse can attract liability for defamation, identity theft and related offences under the IT Act, and violation of the right to privacy.
Q2: Can someone be punished for creating a non-consensual deepfake video?
Yes. If the content is obscene, defamatory, or violates privacy, the creator can be prosecuted under IPC provisions (such as Section 499 for defamation) and IT Act provisions (such as Section 66E for violation of privacy and Sections 67/67A for obscene or sexually explicit material).
Q3: What steps can a victim of deepfake take in India?
The victim can file a police complaint (cyber crime cell), seek an injunction from the court to remove the content, and claim damages under tort law.
Q4: How do other countries regulate deepfakes?
The DEEPFAKES Accountability Act has been introduced in the US Congress, and several US states, such as California and Texas, have enacted targeted deepfake statutes; China’s deep synthesis regulations, in force since 2023, require clear labeling of synthetic media; and the EU’s AI Act, adopted in 2024, imposes transparency and labeling obligations on AI-generated and manipulated content.
Q5: Can platforms be held liable for hosting deepfake content?
Intermediaries enjoy conditional safe harbor under Section 79 of the IT Act, provided they comply with the due diligence requirements of the intermediary guidelines. If they fail to do so, they lose that protection, and proposed amendments may impose still stricter obligations.
Q6: Are there any technological solutions to detect deepfakes?
Yes, several AI-based detection tools exist (e.g., Microsoft Video Authenticator), but they are not yet foolproof.
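As a purely conceptual illustration of how such tools operate, the sketch below shows the typical pattern of frame-level detection: a classifier assigns each video frame a probability of being synthetic and the scores are aggregated. The `detector` model here is a hypothetical placeholder, not the API of Microsoft Video Authenticator or any real product.
```python
# Conceptual sketch only: frame-level deepfake detection, assuming PyTorch and a
# hypothetical pretrained binary classifier (`detector`) that outputs one
# "fake" logit per frame. Real tools differ in architecture and are often proprietary.
import torch

def looks_like_deepfake(frames: torch.Tensor, detector: torch.nn.Module,
                        threshold: float = 0.5) -> bool:
    """frames: tensor of shape (num_frames, 3, height, width), pixel values in [0, 1].
    Returns True when the average per-frame 'fake' probability crosses the threshold."""
    detector.eval()
    with torch.no_grad():
        logits = detector(frames)              # assumed output shape: (num_frames, 1)
        fake_probs = torch.sigmoid(logits).view(-1)
    return fake_probs.mean().item() > threshold
```
Because generation and detection techniques evolve in tandem, scores from such classifiers are probabilistic evidence rather than proof, which is why detection tools are best treated as one input among several in any legal or platform-moderation process.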
