Author: Bhavya Singh, Dr. B.R. Ambedkar National Law University, Sonepat
LinkedIn Profile: https://www.linkedin.com/in/bhavya-singh-136348325?utm_source=share&utm_campaign=share_via&utm_content=profile&utm_medium=android_app
Abstract
Deepfakes, a term coined in 2017, are a subset of AI-generated content: synthetic media that replace a person's original face, body, or voice with a fabricated subject, typically created with malicious intent to spread false information. Such media can be misused for fraud, misrepresentation, identity theft, and similar offences.
The rapid rise of deepfakes poses significant legal and ethical threats, particularly in relation to consent and privacy. A plethora of laws exists, some of which cover deepfakes within their ambit; however, an unambiguous statutory definition of deepfakes is lacking. A central issue also lies in attributing deepfakes to their creators and establishing jurisdiction over them.
This article proposes amendments to existing legislation to address the specifics of deepfakes, which could help regulate these offences more effectively. Public awareness and comprehension of the concept are also needed to ensure that any such reforms are adequately implemented.
Introduction
Deepfakes are highly realistic yet fabricated images, videos, or audio clips created using artificial intelligence (AI). The term combines “deep learning” (a type of AI) and “fake,” highlighting how advanced algorithms manipulate or generate media to make it appear authentic. Deep learning, a subfield of machine learning, utilises artificial neural networks, inspired by the human brain, to learn from data and model high-level abstractions within it.
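For readers unfamiliar with the underlying technology, the short Python sketch below shows a toy neural network learning a simple pattern from data. It is purely illustrative and is in no way a deepfake generator, but the weight-adjustment principle it demonstrates is, at vastly greater scale, the same one that powers the generative models behind deepfakes.

```python
# A minimal sketch of the "deep learning" idea: a tiny neural network
# that learns a function from data by repeatedly adjusting its weights.
# Illustrative only -- real deepfake models are enormously larger.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR, a pattern no single linear layer can capture.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of learned "abstractions" between input and output.
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass: raw inputs -> hidden abstraction -> prediction.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: nudge every weight to reduce the prediction error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(out.round(3))  # approaches [[0], [1], [1], [0]]
```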
In recent years, deepfakes have surged into public discourse, evolving from a technical curiosity of AI into an urgent societal concern. Major misuses include fake political speeches, celebrity pornography, and viral hoaxes, forcing every sector to grapple with the fallout. Media outlets now routinely report on deepfake scams, election interference, and reputation attacks, while debates rage over how to regulate a technology that erodes the very foundation of our perceptions.
At the root of these challenges lies a fundamental question: can traditional laws, which presume human authorship and verifiable reality, effectively govern deepfakes? A legal framework built around human intent and provable facts is being tested by synthetic media that occupy the grey zone between forgery, creation, and art. Legislative amendment is necessary to meet this test.
Legal Doctrines under Stress: How Modern Crises Challenge Legal Foundations
Synthetic media, as we know it today, operates in a realm where identity, authenticity, and authorship are algorithmically malleable. Two significant challenges are examined below:
3.1 Crisis of Authorship
Traditional copyright, defamation, and privacy laws assume a human creator behind the contested content and any malicious act concerning it. That presumption has become obsolete in an era where accountability for an AI creation remains ambiguous.
The Copyright Act of 1957 safeguards “original work,” meaning work created independently by a human author with a minimal level of creativity. AI-generated work strains this scheme, chiefly on the question of liability: who bears responsibility, the programmer who developed the tool, the user who initiated it, or the platform that hosted it?
Such discrepancies are widely prevalent, and while the judiciary has stepped in from time to time to fill the voids, legislation specifically tailored to deepfakes remains to be enacted. The High Court of Delhi recently considered, in Anil Kapoor v. Simply Life India & Ors., CS(COMM) 652/2023, whether personality rights extend to synthetic replicas; while recognising the value of free expression, the Court made clear that such expression would be strictly restrained where it unfairly damages a person's reputation or other connected attributes.
3.2 Right To Privacy
In K.S. Puttaswamy v. Union of India, AIR 2017 SC 4161, the apex court held that privacy is an integral part of Article 21, encompassing three facets:
Bodily autonomy (control over one’s physical/digital self)
Informational privacy (protection against data misuse)
Dignity (freedom from non-consensual exploitation)
These principles clash directly with the mechanics of deepfakes and AI: while Section 66E of the IT Act, 2000 punishes capturing or sharing images of private acts, it does not cover AI-generated fabrications built from public images. Most deepfakes scrape publicly available images and voice clips from social media, interviews, and the like to clone identities. Puttaswamy's “reasonable expectation of privacy,” an intrinsic aspect of the right to life covering personal intimacies, family life, and the sanctity of the home, is gravely undermined when public data is weaponised in this way.
The Missing Links: Gaps in the Existing Indian Legal Framework
The advancement of deepfakes has exposed shortcomings in the Indian legal system, which can be studied under the following sub-points:
4.1 Lack of Explicit Criminalisation of Deepfakes
No legislation straightforwardly declares deepfakes an offence. In the absence of explicit penalisation, the issue is covered piecemeal under different, fragmented statutes. Some instances are as follows:
IT Act, 2000
Section 66E (Privacy Violation): Limited to capturing/sharing “private acts,” not AI-generated fabrications.
Section 67 (Obscenity): Covers explicit content but doesn’t distinguish between real and synthetic media.
Section 66D (Impersonation): Addresses cheating via impersonation but doesn’t cover non-fraudulent deepfakes (e.g., parody or defamation).
IPC, 1860
Section 499 (Defamation): Requires “intent to harm,” which is difficult to prove if the creator hides behind AI anonymity.
Section 420 (Cheating): Only applies if financial fraud is involved, not reputational harm.
4.2 Ambiguity in Copyright and Other Laws
Copyright Act, 1957:
Section 2(d): Requires “human authorship” – AI-generated deepfakes have no legal owner.
No protection for voice or face cloning: Unlike the U.S., where publicity rights protect personas, India relies on weak passing-off or Article 21 claims.
Comparative Perspective: How Other Jurisdictions Respond
5.1 United States: Balancing Free Speech and Targeted Regulation
The U.S. strategy regarding deepfakes addresses the conflict between First Amendment rights and the need to prevent harm. Although American law places a high value on free speech, states like California and Texas have enacted specific bans, such as outlawing non-consensual deepfake pornography and election-related synthetic media, without enforcing broad prohibitions. Additionally, federal copyright laws and state-specific publicity rights (for example, Tennessee’s ELVIS Act) offer safeguards against unauthorised AI-generated impersonations. For India, this illustrates how precisely focused legislation can tackle particular issues posed by deepfakes, such as election manipulation or revenge pornography, while upholding the constitutional speech protections guaranteed by Article 19(1)(a).
5.2 European Union: Transparency and Consent-Driven Governance
The EU’s GDPR and the forthcoming AI Act adopt a privacy-centric framework, requiring explicit consent for the use of biometric data (e.g., voice or face cloning) and mandating watermarking for AI-generated content. The AI Act also classifies high-risk AI systems, including deepfake technologies, subjecting them to stringent transparency and accountability rules. India’s Digital Personal Data Protection Act (2023) could emulate the GDPR’s consent requirements, while proposed reforms to the IT Rules should incorporate EU-style watermarking obligations. This approach aligns with India’s Puttaswamy privacy principles, emphasising user control over personal data in the AI era.
5.3 China: State-Enforced Traceability and Control
China imposes the strictest deepfake regulations globally, requiring visible labels on all synthetic content and real-name verification for AI tool developers and users. Violations—especially those threatening national security—carry criminal penalties. While India may not replicate China’s censorship-heavy model, it could adapt elements such as mandatory disclosure (e.g., Election Commission guidelines for political deepfakes) and intermediary accountability (e.g., amended Section 79 of the IT Act to verify uploaders). Such measures would enhance traceability without compromising democratic values, addressing India’s current enforcement gaps against viral deepfake disinformation.
Legal and Policy Challenges
The rapid rise of deepfakes highlights gaps in real-time detection capabilities. Unlike traditional media manipulation, AI-generated content lacks consistent digital fingerprints, making source attribution and verification slow. Law enforcement relies on reactive forensic analysis, risking reputational harm as viral synthetic media spreads. Even advanced detection tools struggle to keep pace with evolving AI models that evade safeguards, creating a dangerous window for exploitation before a deception is debunked.
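To make the “digital fingerprint” point concrete, the hedged Python sketch below uses a standard cryptographic hash as a fingerprint: a match against a registry of known originals proves provenance, but a freshly generated deepfake matches no prior fingerprint, which is one reason attribution of novel synthetic media lags behind its spread. The file names are hypothetical.

```python
# A minimal sketch of content fingerprinting: a cryptographic hash
# uniquely identifies one exact file, so a provenance registry can
# confirm a known original is untampered. A new deepfake matches
# nothing in the registry, so a miss proves nothing either way.
import hashlib

def fingerprint(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical comparison of a circulating clip against known originals.
known_originals = {fingerprint("original_interview.mp4")}
suspect = fingerprint("viral_clip.mp4")
print("verified original" if suspect in known_originals else "unverified media")
```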
Efforts to control deepfakes frequently clash with the free expression rights guaranteed by Article 19(1)(a) of the Indian Constitution. Broad restrictions may suppress legitimate creative practices: political satire, parody, and documentaries use similar synthetic media techniques. The unclear line between harmful deception and protected expression leads creators to self-censor amid uncertainty about compliance. The U.S., by contrast, has implemented targeted laws against deepfake pornography and election interference while protecting artistic or journalistic uses.
Current protections under Section 79 of India’s IT Act complicate enforcement by shielding platforms from liability for user-generated deepfakes. This safe harbour, while promoting digital innovation, has allowed platforms to avoid adequate verification systems. Consequently, victims have limited recourse against viral synthetic media since platforms lack legal obligations for proactive detection or swift takedowns. This contrasts with the EU’s Digital Services Act, which mandates due diligence based on platform size and risk.
Deepfake dissemination exploits the internet’s transnational nature, often crossing legal jurisdictions faster than enforcement can act. A synthetic video created with foreign AI tools on overseas servers and shared globally presents jurisdictional challenges for Indian authorities. Mutual Legal Assistance Treaties (MLATs) are too slow for urgent deepfake cases, and the lack of international standards enables regulatory arbitrage. This paralysis worsens when synthetic media targets international interests, such as diplomacy or corporate affairs, making unprecedented global cooperation necessary for an effective response.
Reform Proposal and Recommendations
To effectively regulate deepfakes and AI-generated impersonation in India, a comprehensive legal and policy framework must be established.
First, clear definitions should be incorporated into the proposed Digital India Act or amendments to the IT Act, explicitly classifying deepfakes as synthetic media created through AI/ML to falsely represent individuals without their consent, while exempting legitimate uses such as parody and satire.
A three-pronged legal test should be introduced to assess harmful intent, public interest value, and consent violations, helping courts distinguish between malicious deepfakes and protected speech. Transparency measures, such as mandatory watermarking or “AI-generated” disclaimers, should be implemented through updated IT Rules, which would require platforms to label synthetic content or face penalties.
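As one illustration of how the labelling obligation just described might work in practice, the following Python sketch (using the Pillow imaging library) embeds and reads back an “AI-generated” disclosure in a PNG image’s metadata. The metadata keys and file names are assumptions made for illustration; real provenance regimes are far more robust, and plain metadata can be stripped, which is precisely why the proposal pairs labelling with platform-side enforcement and penalties.

```python
# A minimal sketch of an "AI-generated" disclosure label embedded as
# PNG text metadata. Keys and file names are hypothetical; a production
# regime would use tamper-resistant provenance, not bare metadata.
# Requires the Pillow library (pip install Pillow).
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_synthetic(src: str, dst: str, tool: str) -> None:
    """Save a copy of the image carrying an AI-disclosure text chunk."""
    meta = PngInfo()
    meta.add_text("AI-Generated", "true")  # hypothetical disclosure key
    meta.add_text("Generator", tool)
    Image.open(src).save(dst, pnginfo=meta)

def is_labelled_synthetic(path: str) -> bool:
    """Check a PNG for the hypothetical disclosure chunk."""
    return Image.open(path).text.get("AI-Generated") == "true"

label_as_synthetic("synthetic_face.png", "labelled_face.png", "demo-model")
print(is_labelled_synthetic("labelled_face.png"))  # True
```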
The DPDP Act should be expanded to recognise AI-generated personal data violations, granting individuals greater control over their digital likenesses.
A hybrid liability model would hold creators directly accountable for harmful content while making platforms responsible for implementing detection tools and responding promptly to takedown requests. To foster innovation in governance, AI regulatory sandboxes could test emerging solutions such as blockchain-based authentication and consent frameworks. Complementing these measures, public education initiatives should be launched to enhance digital literacy and help citizens identify synthetic media. This multi-layered approach strikes a balance between the need for accountability and the protection of fundamental rights, drawing on global best practices while addressing India’s unique legal landscape. Immediate steps could include updating IT Rules, followed by comprehensive legislation and ultimately establishing a dedicated regulatory body to oversee AI content governance. The framework aims to curb misuse while preserving the creative and technological potential of AI applications.
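The “blockchain-based authentication” idea floated above can be sketched, under heavy simplification, as a hash chain: each provenance record commits to a content hash and to the previous record, so retroactive tampering with any entry is detectable. The Python below is a toy model; the record fields and placeholder hashes are illustrative assumptions, and a real system would add digital signatures, trusted timestamps, and distributed consensus.

```python
# A toy hash chain illustrating tamper-evident provenance records.
# Field names and placeholder hashes are assumptions for illustration.
import hashlib
import json

def record(content_hash: str, prev_record_hash: str) -> dict:
    """Create one provenance record committing to content and history."""
    body = {"content": content_hash, "prev": prev_record_hash}
    body["record_hash"] = hashlib.sha256(
        json.dumps({"content": content_hash, "prev": prev_record_hash},
                   sort_keys=True).encode()).hexdigest()
    return body

def chain_is_intact(chain: list[dict]) -> bool:
    """Recompute every link; any edit to a past entry breaks the chain."""
    prev = "genesis"
    for rec in chain:
        expected = hashlib.sha256(json.dumps(
            {"content": rec["content"], "prev": rec["prev"]},
            sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or rec["record_hash"] != expected:
            return False
        prev = rec["record_hash"]
    return True

chain, prev = [], "genesis"
for h in ["hash-of-clip-1", "hash-of-clip-2"]:  # placeholder content hashes
    rec = record(h, prev)
    chain.append(rec)
    prev = rec["record_hash"]
print(chain_is_intact(chain))        # True
chain[0]["content"] = "tampered"
print(chain_is_intact(chain))        # False: tampering is detected
```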
Conclusion
To regulate deepfakes effectively, India must create a balanced legal framework that addresses misuse without stifling innovation and free speech. Important actions include defining deepfakes within the Digital India Act or a revised IT Act, implementing a three-part test (harmful intent, public interest, and consent) to distinguish malicious usage from protected expression, and requiring watermarking for greater transparency. Expanding the DPDP Act to encompass AI-generated data violations will strengthen privacy safeguards, while a hybrid liability approach would ensure accountability for both creators and platforms. Regulatory sandboxes can promote innovation in AI governance, and public awareness campaigns must enhance digital literacy. Urgent measures, such as updating the IT Rules, should be followed by comprehensive legislation and the formation of a specialised AI regulatory body. This strategy, inspired by global best practices, ensures accountability while fostering technological advancement, safeguarding individuals, and preserving the creative potential of AI within India’s digital landscape.
FAQs
1. What are deepfakes, and why are they legally concerning?
Deepfakes are AI-generated synthetic media, that is, videos, images, or audio, that convincingly mimic real individuals. They pose serious legal concerns because they can be used to spread misinformation, damage reputations, impersonate identities, and violate privacy, often without legal accountability under current Indian laws.
2. Which laws in India currently address deepfake-related offences?
Deepfakes are addressed only indirectly, under fragmented provisions such as Sections 66D, 66E, and 67 of the IT Act, 2000 and the defamation and cheating provisions of the IPC, 1860. There is no unambiguous statutory definition of deepfakes, and attributing deepfakes to their creators, and establishing jurisdiction over them, remain central difficulties.
3. Can deepfakes violate the Right to Privacy under the Indian Constitution?
Yes, as held in K.S. Puttaswamy v. Union of India, the Right to Privacy under Article 21 includes bodily autonomy, informational privacy, and dignity, all of which can be undermined by non-consensual deepfakes created using publicly available data.
4. How does Indian copyright law view AI-generated content?
The Copyright Act, 1957, requires “human authorship” for a work to qualify as original. Since deepfakes are machine-generated, they fall into a legal grey area with no defined ownership or liability. This makes enforcement and redress difficult.
5. What are some notable Indian cases dealing with synthetic media?
In Anil Kapoor v. Simply Life India & Ors (2023), the Delhi High Court addressed the question of whether synthetic replicas infringe upon the personality rights of the individual. The court emphasised that free expression cannot justify damage to someone’s reputation or identity, laying early groundwork for future deepfake litigation.
6. How are other countries tackling the deepfake challenge?
United States: State laws ban non-consensual deepfake porn and election manipulation while protecting artistic expression.
European Union: GDPR and the proposed AI Act mandate consent and watermarking for AI-generated content.
China: The strictest approach, with real-name verification, mandatory labels, and criminal penalties for misuse.
7. What are the primary enforcement challenges in India?
Detection Lag: Deepfakes evolve faster than detection tools.
Jurisdiction: The cross-border creation and sharing of content complicate enforcement.
Platform Liability: Section 79 of the IT Act shields platforms from responsibility.
Free Speech Conflicts: Laws may unintentionally suppress satire, art, or journalism.
8. What legal reforms are being proposed to address deepfakes in India?
Define and criminalise deepfakes explicitly in the IT Act or the Digital India Act.
Introduce a three-pronged legal test to assess intent, consent, and public interest.
Amend the Digital Personal Data Protection Act (DPDP), 2023, to cover AI-generated personal data misuse.
Require watermarking or disclaimers on synthetic content.
Establish platform accountability for detection and takedown.
Launch public digital literacy programs.
9. How can India balance regulation with freedom of expression?
By adopting narrowly tailored laws targeting harmful deepfakes (e.g., pornographic, defamatory, or election-related) while exempting satire, parody, and journalistic uses. A contextual approach, such as the U.S. model, helps maintain constitutional protections under Article 19(1)(a).
10. What role can technology and public awareness play in regulation?
Tech solutions, such as blockchain authentication and watermarking, can help trace the origin of content.
AI sandboxes can pilot these innovations under regulatory oversight.
Public education is crucial for improving digital literacy and empowering users to recognise deepfakes.