Author: Sakshi, a student at Royal College of Law.
Abstract
The rapid rise of AI-generated deepfakes, created using Generative Adversarial Networks (GANs), poses significant challenges to legal norms in India by undermining informational autonomy, personal dignity, and democratic integrity. While legislative reforms like the Bharatiya Nyaya Sanhita, 2023 (BNS) and the Digital Personal Data Protection Act, 2023 (DPDP Act) mark progress, they lack specific provisions to address AI-driven harms.
Traditional legal doctrines based on mens rea and actus reus are ill-equipped to handle autonomous or semi-autonomous systems that disseminate harmful synthetic content. Deepfakes can result in defamation, identity theft, non-consensual pornography, and electoral misinformation, yet legal redress is unclear when there is no identifiable human perpetrator.
This regulatory gap also threatens constitutional rights under Article 21 (right to privacy, dignity, and reputation) and Article 324 (free and fair elections).
A comprehensive techno-legal accountability framework is imperative, grounded in principles of algorithmic transparency, platform liability, and rights-oriented regulation. Such a framework must embed rigorous and composite liability doctrines to ensure that developers, deployers, and intermediaries are held accountable in a manner consistent with the complex socio-technical realities of AI. This alignment is essential to contemporize Indian jurisprudence with the normative and operational demands of the digital age.
Introduction: The Challenge of Synthetic Harm in a Post-Human Legal Order
The accelerating advancement of Artificial Intelligence (AI), particularly in generative technologies such as Generative Adversarial Networks (GANs), has fundamentally disrupted conventional legal frameworks centered on human intent, agency, and accountability. Among the most concerning manifestations of this disruption are deepfakes—highly realistic synthetic audio-visual content that blurs the line between reality and fabrication. In India’s constitutional and legal context, the proliferation of deepfakes presents a multifaceted challenge, implicating individual rights, democratic processes, and the integrity of legal proceedings.
Deepfakes pose a direct threat to decisional autonomy and the right to privacy under Article 21 of the Indian Constitution, particularly as articulated in the landmark Puttaswamy judgment (2017), by enabling unauthorized manipulation and exploitation of personal identity and biometric features. They also endanger the sanctity of free and fair elections under Article 324, as they can be weaponized in misinformation campaigns that distort public perception and erode informed democratic participation. Furthermore, the use of such synthetic media raises critical concerns regarding the admissibility, authenticity, and reliability of evidence, thereby undermining foundational principles of procedural justice in both civil and criminal law.
Although legislative instruments such as the Bharatiya Nyaya Sanhita, 2023, the Digital Personal Data Protection Act, 2023, and the Information Technology Act, 2000 signify incremental progress toward digital governance, they remain inadequate in addressing the unique and emergent risks associated with AI-generated content. This article contends that there is an urgent imperative to construct a holistic techno-legal governance framework—anchored in algorithmic transparency, platform accountability, and constitutional rights-based regulation—to ensure that India’s legal architecture adapts effectively to the challenges posed by rapidly evolving technologies.
Deepfake Technology: Taxonomy and Modus Operandi
Deepfakes, a sophisticated subclass of synthetic media, are generated using Generative Adversarial Networks (GANs)—an advanced deep learning architecture wherein two neural networks, the generator and discriminator, engage in a recursive adversarial loop. The generator synthesizes data resembling authentic human attributes (e.g., facial expressions, voice patterns, and gestures), while the discriminator evaluates and refines outputs by distinguishing between genuine and synthetic content. This adversarial training culminates in hyper-realistic simulations that often defy perceptual scrutiny, thereby destabilizing legal doctrines reliant on authenticity, verifiability, and source attribution.
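To make the adversarial loop concrete, the following is a minimal, illustrative sketch in Python using PyTorch. It trains on a toy two-dimensional Gaussian rather than real media; every layer size, learning rate, and the toy "real" distribution are expository assumptions, not a description of any actual deepfake pipeline.

```python
# Minimal GAN sketch: a generator learns to mimic a toy "real" distribution
# while a discriminator learns to separate genuine samples from synthetic ones.
# All sizes and the Gaussian target are illustrative assumptions.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 2

generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim)
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1)
)

criterion = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    real = torch.randn(64, data_dim) * 0.5 + 2.0      # stand-in for genuine media features
    fake = generator(torch.randn(64, latent_dim))     # synthetic samples

    # Discriminator update: learn to tell real from fake.
    d_loss = (criterion(discriminator(real), torch.ones(64, 1))
              + criterion(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator update: learn to make the discriminator label fakes as real.
    g_loss = criterion(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

Even in this toy setting, the dynamic the article describes is visible: the generator improves precisely by learning to defeat the discriminator's test for authenticity, which is why mature systems produce output that resists perceptual and forensic scrutiny.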
Although the technology itself is content-neutral, its unregulated deployment has engendered serious legal and constitutional concerns across multiple domains:
- Cyber-enabled gender-based violence, such as non-consensual pornographic deepfakes, constitutes a grave infringement of the right to privacy and dignity under Article 21, further contravening principles articulated in Puttaswamy v. Union of India.
- Electoral disinformation, through the falsification of political speech, distorts democratic deliberation and subverts the right to free and fair elections under Article 324, undermining electoral integrity.
- Biometric fraud and financial impersonation exploit facial and voice recognition systems, compromising identity security, violating data protection norms, and infringing upon consumer rights.
The autonomy, opacity, and non-deterministic nature of these AI systems render traditional liability frameworks—grounded in mens rea, actus reus, and proximate causation—ineffectual. These algorithmic structures operate as black boxes, lacking transparent inputs, traceable outputs, or accountable human oversight. The resultant evidentiary and normative lacunae impede both civil liability and criminal culpability, exacerbating enforcement deficits and entrenching impunity for algorithmically mediated harm.
India’s Emerging Legal Frameworks: Insights from the BNS and DPDP Act
1 Bharatiya Nyaya Sanhita, 2023: A Superficial Codification?
Despite replacing the Indian Penal Code, the Bharatiya Nyaya Sanhita, 2023 responds inadequately to the emergent threats and harms characteristic of the contemporary techno-legal landscape.
Relevant Provisions:
- Clause 69 (Cheating by personation and identity theft): May be invoked to address AI-enabled impersonation; however, it lacks the statutory precision necessary to explicitly encompass harms arising from synthetically generated content.
- Clause 73 (Sexually explicit content disseminated without consent): Could encompass deepfake pornography; however, challenges remain in proving actus reus when perpetrators are anonymous or operating extraterritorially.
- Clause 356 (Defamation): Retains its doctrinal anchoring in reputational harm, yet neglects to account for impersonation through non-verbal and non-literal modalities, an increasingly prevalent feature of deepfake technologies.
These provisions reflect an analogical transplant of old offences into new contexts, without engaging the sui generis nature of AI harm.
2 Digital Personal Data Protection Act, 2023: A Proceduralist Construct?
The DPDP Act, 2023, though a significant stride in data privacy, adopts a proceduralist-consent model ill-suited to the AI context. Core limitations include:
- No recognition of derivative data rights: Deepfakes often synthesize or manipulate biometric identifiers that never appeared in the original dataset, placing the derivative output outside the Act's protective ambit.
- No right to human oversight: The Act lacks an analogue to Article 22 GDPR, which guarantees recourse against solely automated decision-making.
- Absence of synthetic data regulation: The Act governs real personal data but remains silent on data synthetically generated to mimic real persons.
Thus, the DPDP Act reflects a consent-centric, static privacy regime inadequate for the dynamic, post-consensual harms of deepfake misuse.
3 Intermediary Liability Under the IT Rules
The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, as amended, do impose due diligence obligations on digital platforms. However:
- The traceability mandate under Rule 4(2) is technologically incompatible with end-to-end encryption, rendering the first originator of AI-generated content on encrypted platforms effectively untraceable.
- There is no safe harbour disqualification specific to non-removal of deepfakes, unless classified as “unlawful content” under ambiguous categories.
Hence, the IT regime enables regulatory evasion by platforms that claim agnosticism toward the synthetic nature of hosted content.
The Jurisprudence of Attribution: Who Bears the Burden?
Conventional liability doctrines are grounded in human-centric principles of mens rea (criminal intent) and actus reus (the act itself), both of which are subverted in the context of AI. This displacement raises several key legal dilemmas:
- Should developers be held liable for the unintended misuse of generative models, even if the harm was not foreseeable at the time of creation?
- Do deployers of AI systems bear an elevated standard of foreseeability under tort law, considering the autonomous and unpredictable nature of these systems?
- Can platforms be subjected to strict liability for harm propagated by algorithmic amplification, given their role in distributing synthetic media without direct involvement in its creation?
A viable remedy may lie in the adoption of a composite liability framework, wherein responsibility is apportioned across different actors in a hierarchical manner:
- Developers are held accountable through ex-ante transparency audits, ensuring that generative models undergo rigorous scrutiny before deployment.
- Deployers are liable under a reasonable foreseeability standard, requiring them to account for potential harms that could arise from AI-generated content.
- Platforms are subject to modified due diligence obligations, ensuring that they take proactive measures to mitigate the spread of harmful synthetic media.
This framework parallels the principles of strict product liability, adapting them for the context of non-corporeal, autonomous agents, thus providing a more nuanced approach to AI liability.
Comparative Regulatory Taxonomies
1 European Union: A Rights-Based Model
The EU Artificial Intelligence Act subjects deepfakes to dedicated transparency obligations within its tiered, risk-based regime, which mandates:
- Transparency disclosures for synthetic content, ensuring that users are informed of its artificial nature;
- Comprehensive risk assessments and third-party audits to evaluate the potential societal impact and mitigate harm;
- Human oversight obligations for critical AI applications, particularly in contexts like healthcare and law enforcement, ensuring that autonomous systems remain under human control.
In tandem with the General Data Protection Regulation (GDPR), which provides individuals with the right to object to profiling and automated decision-making, the EU establishes a robust normative ecosystem centered on informed consent, redress mechanisms, and supervisory oversight.
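By way of illustration, the transparency disclosures described above could take the form of a machine-readable label embedded in the media file itself. The sketch below, in Python with the Pillow imaging library, attaches a hypothetical disclosure to a PNG's metadata; the key names and wording are assumptions for exposition, and the AI Act does not prescribe this mechanism (real deployments tend to rely on provenance standards such as C2PA).

```python
# Hedged sketch: embed a synthetic-content disclosure as PNG text metadata.
# The "ai_generated" and "disclosure" keys are hypothetical labels, not a
# statutory or standardized schema.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_synthetic_image(src_path: str, dst_path: str) -> None:
    """Copy an image while attaching a disclosure that it is AI-generated."""
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")
    metadata.add_text("disclosure", "This image was generated by an AI system.")
    image.save(dst_path, pnginfo=metadata)

# Usage (paths are placeholders):
# label_synthetic_image("output.png", "output_labelled.png")
```

A metadata label of this kind is trivially strippable, which is one reason regulators increasingly look toward cryptographically signed provenance rather than bare tags.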
2 United States: A Patchwork Federalism Approach
In the absence of comprehensive federal AI legislation, states such as California and Texas have implemented targeted criminal statutes to regulate deepfakes in specific contexts, notably electoral manipulation and non-consensual pornography. Civil recourse is frequently pursued through:
- False light and right of publicity torts, which address the harm to an individual’s reputation and personal image;
- Lanham Act claims, which are invoked in cases of commercial impersonation and deceptive advertising practices.
However, the American model remains highly fragmented, lacking uniformity across jurisdictions, and focusing predominantly on ex post deterrence—addressing harms after their occurrence—rather than imposing ex ante design constraints to pre-emptively mitigate the risks posed by deepfakes.
3 China: A Command-and-Control Regime
The 2023 Deep Synthesis Regulations in China adopt a command-and-control approach, imposing stringent regulatory measures, including:
- Labelling requirements for AI-generated content, ensuring transparency and preventing consumer deception;
- Ex ante licensing requirements for deep synthesis service providers, mandating approval before the deployment of synthetic media technologies;
- Joint liability for both platforms and content creators, holding both parties accountable for the dissemination of harmful or deceptive deepfakes.
This regulatory framework places a premium on state control over technological innovation, prioritizing compliance clarity and uniformity, though raising concerns about potential overregulation and its chilling effect on technological development.
Toward a Normative Framework for India
1 Legislative Proposals
- Enactment of a Synthetic Media Regulation Act, defining deepfakes, categorizing harms, and criminalizing malicious creation or dissemination.
- Amendments to BNS to include synthetic personation as a standalone offence, with aggravated sentencing for harm to public order or individual dignity.
- Statutory recognition of digital identity rights, safeguarding biometric likeness and voiceprint from unauthorized replication.
2 Institutional Mechanisms
- Constitution of an Algorithmic Accountability Commission empowered to:
  - Mandate algorithmic impact assessments;
  - Conduct AI audit certifications;
  - Oversee platform-level due diligence.
- Empowerment of the Data Protection Board of India to adjudicate deepfake-related privacy violations with binding orders and cross-border enforcement mechanisms.
3 Judicial Doctrinal Development
The Indian judiciary must evolve constitutional jurisprudence to:
- Recognize a fundamental right to cognitive liberty and mental integrity under Article 21.
- Expand the reasonable restrictions under Article 19(2) to include “synthetic misinformation” as a compelling ground.
- Incorporate algorithmic fairness and explainability into the doctrine of substantive due process.
Conclusion: Regulating Synthetic Speech in the Republic of Code
India’s evolving digital constitutionalism must now confront a profound ontological transformation—the shift from the real to the synthetic—which fundamentally challenges the very core of established legal frameworks. Deepfakes and other AI-generated content underscore the inherent limitations of a rights-based model that has historically been anchored in the tangible manifestation of injury and the ability to trace harm to a specific actor. These technological innovations exploit the epistemic vulnerabilities of individuals and society at large, manipulating public discourse and creating new avenues for digital misinformation, all while further entrenching existing structural inequalities.
In this context, India’s legal apparatus faces the urgent need for a normative recalibration. The traditional understanding of harm—rooted in material, identifiable injury—fails to adequately address the intangible yet profoundly real threats posed by synthetic media. As deepfakes proliferate, they erode the trust upon which democratic processes, free expression, and public discourse depend, while facilitating the manipulation of public opinion and the erosion of democratic legitimacy. The right to information and the right to reputation—fundamental underpinnings of the Indian Constitution—are not merely threatened but actively undermined by these digital forgeries.
Therefore, India’s legal response must evolve into a forward-looking, multi-layered legal framework that integrates technological advancements with constitutional rights protection. This framework must not only harmonize the tension between individual dignity and technological innovation but also ensure that algorithmic transparency becomes a cornerstone of both public and private sector operations. Legal accountability must be firmly anchored in a paradigm where both human agents and autonomous machines are accountable for their respective roles in the creation, dissemination, and potential harm of synthetic media.
The emergent frontier of digital law demands that the absence of regulation, or worse, inaction, not be construed as neutrality but as complicity in the perpetuation of digital harm. Silence in the face of these challenges is tantamount to acquiescence, and inertia constitutes nothing less than an abdication of the duty to protect constitutional values amid transformative technological disruption.
Frequently Asked Questions (FAQ)
1. What are deepfakes and why are they concerning?
Deepfakes are AI-generated synthetic media that mimic real human speech, actions, or appearances. They pose risks such as misinformation, defamation, and election manipulation, challenging traditional legal frameworks focused on real-world harm.
2. How does India address deepfakes legally?
India’s legal framework, including the Bharatiya Nyaya Sanhita (BNS) and Digital Personal Data Protection Act (DPDP Act), lacks specific provisions for AI-generated content. These laws address privacy and data protection but fail to regulate deepfakes effectively.
3. What are the main challenges in regulating deepfakes?
Current laws struggle with attribution of liability and accountability for AI-generated harm. The autonomous nature of AI systems makes it difficult to assign responsibility, and the existing legal framework is not equipped for such challenges.
4. How can India ensure accountability for AI and deepfakes?
A composite liability framework is recommended, holding developers, deployers, and platforms accountable. This includes transparency audits, foreseeability tests, and enhanced due diligence for platforms to prevent harm.
5. How does the EU regulate deepfakes?
The EU Artificial Intelligence Act subjects deepfakes to transparency obligations under its risk-based framework, requiring disclosure of synthetic content, with third-party audits and human oversight for high-risk systems. It is complemented by GDPR, which safeguards individuals’ privacy and data rights.
6. How does the U.S. regulate deepfakes?
In the U.S., states like California and Texas criminalize deepfakes in specific contexts, but the lack of federal AI regulation leads to inconsistent laws, focusing more on post-harm deterrence rather than prevention.
7. What is China’s approach to regulating deepfakes?
China’s 2023 Deep Synthesis Regulations impose labelling requirements, licensing for service providers, and joint liability for platforms and creators, emphasizing state control and clear compliance guidelines.
8. What is the proposed legal solution for deepfakes in India?
India needs a multi-layered legal framework focusing on transparency, foreseeability, and platform responsibility to address AI risks. This would align with principles of strict product liability for AI systems.