Author: Aadi Mahajan, HVPS College of Law
Abstract
The advent of artificial intelligence has transformed the digital ecosystem, enabling unprecedented innovation while simultaneously creating novel legal and ethical challenges. One such challenge is the emergence of deepfakes—synthetic audio, video, or images generated using artificial intelligence to convincingly imitate real persons. While deepfake technology has legitimate uses in entertainment and education, its misuse poses a serious threat to democracy, public trust, individual dignity, and national security. Democracies rely on the informed consent of the governed, free and fair elections, and truthful public discourse. Deepfakes undermine these foundations by facilitating misinformation, electoral manipulation, character assassination, and the erosion of institutional credibility. This article critically examines the phenomenon of deepfakes as a contemporary threat to democratic governance, analyses the existing legal framework in India, explores constitutional and statutory remedies, discusses relevant judicial pronouncements, and highlights the urgent need for comprehensive regulation to safeguard democratic values.
To the Point
Democracy thrives on transparency, accountability, and an informed electorate. In the digital age, information dissemination occurs predominantly through online platforms, social media, and instant messaging services. Deepfakes exploit this environment by creating hyper-realistic but false content that is difficult for ordinary citizens to distinguish from reality. A fabricated video of a political leader making inflammatory statements, a manipulated audio clip announcing false policy decisions, or a morphed image undermining a candidate’s credibility can decisively influence public opinion.
In India, where elections involve over 900 million voters and social media penetration is rapidly increasing, deepfakes pose a particularly dangerous threat. The speed at which such content spreads often outpaces fact-checking mechanisms, rendering post-facto corrections ineffective. The damage caused by deepfakes is not merely reputational but systemic—weakening democratic institutions, fostering distrust in media, and delegitimizing electoral outcomes. The legal system, therefore, faces the urgent challenge of balancing freedom of speech with the need to curb malicious digital manipulation.
Use of Legal Jargon
From a jurisprudential perspective, deepfakes raise complex issues relating to constitutional freedoms, criminal liability, privacy rights, and regulatory accountability. The misuse of deepfakes implicates the right to freedom of speech and expression under Article 19(1)(a) of the Constitution, subject to reasonable restrictions under Article 19(2) in the interests of the sovereignty and integrity of India, public order, decency, and morality.
Further, deepfakes violate the right to privacy and informational self-determination as recognized under Article 21, including the right to reputation, which has been judicially acknowledged as an intrinsic part of personal liberty. The doctrine of strict liability may become relevant in cases involving intermediary platforms that fail to exercise due diligence, while questions of mens rea arise in determining criminal culpability for creators and disseminators of deepfake content.
Additionally, deepfakes intersect with principles of electoral fairness, cyber governance, data protection, intermediary liability, and the State's obligation to protect democratic institutions under constitutional morality.
The Proof
Empirical evidence demonstrates that deepfakes are increasingly used as tools of political manipulation. During election cycles in several democracies, AI-generated videos have circulated depicting political leaders making false confessions, issuing fake threats, or announcing fabricated policies. In India, deepfake videos impersonating political leaders in regional languages have been shared widely to target specific voter demographics.
The danger lies not merely in individual deception but in mass psychological manipulation. Studies in cognitive science reveal that visual and audio content has a stronger persuasive impact than text, making deepfakes particularly potent. Once such content goes viral, a retraction or clarification rarely reaches the same audience, causing irreversible damage to public perception.
Furthermore, the anonymity and cross-border nature of the internet complicate enforcement. Deepfake creators may operate outside national jurisdictions, using encrypted platforms to evade detection. This creates a regulatory vacuum where democratic processes are vulnerable to technologically sophisticated misinformation campaigns.
Legal Framework in India
Information Technology Act, 2000
The Information Technology Act, 2000 (IT Act) serves as the primary legislation governing cyber offences in India. While it does not explicitly mention deepfakes, several provisions can be invoked:
Section 66D: Punishes cheating by personation using computer resources.
Section 66E: Addresses violation of privacy through capturing, publishing, or transmitting images of private areas.
Sections 67 and 67A: Penalize publishing or transmitting obscene or sexually explicit material in electronic form.
However, these provisions are largely reactive and inadequate to address politically motivated deepfakes that may not be obscene but are nonetheless harmful to democratic processes.
Indian Penal Code, 1860
The IPC provides remedies through:
Sections 499 and 500 (Defamation),
Section 469 (Forgery for the purpose of harming reputation),
Section 505 (Statements conducing to public mischief).
Yet, these provisions were drafted in a pre-digital era and do not account for the scale, speed, and sophistication of AI-generated misinformation.
Election Laws
The Representation of the People Act, 1951 prohibits corrupt practices and undue influence during elections. However, there is no explicit regulation addressing AI-driven electoral misinformation, leaving enforcement agencies to rely on general provisions.
Case Laws
Anil Kapoor v. Simply Life India & Ors (2023, Delhi HC): A landmark case where the Delhi High Court granted an omnibus, ex parte injunction restraining defendants from using the actor’s name, likeness, image, and voice (including AI-generated deepfakes) for commercial gain without consent.
Amitabh Bachchan v. Rajat Nagi & Ors (2022, Delhi HC): The court granted an ad interim in rem injunction against the unauthorized use of the actor's personality rights and personal attributes (voice, name, image) in deepfake videos, acknowledging the threat to privacy and reputation.
Rashmika Mandanna Deepfake Case (2023, Delhi): Following the viral spread of a non-consensual deepfake video impersonating the actor, this case led to arrests under Sections 66D (impersonation) and 66E (privacy violation) of the Information Technology Act.
Slayy Point Member v. John Doe (2026, Delhi HC): The Delhi High Court directed Meta, Google, X (Twitter), and Reddit to immediately remove AI-generated obscene content targeting a female influencer, terming it a “patent breach of fundamental rights to privacy” and “defamatory”.
Comparative and International Perspective
Globally, countries are recognizing the dangers of deepfakes. The European Union's AI Act imposes transparency obligations on AI-generated content, including mandatory disclosure and watermarking. In the United States, several states have enacted laws criminalizing the use of deepfakes in elections and in non-consensual pornography. These developments indicate an emerging international consensus that deepfakes require targeted legal regulation.
India, however, remains dependent on fragmented laws and advisory guidelines, underscoring the need for comprehensive legislation.
Conclusion
Deepfakes represent a paradigm shift in the nature of threats faced by modern democracies. Unlike traditional misinformation, deepfakes exploit human trust in audiovisual evidence, making deception more persuasive and widespread. In the Indian context, where democratic participation is vast and diverse, the consequences of unchecked deepfake proliferation could be catastrophic—undermining elections, destabilizing public order, and eroding trust in institutions.
While existing laws provide partial remedies, they are insufficient to address the scale and complexity of the problem. There is an urgent need for a dedicated legal framework that defines deepfakes, criminalizes malicious creation and dissemination, imposes obligations on intermediaries, and incorporates technological safeguards such as AI detection and content authentication. At the same time, regulation must be carefully crafted to avoid chilling legitimate speech and innovation.
Ultimately, protecting democracy in the age of artificial intelligence requires a synergistic approach combining law, technology, institutional accountability, and public awareness.
FAQs
1. What are deepfakes?
Deepfakes are AI-generated synthetic media where a person’s appearance, voice, or actions are digitally manipulated to appear real.
2. Why are deepfakes a threat to democracy?
They spread misinformation, manipulate elections, damage reputations, and erode public trust in political and media institutions.
3. Are deepfakes illegal in India?
There is no specific law banning deepfakes, but their malicious creation or dissemination may be prosecuted under the IT Act, the IPC, and election laws, depending on the nature of the content.
4. Which fundamental rights are affected by deepfakes?
Deepfakes affect the right to privacy, right to reputation, and the right to freedom of speech and expression.
5. What legal reforms are needed?
India needs a dedicated deepfake regulation law, stronger intermediary liability norms, electoral safeguards, and AI governance mechanisms.