Author: Durvankur Manjrekar, School of Law and Public Policy, Avantika University
Abstract
This article steps into the unsettling heart of a modern dilemma: the escalating legal and regulatory challenges posed by deepfakes and other forms of synthetic media in India. It examines how our existing legal tools, like the Information Technology Act and the Indian Penal Code, often fall short in adequately addressing the profound harms these AI-generated fictions can inflict, from poisoning public discourse with misinformation to robbing individuals of their digital identity and dignity through malicious content. We’ll explore how courts are striving to offer some solace through principles like personality rights and the fundamental right to privacy. The discussion will also delve into the increasingly insistent calls from government bodies and the public for specialised laws. But it’s a tightrope walk: how do we staunch the flow of digital deception without stifling the very freedom of speech that defines our democracy? This piece concludes by mapping out a comprehensive, adaptive strategy for India to navigate this complex, often personal, intersection of groundbreaking technology, evolving law, and our deepest constitutional values.
To the Point
Imagine a world where your face, your voice, your very essence, can be cloned and manipulated by a few lines of code, then unleashed upon the internet as a lie. This is the reality that deepfakes, products of rapidly advancing Artificial Intelligence, are creating in India. These eerily convincing synthetic images, videos, and audio clips are increasingly becoming weapons, deployed for everything from tarnishing reputations and spreading insidious misinformation to deeply personal privacy invasions and sophisticated financial frauds. Our current legal frameworks, though valiant, are like sandbags against a rising tide; designed for a different era, they often lack the precision and broad reach needed to confront the sheer scale, terrifying sophistication, and lightning-fast spread of these digital doppelgangers. This isn’t just a legal debate; it’s an urgent, national conversation. We need a robust, dedicated legal shield that can hold deepfake creators and spreaders accountable, ensure that online platforms act swiftly to dismantle these digital fictions, and empower every citizen with the knowledge to discern truth from AI-generated illusion. All this, while carefully guarding the sacred constitutional right to freedom of speech and expression – a delicate, vital balancing act.
Use of Legal Jargon
To navigate this complex digital landscape, we’ll employ precise legal language, serving as our compass. Key terms will include: deepfakes (the fabricated reality), synthetic media (the broader spectrum of AI-generated content), generative AI (the creative engine behind it all), misinformation (unintentionally false information) and disinformation (deliberately false information), identity theft (stealing who you are online), defamation (damaging reputations), obscenity (morally offensive content), non-consensual intimate images (NCII) (deeply personal and violating content created without permission), personality rights (your right to control your own image), right to privacy (your digital sanctuary), freedom of speech and expression (Article 19(1)(a) of the Indian Constitution) (the cornerstone of our democracy), reasonable restrictions (Article 19(2)) (the necessary boundaries), intermediary liability (the responsibility of social media platforms), due diligence (their obligation to act with care), data fiduciaries (those who control your data) and data principals (you, the individual), digital watermarking (a digital fingerprint for AI content), traceability (the ability to follow its origins), cyber terrorism (digital attacks threatening national security), injunctions (court orders to stop something), and ex-parte orders (orders issued without hearing all parties). We’ll also refer to foundational Indian laws: the Information Technology Act, 2000 (IT Act), the Indian Penal Code (IPC) – since succeeded in substance by the Bharatiya Nyaya Sanhita, 2023 – and the Digital Personal Data Protection Act, 2023 (DPDP Act) – pieces of our legal puzzle.
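For readers unfamiliar with how digital watermarking and traceability might work in practice, a minimal sketch may help. This is a deliberately simplified illustration, not a real watermarking scheme (production systems use cryptographically signed provenance manifests or imperceptible watermarks embedded in the media itself), and every name in it is hypothetical. The core idea is simply that a provenance record binds a content fingerprint and an “AI-generated” label to a file, so any later tampering is detectable:

```python
import hashlib
import json

def label_ai_content(data: bytes, generator: str) -> dict:
    """Build a toy provenance record for a piece of synthetic media."""
    return {
        "sha256": hashlib.sha256(data).hexdigest(),  # content fingerprint
        "generator": generator,                      # declared AI source
        "ai_generated": True,                        # the transparency label
    }

def verify_label(data: bytes, record: dict) -> bool:
    """Check that the recorded fingerprint still matches the content."""
    return record["sha256"] == hashlib.sha256(data).hexdigest()

clip = b"synthetic video bytes"
record = label_ai_content(clip, "hypothetical-model-v1")
print(json.dumps(record, indent=2))
print(verify_label(clip, record))         # True: content unchanged
print(verify_label(clip + b"!", record))  # False: any edit breaks the match
```

A regulator mandating “digital watermarks or clear labels” is, in effect, asking platforms to generate and check records of this kind at scale, using far more robust techniques than this toy hash.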
The Proof
The unsettling rise of deepfakes isn’t merely a theoretical concern; it’s a lived reality, with numerous high-profile incidents shaking India and galvanising our government into action. A stark wake-up call arrived in November 2023 with the Rashmika Mandanna deepfake case. A viral video that grafted the actress’s face onto another woman’s body, created and circulated without her consent, sent shockwaves through the country, laying bare the chilling ease with which such violations can occur and the devastating blow they inflict on an individual’s privacy and dignity. This incident, splashed across national media, directly spurred the Ministry of Electronics and Information Technology (MeitY) to issue forceful advisories to social media intermediaries – first in December 2023, then a sharpened version in March 2024. These advisories, based on the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, warned that failing to remove deepfakes and other illegal content within 36 hours of notification would cost platforms their safe-harbour protection under Section 79 of the IT Act, exposing them to liability under the ordinary law.
Our courts, too, are grappling with this new frontier, offering vital “proof” of the legal system’s engagement:
Case Laws
The Delhi High Court’s landmark judgments in cases involving Anil Kapoor v. Simply Life India & Ors. (2023) and Jackie Shroff (Jaikishan Kakubhai Saraf Alias Jackie Shroff v. The Peppy Store & Others, 2024) stand as beacons. In both instances, the Court decisively issued injunctions, safeguarding the actors’ cherished personality rights against the unauthorised use of their likeness and voice, even in deepfake form, for commercial gain. These rulings are more than legal victories; they are a testament to the judiciary’s agility in extending existing principles to protect individuals from these novel technological incursions, acknowledging the very real economic and reputational harm they inflict.
The Rajat Sharma v. Union of India & Others (2024) case saw the Delhi High Court swiftly mandate the removal of a deepfake video that falsely portrayed the esteemed journalist endorsing a fraudulent scheme. This ruling highlights the judiciary’s strong recognition of deepfakes as tools of deception that can undermine public trust and influence narratives.
In the recent case of National Stock Exchange of India Ltd. v. Meta Platforms, Inc. & Others, the NSE obtained an injunction against Meta concerning deepfake videos targeting its senior officials. This development underscores how current legal frameworks—particularly those related to defamation and identity misappropriation—are being actively employed to counter the immediate harms posed by deepfake content.
In its recent ruling on W.P.(C) No. 300/2025—Narendra Kumar Goswami v. Union of India & Others—the Supreme Court declined to entertain a PIL seeking comprehensive regulation of deepfakes. Instead, it directed the petitioner to approach the Delhi High Court, signalling a deliberate judicial strategy: recognising the seriousness of the issue while favouring a focused and ongoing deliberation within a forum already engaged with such matters.
As of July 2025, the Karnataka government’s proposed legislation to impose jail terms for spreading “fake news” and other misinformation, though broad, clearly signals a rising determination at the state level to rein in digital chaos. While it has understandably raised eyebrows among free speech advocates for its potentially sweeping definitions, it unmistakably mirrors the escalating alarm and the push for stronger controls. This sentiment echoes the global conversation, with voices from Bengaluru urging India to consider laws akin to Denmark’s, which grants individuals “copyright over their faces and voices.” This is more than a legal debate; it’s a profound societal demand for rightful ownership and protection of our digital selves.
These incidents and the judiciary’s evolving responses collectively paint a vivid picture: a purely reactive approach, relying on general laws, is not enough. They underscore the pressing, undeniable need for a dedicated, comprehensive legal shield in India to protect its citizens from the digital shadows cast by deepfakes.
Conclusion
The unsettling emergence of deepfakes and synthetic media presents India with an undeniable and deeply personal legal challenge, demanding a comprehensive and finely tuned response. While our existing legal arsenal—the IT Act, the IPC, and the budding DPDP Act—offers some limited avenues for recourse against the harms deepfakes inflict, their fragmented nature, designed for a different digital age, renders them increasingly inadequate against the sheer technological sophistication and pervasive impact of AI-generated content. Yet, our courts, particularly the Delhi High Court, have shown remarkable foresight, bravely applying principles of personality rights and privacy to offer immediate relief and forge crucial precedents, demonstrating a judiciary that adapts and responds.
However, a more robust, forward-looking legislative blueprint is not just advisable; it’s imperative. This new framework must be precise: clearly defining deepfakes and their malicious uses, laying down unequivocal obligations for social media intermediaries to swiftly identify and remove harmful content, and perhaps even mandating transparency through digital watermarks or clear labels for all AI-generated content. Beyond legal texts, we must bolster the forensic capabilities of our law enforcement agencies and ignite a nationwide awareness campaign to empower every citizen to recognise and resist digital deception. Crucially, any new legislation must walk a meticulous tightrope, balancing the vital imperative to combat harmful synthetic media with the fundamental, non-negotiable constitutional guarantee of freedom of speech and expression (Article 19(1)(a)). The true art will lie in crafting laws that effectively curb disinformation and shield individual dignity without inadvertently stifling legitimate artistic expression, honest satire, or creative parody. India’s path forward in tackling deepfakes hinges on building a forward-looking legal framework—one that blends technological expertise with a strong commitment to individual rights.
FAQs
Q1: What’s the legal concern around deepfakes in India?
Deepfakes pose threats like misinformation, defamation, identity theft, non-consensual intimate content, and risks to public order and security.
Q2: Is there a specific deepfake law in India?
No dedicated law exists yet. But the IT Act, IPC, and Digital Personal Data Protection Act offer partial safeguards. New legislation is under discussion.
Q3: How do personality rights help?
They protect a person’s image, name, and voice, especially from unauthorised commercial use. Courts now allow injunctions to block harmful deepfakes.
Q4: What duties do social media platforms have?
As intermediaries, they must act fast, generally within 36 hours of content being flagged. If they fail, they risk losing their safe-harbour immunity under Section 79 of the IT Act, read with the IT Rules, 2021.
Q5: How does regulation balance free speech?
Laws must carefully target harmful deepfakes without stifling satire, parody, or artistic expression. Article 19(2) permits restrictions on free speech, but only reasonable ones on specified grounds.