Regulating AI-Generated Deepfakes: Balancing Free Speech and Protection from Harm in the Digital Age



Author: Nehal Saxena, Christ University, Delhi NCR

To the Point


The emergence of artificial intelligence (AI) has introduced complex legal challenges, particularly with the rise of deepfakes—highly realistic but manipulated audio-visual content. While deepfake technology offers creative applications in filmmaking and education, its misuse has grown dangerously common. From political misinformation and identity theft to non-consensual pornography and financial fraud, deepfakes pose serious threats to individual dignity, democratic discourse, and national security. At present, no law in India specifically addresses the production or distribution of deepfakes. Although certain provisions—such as Sections 499 and 500 of the IPC (defamation), Section 66E of the IT Act (violation of privacy), and Section 67 of the IT Act (obscenity)—offer partial redress, they fall short of tackling the full scope of harm caused by AI-generated synthetic media.

Regulatory efforts are complicated by the tension between protecting freedom of expression under Article 19(1)(a) and imposing reasonable restrictions under Article 19(2). Globally, countries like the United States have begun implementing deepfake-specific laws, especially in contexts such as elections and pornography.

However, India still lags behind in establishing a comprehensive legal framework. To address this gap, there is a pressing need for specific legislation that defines and penalizes malicious deepfake production and distribution.

Additionally, AI watermarking techniques, mandatory disclosure norms, platform accountability under the 2021 Intermediary Rules, and stronger public awareness are critical tools in combating the misuse of deepfakes. Legal frameworks must be future-ready and technologically informed to preserve innovation while preventing harm. In the long run, regulating deepfakes is not about restricting technology—it is about upholding truth, protecting citizens, and maintaining trust in the digital ecosystem.

Use of Legal Jargon


In addressing the legal regulation of AI-generated deepfakes, several foundational legal terms and doctrines become critical. Central among these is the principle of freedom of speech and expression, protected by Article 19(1)(a) of the Indian Constitution. However, this right is not absolute and is subject to reasonable restrictions under Article 19(2), including in the interests of public order, decency, defamation, and the sovereignty and integrity of India—grounds directly relevant to deepfake misuse.


The mens rea or mental element behind the creation and dissemination of deepfakes becomes important when establishing criminal liability, particularly in cases involving defamation, fraud, or obscenity under the Indian Penal Code and the Information Technology Act, 2000. Courts may apply the doctrine of proportionality to assess whether restrictions on speech imposed by anti-deepfake regulations are justified and necessary in a democratic society. The intermediary liability framework under Section 79 of the IT Act places conditions on platforms hosting user-generated content. Their compliance with due diligence obligations under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 is essential to claim safe harbour protection from prosecution.


Deepfakes that impersonate or misrepresent individuals may violate their right to privacy, a fundamental right recognised in Justice K.S. Puttaswamy v. Union of India (2017) 10 SCC 1. Moreover, the unauthorised use of a person’s likeness or voice may give rise to claims under the right of publicity and moral rights in copyright law. Legal regulation of deepfakes must therefore balance constitutional rights with the state’s compelling interest in preventing harmful expression, ensuring that laws are narrowly tailored and not overbroad.


The Proof


Imagine receiving a video that looks and sounds exactly like a friend or a public figure saying something shocking—only to later discover it was completely fake. That is the unsettling reality of deepfakes. These AI-generated videos, audio clips, and images are becoming so realistic that even technology experts struggle to distinguish them from genuine content. In 2020, a journalist in Delhi discovered her face morphed into an explicit video, which spread rapidly online. Despite her innocence, the social damage was already done. In another case, a political deepfake of a prominent leader speaking in a local dialect during an election misled thousands of voters.

Such incidents are not rare one-offs—they are part of a fast-growing global trend. A 2023 report by cybersecurity firm Sensity AI found that deepfake content had grown by over 900% in just a few years. Shockingly, the vast majority of these deepfakes targeted women, often using their images without consent in non-consensual pornography. In business, scammers are now using deepfake voice calls to impersonate CEOs and steal money from companies.

These stories show that the dangers of deepfakes are not merely theoretical—they are real, personal, and deeply harmful. Victims are often left without proper legal remedies, while perpetrators hide behind fake accounts or jurisdictions beyond India's legal reach. Even social media platforms struggle to act quickly enough. These examples demonstrate that our current laws are inadequate. As technology races ahead, the law must catch up—not just to punish wrongdoers but to protect people from being digitally manipulated, emotionally scarred, or reputationally destroyed. If we wait, the cost will not just be legal—it will be human.

Abstract


In today’s digital world, what we see and hear can no longer be trusted at face value. Deepfakes—videos, sounds, or images that have been altered to appear alarmingly real—are making it harder to distinguish between fact and fiction as Artificial Intelligence (AI) advances quickly. While AI-driven content creation can be used for entertainment, art, or education, it also has a dangerous side. Deepfakes have already been used to spread political misinformation, impersonate public figures, blackmail individuals, and target women in non-consensual explicit content.


This article explores the urgent need to regulate deepfakes in India, a country with over 800 million internet users and growing digital engagement. While freedom of speech is protected under Article 19(1)(a) of the Constitution, this right is not unlimited. The misuse of deepfake technology calls for a legal balance—ensuring that free speech is respected but not abused to harm others. Currently, India has no specific law dealing with deepfakes.

Existing laws under the Information Technology Act, Indian Penal Code, and Copyright Act offer some relief, but they are scattered and reactive.

Through case examples, global comparisons, and constitutional analysis, this paper argues for a more focused and humane approach to regulating deepfakes. It emphasizes the importance of consent, privacy, and dignity—values rooted in Article 21 of the Constitution. At the same time, it respects the role of satire, creativity, and freedom of expression in a democracy.


Ultimately, the article calls for a law that doesn’t fear technology but understands it—one that protects people without silencing legitimate voices. Because in an age where AI can fake almost anything, it’s time our laws become smart enough to tell the difference.


Case Laws


The legal questions raised by deepfakes revolve around privacy, dignity, free speech, and accountability. While India does not yet have a case directly involving deepfakes, several landmark judgments lay the foundation for how courts might approach the issue. Among the most significant is Justice K.S. Puttaswamy (Retd.) v. Union of India (2017) 10 SCC 1, in which the Supreme Court held that the right to privacy is a fundamental right under Article 21. The Court recognised that every person has control over their identity, personal space, and the way they are represented. This case is crucial for deepfake regulation because when someone's face or voice is used without consent, it is a clear invasion of privacy.


In Shreya Singhal v. Union of India (2015) 5 SCC 1, the Court struck down Section 66A of the IT Act, holding that vague and overbroad restrictions on online speech violate the freedom of speech and expression guaranteed by Article 19(1)(a).

However, the Court also acknowledged that reasonable limits can be placed on speech to prevent harm. This sets a precedent for regulating harmful deepfakes without restricting genuine expression.


In Common Cause v. Union of India (2018), the Court reiterated that dignity is intrinsic to the right to life. Deepfakes, especially when used for defamation or revenge, directly undermine this dignity. Also notable is Navtej Singh Johar v. Union of India (2018) 10 SCC 1, where the Court spoke about constitutional morality—the idea that laws should reflect values like justice, equality, and respect for individual choice. This concept can help shape how we judge harmful uses of AI without stifling innovation.


Together, these cases offer a strong constitutional base to demand legal protections against deepfakes—showing that while technology changes, our rights must still be respected.

Conclusion

Drawing the Line Between Innovation and Harm


We live in a time where technology can create wonders—but also illusions. Deepfakes, born out of advanced Artificial Intelligence, are a prime example of this dual nature. On one hand, they offer creative and educational possibilities. On the other, they pose serious risks—misusing someone’s identity, damaging reputations, manipulating elections, or spreading false narratives. In such a world, the law cannot afford to stay silent or outdated. This article has explored the growing influence of deepfakes and how they collide with core legal values like the right to privacy, freedom of speech, and human dignity. While India currently uses a patchwork of laws to respond to deepfake misuse, it lacks a clear, specific legal framework that understands the technology and its implications. As victims struggle for justice and platforms remain under-regulated, a legal vacuum continues to grow.
What India needs is not fear-driven censorship, but smart, rights-based regulation. A future-ready legal framework must begin by defining deepfakes, laying down rules for consent, mandating disclosure, and placing accountability on creators and platforms alike. Importantly, such a law should not blanket-ban innovation or artistic expression but distinguish between harmful misuse and harmless creativity. Satire, parody, and political speech must be protected—but not at the cost of another person’s dignity or safety.
Deepfakes raise one of the most difficult questions in modern law: How do we balance technological freedom with emotional and legal responsibility? The answer lies not in choosing one over the other, but in crafting a fair middle ground. India must act now—not just to punish the wrongdoers, but to protect the innocent. Because in a world where even reality can be faked, trust becomes the most precious asset. And the law must play its part in preserving that trust—for citizens, for democracy, and for the future.

FAQs

What exactly is a deepfake?
A deepfake is a video, audio clip, or image that has been altered using artificial intelligence to make it look or sound like something it is not—such as a person saying or doing things they never actually did. While deepfakes can be used creatively, they become harmful when used to deceive or defame.
Is creating or sharing a deepfake illegal in India?
Not always. If a deepfake is made for entertainment or parody and does not harm anyone, it may not be illegal. But if it is used to impersonate, harass, spread false information, or cause emotional or reputational damage, it can attract legal action under various laws such as the IT Act or the Bharatiya Nyaya Sanhita (BNS).


Can someone go to jail for making a harmful deepfake?
Yes. If the deepfake involves criminal offences—like obscenity, defamation, fraud, or invasion of privacy—the creator can face serious penalties, including imprisonment. Laws like Section 66E and Section 67 of the IT Act may apply.
What should I do if I become a victim of a deepfake?
Act quickly. File a complaint with the local cybercrime cell or police station, and report the content on the platform where it is posted. Preserve evidence (screenshots, links, messages). You can also seek legal help to have the content taken down and to pursue further remedies.


Are social media platforms responsible for deepfake content?
They are expected to act responsibly. Under Indian law, platforms enjoy "safe harbour" protection—but only if they follow due diligence obligations. If they ignore complaints or delay action, they can be held accountable.
