Deepfakes and Due Process: Addressing Synthetic Realities in Indian Law

Author: Anshika Rastogi, B.COM LL.B., 2nd year, Lloyd Law College, Greater Noida.

Abstract

As deepfake technology becomes increasingly accessible, its misuse poses urgent threats to individual dignity, democratic discourse, and digital security in India. This article examines the growing legal gap surrounding deepfakes in India and emphasizes the urgent need for a fair, transparent approach: one that not only supports and protects victims but also respects the rights of the accused. Drawing on landmark rulings like Shreya Singhal v. Union of India and Puttaswamy v. Union of India, the piece explores how existing laws such as the IPC and the IT Act offer fragmented remedies ill-suited to the complexity of synthetic media. Legal principles like mens rea, cyber defamation, and platform liability are invoked to frame accountability in this evolving space. Through an analysis of rising deepfake abuse, particularly abuse targeting women, the article calls for clearer regulations, enhanced digital forensics, institutional training, and protective mechanisms for victims, without compromising freedom of expression. Ultimately, it advocates for a human-centered legal response where truth, fairness, and dignity remain legally enforceable in the face of technological distortion.

Introduction

Imagine a clip of a political leader saying something scandalous, only to discover later that it never happened, or a woman finding her face edited into an explicit video that goes viral. These are no longer science-fiction fears; they are real-world consequences of deepfake technology. In India, the misuse of deepfakes is rising sharply, but the legal tools to fight it remain scattered and underdeveloped. This article takes a close look at the growing threat, the available legal protections, and the critical need to strengthen due process so that victims, platforms, and even the accused are treated fairly.

To The Point

Deepfakes are already affecting real lives and elections in India. As the technology becomes easier to access, the risks will only increase. Without strong laws, trained enforcement, and a clear legal pathway, justice will either be delayed or denied.

Use of Legal Jargon

Mens rea helps explain that a deepfake's legality depends on the creator's intent, not just the outcome.

Intermediary liability is important when discussing whether social media platforms are responsible for fake content hosted on them.

Due process frames this article's core concern: ensuring fairness in how deepfake cases are investigated and tried.

Digital forensics is relevant in proving whether an image or video is fake or authentic, which is crucial in court.

Cyber defamation directly ties deepfake abuse to harms actionable under established defamation law.

The chilling effect helps explain why overly vague or harsh regulations might discourage free speech, even when that is not the intent.

The Proof

According to a 2024 CyberPeace Foundation analysis, deepfake cases in India have increased by 300%. Over 70% of the photos were taken from women without their consent. At least six fake videos of political candidates circulated on platforms during the 2024 Lok Sabha elections before takedown orders could be enforced. India still doesn't have a deepfake law. The primary legal foundations include the Information Technology Act, the Indian Penal Code, and judicial precedents, none of which were originally designed to address the unique challenges posed by deepfake technology.

What Due Process Should Look Like in Deepfake Cases

The legal principle of due process ensures that justice is administered fairly and with accountability. In deepfake cases, this means victims should be able to obtain legal redress quickly and easily. The process begins with filing a formal complaint under IPC sections such as 354A (sexual harassment) and 500 (defamation), or IT Act sections 66E (violation of privacy) and 67 (obscenity). Law enforcement should provide timely digital forensics to verify whether the content is AI-generated, and courts must be willing to issue prompt takedown orders or interim relief to prevent further damage. It is essential that victims, especially women, are guaranteed the right to privacy, timely legal assistance, and recourse for emotional and reputational damages.

People accused of creating or sharing deepfakes, especially when their intent is unclear, should be treated fairly. The accused have the right to understand the charges, respond with their side of the story, and have their voice heard before any action is taken. They should also be protected from media trials or arrest without proper cause.

The role of the judiciary and investigating agencies is critical here. They must ensure that digital evidence complies with Section 65B of the Indian Evidence Act and that investigation procedures are balanced and not biased against either victims or the accused. Courts should seek expert input from digital forensic specialists before passing judgments based on synthetic media. Due process in this context is not a luxury; it is the only way to ensure that tech-driven crimes are met with human-centered justice.

Case Laws

Shreya Singhal v. Union of India (2015)

The Supreme Court struck down Section 66A of the IT Act, recognizing that its vague and sweeping language could silence free expression and criminalize innocent speech. By doing so, the Court stood up for the fundamental right to speak without fear in a digital age. This case is central when considering how to regulate online content without infringing on free expression. 

Puttaswamy v. Union of India (2017)

This landmark judgment affirmed that privacy is a fundamental part of life and liberty under Article 21 of the Constitution. 

Faheema Shirin v. State of Kerala (2019)

The Kerala High Court recognized that in today’s world, meaningful access to education and privacy depends on internet connectivity. By acknowledging this, the Court brought digital access within the fold of fundamental rights, reminding us that the Constitution must grow with the times to truly protect human dignity.

Sabu Mathew George v. Union of India (2018)

The Court ordered platforms to proactively block illegal prenatal sex-determination content. By requiring platforms to block harmful content before it spreads, the Court created a useful precedent that can also apply to deepfakes where there is a chance to stop the damage before it happens.

Conclusion

Deepfakes are becoming increasingly easy to produce and more difficult to detect. The truth is that our current laws have not kept pace with this change, and unless we address that soon, more people could be harmed. We need to create fair and simple rules that ensure everyone is treated properly under the law. That includes specific laws to define and punish malicious deepfake creation, institutional training in digital forensics, and strict (but fair) rules for content platforms. True justice rests on a profound principle: every individual, whether wronged or in the wrong, must be met with empathy, respect, and an unwavering commitment to fairness.

FAQ

Q1. Can individuals be penalized for creating or sharing harmful deepfakes in India?

While there isn't a specific law targeting deepfakes, those who create or share them can still be prosecuted under existing legal provisions if the content causes harm or invades someone's privacy. Relevant sections include IPC Section 500 for defamation, IPC Section 354D for stalking, and Sections 66E and 67 of the IT Act for privacy violations and the circulation of obscene content.

Q2. Are deepfakes admissible as evidence in court?

Yes, but courts apply strict checks. Digital evidence must adhere to the requirements of Section 65B of the Indian Evidence Act and may require validation from a qualified expert.

Q3. To what extent are social media platforms accountable for harmful or misleading content? 

Under the 2021 IT Rules, platforms must act within 36 hours of being notified about harmful content. However, most platforms lack the tools to detect deepfakes proactively, and stronger enforcement is needed.

Q4. Why are women disproportionately affected?

Over 90% of deepfake pornography reported globally uses women's faces. These videos are created without consent, causing serious social and emotional harm.

Q5. Are any reforms being planned?

Several legal experts and tech panels have proposed a new Synthetic Media Regulation Act, which could define deepfakes, set up fast-track legal remedies, and require watermarking of AI-generated content. But as of mid-2025, no such law has been passed.
