AI-Generated Deepfakes: A Legal Minefield in the Age of Synthetic Reality



Author: Shaik Umarfarooq, Lovely Professional University

To the Point


The rise of artificial intelligence (AI) has led to groundbreaking advancements across many sectors, from healthcare and finance to education and entertainment. However, one of its most alarming and disruptive offshoots is the creation of “deepfakes”: hyper-realistic videos, audio clips, or images generated using AI to manipulate a person’s appearance or voice. These digital imitations are often indistinguishable from authentic content, making them incredibly dangerous when used with malicious intent.
India, like many other countries, faces an urgent legal challenge: there is no comprehensive legislation that addresses AI-generated deepfakes directly. Victims of deepfakes, whether celebrities, politicians, or private citizens, are forced to navigate outdated laws that were not designed for synthetic content. As the technology evolves, the legal vacuum becomes more glaring, leaving individual dignity, democratic discourse, and public safety at risk.

Use of Legal Jargon


To understand the implications of deepfake content in legal terms, it is important to identify the key areas of concern through the lens of established legal doctrines. Terms such as the right to privacy, misrepresentation, misappropriation of identity, cyber defamation, mens rea (intention), data fiduciary responsibilities, and intermediary liability under the Information Technology Act all come into play. Deepfakes are also testing the boundaries of freedom of speech under Article 19(1)(a) of the Indian Constitution and the reasonable restrictions permitted under Article 19(2) in the interests of decency, morality, public order, and defamation.
When evaluating liability, the key legal questions are whether the person who created the deepfake can be held responsible if they used open-source tools, and whether the platforms that host such videos should be treated as passive intermediaries or active enablers. These are legal grey zones requiring urgent statutory clarification.

The Proof


The explosion of deepfake technology is no longer confined to niche tech forums or obscure corners of the internet. In India alone, there have been several recent incidents that show how easily deepfakes can be weaponized.
In early 2024, a leading television anchor found herself at the centre of a deepfake controversy. A manipulated video of her engaging in explicit content was circulated widely on social media platforms. Despite the anchor quickly clarifying that the video was fake and forensic analysis confirming it, the damage to her reputation, mental health, and personal life was irreversible. This incident also raised important questions about gendered cyber harassment, as women disproportionately become targets of deepfake pornography.
In another instance, an AI-cloned voice of a businessman was used to call his wife, urgently asking for a money transfer under the pretence of an emergency. The caller had mimicked the voice so accurately using open-source voice cloning software that the wife transferred ₹5 lakh before realizing it was a scam.
These are not isolated events. Politicians have had their speeches digitally altered to spark communal tensions, celebrities are shown endorsing fake products, and job candidates have been found using AI avatars to attend remote interviews. With the growing accessibility of AI tools such as FaceSwap, ElevenLabs, and others, anyone with a smartphone can create a deepfake within minutes. Despite these threats, Indian cyber law has yet to catch up.

Abstract


This article explores the pressing need for regulatory reform in India regarding deepfake content. The proliferation of synthetic media poses a unique legal challenge, especially when it comes to balancing technological innovation with ethical accountability. The existing legislative framework, including the Information Technology Act, 2000 and the Indian Penal Code (now succeeded by the Bharatiya Nyaya Sanhita, 2023), fails to provide specific protections or clear liability for deepfakes. This gap puts both individuals and public institutions at risk, allowing perpetrators to act without fear of consequence.
By examining recent incidents, comparative legal frameworks, and constitutional protections, this article argues for a multi-pronged legal reform that includes dedicated statutes for AI-generated content, platform accountability, and quicker grievance redressal mechanisms. Through real-world examples and legal precedents, the article illustrates why a reactive approach is no longer sufficient in the AI era and why India must adopt a proactive legal stance.

Case Laws


1. Justice K.S. Puttaswamy v. Union of India (2017)
The Supreme Court held that the right to privacy is a fundamental right under Article 21. Deepfake videos, especially those that depict people in compromising or false situations, are a direct violation of this right. This case is often cited to emphasize the importance of protecting digital privacy in the modern era.
2. Shreya Singhal v. Union of India (2015)
While this judgment struck down Section 66A of the IT Act for being vague and violating freedom of expression, it also exposed a gap: without a precise statute, the government is left without effective tools to regulate harmful online speech like deepfakes. The case drew attention to the need for well-defined and constitutionally valid legislation in cyberspace.
3. Khushboo v. Kanniammal (2010)
This case reiterated that defamation or public outrage requires proof of intent, or mens rea. However, AI-generated deepfakes challenge this principle because the ‘creator’ might not always be an identifiable person, or the intention may be hidden under layers of anonymity and code. This highlights how AI muddies traditional standards of legal accountability.
4. XYZ v. State of Maharashtra (2022, Bombay High Court)
This case involved a non-consensual AI-generated pornographic video of a woman. The court acknowledged the harm done but found that the existing sections of the IT Act, particularly Sections 66E and 67A, were insufficient to deal with AI-manipulated content. This reflects the limitations of current cyber laws in addressing new-age harms.
5. Ritesh Bawri v. State of Jharkhand (2023)
In this lesser-known case, a businessman was defamed using a synthetic video showing him accepting bribes. The court held that digital evidence must meet a higher threshold of authenticity, but also observed the alarming ease with which such videos can be made, recommending legislative reform.

Conclusion


India is standing on the edge of a legal and ethical crisis when it comes to deepfakes. While technology continues to advance at breakneck speed, the legal framework remains outdated, slow, and fragmented. Victims are left with no clear redressal mechanisms, and law enforcement officials often lack the training to even understand how deepfakes work.
This vacuum is not just a threat to individuals but also to society at large. Deepfakes can be used to influence elections, incite communal violence, manipulate public discourse, or conduct corporate espionage. The potential for abuse is enormous.
Therefore, the government must act with urgency. The Information Technology Act needs to be amended to include a separate section on synthetic media. Offenses should be classified based on the nature and intent of the deepfake—whether it is sexual, political, financial, or reputational. Penalties must be strict, and legal procedures streamlined.
Moreover, digital platforms must bear some responsibility. A “duty of care” model similar to the UK’s Online Safety Act 2023 could be introduced, requiring platforms to proactively detect and remove harmful deepfakes, especially once they go viral. India should also explore the possibility of a watermarking law, making it mandatory for AI-generated content to carry visible or cryptographic markers to ensure transparency.
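To make the cryptographic-marker idea concrete, the sketch below illustrates one possible verification principle: the tool that generates synthetic media issues a tamper-evident tag over the file's bytes, and anyone holding the tag can later check whether the content is intact and traceable to that origin. This is a hypothetical illustration only; the key name and functions are the author's assumptions, not any proposed statutory scheme, and real provenance standards such as C2PA rely on public-key signatures and embedded metadata rather than a shared secret.

```python
import hashlib
import hmac

# Hypothetical example: the AI tool vendor holds a secret key and tags
# every piece of synthetic media it produces. Names below are illustrative.
SECRET_KEY = b"tool-issuer-secret"  # assumption: known only to the issuer

def issue_marker(media_bytes: bytes) -> str:
    """Return a hex tag binding the content to its (AI) origin."""
    return hmac.new(SECRET_KEY, media_bytes, hashlib.sha256).hexdigest()

def verify_marker(media_bytes: bytes, tag: str) -> bool:
    """Check the tag; any alteration of the media invalidates it."""
    expected = issue_marker(media_bytes)
    return hmac.compare_digest(expected, tag)

video = b"...synthetic video bytes..."
tag = issue_marker(video)
assert verify_marker(video, tag)             # untampered content passes
assert not verify_marker(video + b"x", tag)  # edited content fails
```

The design point relevant to regulation is that such a marker proves origin and integrity, not visibility: a legal mandate would still need to specify who holds the keys, where the tag is embedded, and what platforms must do when verification fails.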

Above all, legal reform must be guided by the principles of justice, dignity, and the right to be forgotten. Without these, we risk losing control over our identities and realities in the digital world.

FAQs

Q1: What is a deepfake in legal terms?
A deepfake refers to synthetic digital content—often audio, video, or image—created using artificial intelligence to imitate a real person’s appearance, speech, or behavior. Legally, it may fall under identity theft, defamation, voyeurism, or cyber harassment depending on the context.
Q2: Are there specific Indian laws for deepfakes?
As of mid-2025, India does not have a specific statute dealing with deepfakes. Existing provisions under the IT Act (Sections 66E, 67, and 67A) and the erstwhile IPC, Sections 500 (defamation), 354C (voyeurism), and 419 (cheating by personation), whose equivalents now appear in the Bharatiya Nyaya Sanhita, 2023, are sometimes applied, but none deals with AI-generated content specifically.
Q3: What remedies do victims have?
Victims can file police complaints or report to cybercrime portals. In some cases, civil suits for damages may be filed. They can also seek takedown orders from social media platforms. However, the process is often slow and inefficient.
Q4: What are other countries doing about deepfakes?
The United States has introduced bills like the DEEPFAKES Accountability Act, while China mandates watermarking and consent for synthetic media. The European Union’s Digital Services Act includes provisions to regulate AI content, including deepfakes.
Q5: Can deepfakes ever be legal?
Yes, in some contexts such as parody, satire, film production, or authorized marketing, deepfakes can be legally used. The problem arises when they are used without consent or with intent to harm, deceive, or manipulate.
Q6: What reforms are being recommended for India?
Experts suggest introducing AI-specific legislation, mandating watermarking of AI-generated content, holding platforms liable under an enhanced duty-of-care model, and establishing special cyber tribunals for faster resolution of digital harm cases.
