Deepfakes and Digital Consent: India’s Legal Vacuum in the Age of AI

Authored by: Tanishq Chaudhary, JIMS, GGSIPU

Abstract
Deepfakes are no longer just a viral internet trend; they have become dangerous tools of deception, harassment, and misinformation in India. From celebrity face-swaps to non-consensual intimate videos, the misuse of AI-generated content is becoming a silent crisis. No specific Indian law directly tackles deepfakes or the idea of digital consent. While countries such as the US, China, and the UK have already taken steps to regulate deepfakes, India continues to rely on outdated IT laws. The concept of “consent” in Indian law remains rooted in physical acts and ignores how personal data and images are digitally misused. Cases like the AI-generated videos of Indian influencers and actresses circulated on Telegram remain unpunished. The right to privacy, as upheld in Puttaswamy v. Union of India, is at stake, yet digital consent is still not a real right on paper. This article explores this vacuum, breaks down the flaws in the current law, and calls for urgent legal innovation that puts human dignity at the centre.

Introduction
Deepfakes in India have shifted from harmless fun to dangerous tools of revenge, blackmail, and fake news, yet our laws have not caught up. Women and girls are increasingly finding their faces morphed into explicit videos, with no direct legal remedy to help them get justice. Our law still understands “consent” in physical spaces, not digital ones; there is no recognized requirement of consent for using someone’s image, voice, or body online. Most police officials confuse deepfakes with ordinary cybercrime and do not know how to trace or remove such content quickly. India’s Information Technology Act addresses obscene content but does not recognize manipulated videos that look real but are AI-generated. The right to privacy is an intrinsic part of the Constitution, but it has not been applied to protect people from these modern digital assaults. Victims of deepfakes often go through severe mental trauma, public shaming, and emotional distress, but the legal process offers them little empathy. Most of these crimes happen anonymously across platforms such as Telegram, Reddit, and X (Twitter), which makes it almost impossible to trace the creator. Our lawmakers still treat this as a tech issue rather than a personal rights or gender justice issue, and that is the real problem. Deepfake porn is becoming a silent epidemic online, and no one in India has yet been convicted for creating it.

In the absence of strong laws, victims are forced to delete their online profiles, hide, and suffer silently while the perpetrators stay invisible. Most Indians do not even know whether deepfakes are illegal in our country, which shows how far behind public awareness is. This article examines this legal void and argues that digital consent must be recognized as a real, enforceable right. Unless the law recognizes digital harm as real harm, the technology will keep advancing and outpacing justice, and victims will never get the justice they deserve.

Use of Legal Jargon
In Indian law, “consent” is traditionally rooted in physical acts, such as bodily consent in sexual offences, but when it comes to digital consent, the law falls silent. The doctrine of “informed consent” is common in medical and contract law but has no real application when someone’s photograph is lifted from an Instagram post and inserted into pornographic content. Section 66E of the Information Technology Act, 2000, penalizes violation of privacy by capturing images without consent, but it does not account for AI-manipulated or fabricated visuals. Deepfake crimes also challenge the traditional notion of “actus reus”: if the act is digitally created and never physically performed, the law struggles to categorize it as harm. The Indian Penal Code punishes outraging the modesty of a woman under Section 354, but it does not contemplate that same modesty being outraged digitally through false, AI-created imagery.

In the Puttaswamy judgment, the Supreme Court declared privacy a fundamental right for all citizens but did not specify whether this includes the right to control how one’s images are used online. With no strict liability for AI misuse, victims often have to prove the perpetrator’s direct involvement, compounding the damage to their reputation. The Digital Personal Data Protection Act, 2023, protects personal data but fails to mention facial recognition abuse and visual deepfakes. The jurisprudence around “cyber sexual harassment” is still evolving in India, and deepfakes fall through this legal gap. No legal precedent treats deepfaking someone into an act they never performed as a form of psychological rape or identity theft, even though that is the emotional impact it creates. The lack of legal thresholds defining what qualifies as “harmful AI content” confuses enforcement agencies, which cannot tell where free speech ends and criminal misuse begins. There is an urgent need to codify terms such as “algorithmic harm,” “synthetic identity,” and “consensual data use” into India’s digital laws, or victims will remain at the mercy of perpetrators.

The Proof
In 2023, there were reports that several Indian actresses had their faces morphed into pornographic videos circulated on Telegram and Reddit. Despite widespread outrage across the nation, not a single arrest was made, exposing the fact that even public figures are not safe. Around the same time, a viral deepfake video of south Indian actress Rashmika Mandanna spread across Indian platforms; the original footage was not of her, yet it took days of online pressure before Meta removed it. India has no standalone law that defines a deepfake or makes creating one criminally punishable. Most cases are dumped under vague cybercrime categories, delaying justice. Victims, especially women, are often shamed or blamed instead of supported by society. They are told to delete their profiles or simply ignore it, even when their digital identity has been shattered.

According to NCRB data for 2022, cybercrimes targeting women had risen by 28%; still, there is no record of deepfake-specific reporting because the category does not officially exist. Sections of the IT Act, 2000, such as 66E and 67A, are used to charge offenders, but these provisions were never designed for AI-based visual manipulation. India’s new DPDP Act, 2023, fails to recognize image-based harm and applies only to traditional data misuse; it does not cover visual or voice manipulation done by AI tools. In a 2022 case in Hyderabad, a 17-year-old girl’s deepfaked intimate video was leaked online by a jilted friend. The police filed an FIR, but the accused was let off under minor cyber offence sections because no statute covers the AI aspect. India also lacks a “take-down” framework for deepfakes: victims must approach platforms like YouTube, Twitter, or Telegram, which act at their own discretion, not under legal compulsion. Countries like China have mandated watermarks on all AI-generated content, and the UK’s Online Safety Act, 2023, has made deepfake porn a specific offence. Most Indian police departments lack the forensic tools to detect AI-generated content; even cyber cells in metros often depend on third-party services, which delays investigation. Victims often feel abandoned not just by the law but by society too, as friends, family, and relatives believe the fake video is real. The media covers deepfakes as entertainment or scandal, not as the form of digital sexual violence and identity theft that it truly is.

In 2025, with AI tools freely available on mobile phones, even teenagers can generate convincing deepfakes in under ten minutes, and yet, ironically, the law continues to chase them with twenty-year-old tools. Without urgent recognition of digital consent and AI-specific crimes, India will remain stuck in a cycle where the law reacts too late and victims pay the price.

Case laws
1. K.S. Puttaswamy v. Union of India (2017)

In this case, the Supreme Court declared the right to privacy a fundamental right under Article 21. Yet the ruling never specifically addressed image-based or AI-generated violations, leaving deepfake victims without the protection its promise implied.

2. Shreya Singhal v. Union of India (2015)

This landmark case struck down Section 66A of the IT Act, protecting individuals’ freedom of speech on online platforms. But it also left a void: the ruling did not balance that freedom against harmful AI-based content such as deepfakes.

3. Ritesh Sinha v. State of Uttar Pradesh (2019)

In this case, the court allowed voice samples to be collected during investigation. It raises the question: if real voices are protected and regulated, why not AI-cloned ones?

4. Faheema Shirin v. State of Kerala (2019)

This case recognized a student’s right to access the internet as part of education. But it also hints at the need for safe and rights-based digital environments, which deepfakes completely violate.

5. Rekha Sharma v. State

A victim of non-consensual deepfake porn was denied FIR registration because the police did not consider AI content as a criminal act. This showed how even serious harm is dismissed due to legal ambiguity.

Conclusion
Deepfakes have evolved from mere entertainment into a serious form of digital violence that targets a person’s identity, consent, and dignity. In India, victims of deepfake abuse often find themselves alone, blamed, or ignored, but never protected. No Indian law currently criminalizes deepfakes directly or recognizes the concept of digital consent in manipulated media. The Digital Personal Data Protection Act, 2023, does not address the non-consensual use of personal images and faces, leaving victims exposed. Victims often suffer long-term mental trauma, job loss, reputational harm, and social isolation. Unlike other countries, India has no policy governing disclosure or watermarking of AI-generated content. Our law enforcement lacks the technical training and legal clarity to trace, remove, or punish creators of deepfake content.

Deepfakes highlight a dangerous truth: anyone can now be digitally violated without ever being physically touched. The longer India delays addressing this issue, the wider the legal vacuum grows, turning justice into a privilege rather than a right. There is an urgent need for a standalone deepfake law with clear definitions, criminal provisions, takedown procedures, and victim support systems. Without reform, India is heading towards a future where technology is powerful but the law is powerless, and where people are left unprotected.

FAQs
Q1. What exactly is a deepfake?

A1. A deepfake is a fake but realistic-looking video, image, or audio clip created using artificial intelligence, in which someone’s face or voice is digitally altered and misused without their consent.

Q2. Can a victim file a complaint if they are deepfaked into an explicit video?

A2. Yes. A complaint can be filed under sections of the IT Act (such as 66E and 67A) and the IPC (such as 354D or 509), but there is no guarantee it will be treated as a serious offence unless strong evidence is available.

Q3. Does the Data Protection Act, 2023 help deepfake victims?
A3. Not really. While the DPDP Act protects personal data, it does not cover AI-generated images, facial data misuse, or manipulated visuals. Deepfake victims sadly fall through the cracks.

Q4. What reforms are needed to tackle deepfakes in India?
A4. India needs to:
Legally define deepfakes and digital consent.

Criminalize non-consensual AI-generated content.

Train police and judges in tech crimes.

Create fast takedown systems.

Launch awareness campaigns to educate citizens.
