
Privacy Concerns Around AI-Generated Images and Deepfakes in India

Author: Gargi Koreti, ILS College, Pune


To the Point

Deepfakes and synthetic images are no longer uncommon digital tricks; they are now a pervasive daily threat. This technology allows for the rapid creation and spread of explicit content using anyone’s likeness, of fabricated political speeches, and of widespread misinformation. Although India’s existing legal framework attempts to address these harms through privacy rights, IT laws and criminal statutes, none of these regulations was designed to handle AI-powered manipulation. Consequently, the legal system struggles to keep pace with technology that evolves faster than regulation can respond.

Use of Legal Jargon

The legal difficulties associated with deepfakes and AI-generated images in India are examined in this article through the lens of key legal doctrines. These concepts include informational privacy, reasonable expectation of privacy, and consent-based processing. The discussion also covers data fiduciary duties, criminal impersonation, obscenity, and defamation. Furthermore, the article explores the role of electronic evidence, the criminal elements of mens rea and actus reus, safe-harbour liability and the specific issue of morphed content.


The Proof

AI systems today can generate realistic photos, videos and voice clips without needing more than a single image or short audio sample. These deepfakes are hard to distinguish from real footage, especially when they use high-quality facial mapping and voice cloning. India has already seen:

Gendered Abuse and Non-Consensual Explicit Content
Prevalence and Targeting: The overwhelming majority of deepfakes are pornographic, with women, including celebrities and private individuals, being the primary targets.
High-Profile Cases: Incidents involving prominent Indian public figures, such as actresses Rashmika Mandanna and Kajal Aggarwal, whose faces were non-consensually superimposed onto explicit content, have caused national uproar. These cases highlight the immediate and significant harm to victims’ privacy, reputation and mental health, prompting government intervention.
The ‘Nudify’ Blackmail Threat: The availability of “nudify” apps and AI tools creates a fear among Indian women of non-consensual digital manipulation. This fear fuels sextortion and blackmail and creates a “chilling effect,” causing women to restrict their online presence and digital self-expression out of self-protection.
Infringement of Personality and Image Rights
Violation of Control: Deepfakes directly violate an individual’s “personality rights”, that is, the right to control the personal and commercial use of one’s name, image and likeness.
Key Judicial Intervention: The Delhi High Court has consistently intervened, as seen in cases like Kamya Buch v. JIX5A & Ors. The court classified the circulation of explicit, AI-manipulated images as a “patent breach” of a petitioner’s fundamental rights to privacy and dignity under Article 21, often granting injunctions for content removal.
Impersonation and Economic Fraud: Beyond explicit content, deepfakes are leveraged for impersonation (e.g., Finance Minister Nirmala Sitharaman) to execute financial scams or create fake endorsements, directly violating privacy and leading to economic damage.
Systemic Threat to Democratic Integrity and Trust
The weaponization of deepfakes to spread misinformation, manipulate public perception and interfere with electoral processes demonstrates a systemic privacy threat, where individuals’ likenesses are distorted to undermine truth and public trust.

Abstract

This article analyses how the growing use of AI-generated images and deepfakes creates severe privacy concerns in India. It explores India’s current legal response using Article 21 protections, the Information Technology Act, the Indian Penal Code, the Digital Personal Data Protection Act and judicial interpretation. It highlights enforcement challenges, discusses the gaps created by outdated laws and examines relevant Indian case law that deals with morphed content, digital impersonation, and privacy violations. While India does not yet have dedicated deepfake legislation, the existing framework offers partial remedies. The article concludes by arguing for stronger, AI-specific regulatory measures.


Legal Position in India

1. Constitutional Safeguards

A. Privacy Under Article 21
After the landmark Justice K.S. Puttaswamy v. Union of India judgment, privacy is recognised as a fundamental right. Deepfakes directly interfere with this right by violating:

Informational privacy: misuse of a person’s biometric features, likeness, and identity
Decisional autonomy: using AI-generated visuals to manipulate personal choices
Personal dignity: especially when synthetic content is sexual or derogatory

The right to privacy includes protection against unauthorised digital replication. Using someone’s face to create fabricated videos without permission clearly breaches Article 21 protections.

B. Freedom of Speech vs. Protection of Reputation
Article 19(1)(a) protects free speech, but Article 19(2) permits reasonable restrictions on grounds including defamation, decency or morality, public order, and the sovereignty and integrity of India. Deepfakes often fall within these exceptions, particularly when used for political manipulation or character assassination.


2. Information Technology Act, 2000 and IT Rules, 2021

A. Section 66E – Violation of Privacy
This section penalises capturing, publishing or transmitting images of a person’s private area without consent. Courts have extended it to morphed and fabricated images that cause embarrassment or humiliation.

B. Sections 67 & 67A – Obscenity and Sexually Explicit Content
These sections are regularly invoked against non-consensual explicit deepfakes. Even when the image is entirely synthetic, publishing or transmitting sexually explicit material in electronic form constitutes an offence.

C. Section 66D – Impersonation Using Technology
This helps address deepfake-based fraud, such as using AI-generated voices to impersonate business executives, family members, or officials.

D. Section 69A – Blocking Power
The government can order platforms to remove or block deepfake content if it threatens public order, decency or national security.

E. Intermediary Liability – Section 79 & IT Rules 2021
Social media platforms must:
act on user complaints
remove harmful deepfakes within specific timelines
maintain due diligence protocols
Failing this, they may lose safe-harbour immunity and face legal action.

3. Indian Penal Code (IPC)
Even though the IPC predates AI, many sections are broad enough to cover deepfake harm.

A. Defamation (Sections 499–500)
Creating or sharing deepfakes that harm reputation counts as defamation, even when the content is fabricated.

B. Sexual Harassment and Voyeurism (Sections 354A & 354C)
Women are targeted disproportionately. Deepfake pornography or circulation of morphed intimate pictures often leads to charges under these sections.

C. Forgery and Harm to Reputation (Section 469)
This applies when synthetic content is created “with intent to harm reputation.”

D. Criminal Intimidation (Section 503)
Deepfakes used for blackmail, extortion, or threats fall under this.

4. Digital Personal Data Protection Act, 2023 (DPDPA)
This act is India’s newest data protection law and directly relates to AI misuse.

A. Consent-Based Processing
A person’s face, photo, or biometric data is personal data. Using it to generate synthetic content without consent violates DPDPA.

B. Data Fiduciary Obligations
Platforms and creators that determine how facial images and biometric data are processed act as data fiduciaries under the Act. They must:
prevent misuse
protect identity
ensure lawful processing
respond to user grievances

C. Penalties
The DPDPA prescribes heavy financial penalties for negligent or unlawful data processing.

Case Laws

India does not yet have dedicated judgments on deepfakes, but existing rulings on privacy, morphed content, and dignity provide strong judicial guidance.

1. Justice K.S. Puttaswamy v. Union of India (2017)
This landmark judgment anchors India’s privacy jurisprudence. Any non-consensual use of a person’s image can be challenged under this ruling.

2. Rupa Ashok Hurra v. Ashok Hurra (Supreme Court)
The Court emphasised personal dignity and reputation as essential aspects of Article 21.

3. Aveek Sarkar v. State of West Bengal (2014)
The Supreme Court replaced the Victorian-era Hicklin test with the “contemporary community standards” test for obscenity. Deepfake pornography spread online can be assessed under this test.

4. State of West Bengal v. Animesh Boxi (2018)
The court convicted the accused of circulating morphed, sexually explicit photos of a woman. Though not a deepfake case, it set a precedent for punishing harmful digital manipulation.

5. Sharat Babu Digumarti v. Govt. of NCT of Delhi (2017)
This case clarified the interplay between the IPC and the IT Act, holding that offences involving electronic content fall primarily under the IT Act.

6. Kerala High Court on Digital Morphing (S. Rajendran Case)
The court stressed that digital manipulation that harms dignity requires strict penal action, setting a foundation for deepfake-related cases.

Conclusion

AI-generated images and deepfakes represent a significant and rapid escalation of privacy threats in India. The ability to create sophisticated fabricated visuals, previously requiring specialized studios, is now accessible to anyone with a free app, raising profound concerns regarding individual autonomy, dignity, identity, and security.

While India’s current legal framework, including the Constitution, the IT Act, the IPC and the DPDPA, offers a degree of protection, these laws were not designed to address the specific complexities of synthetic media. Consequently, they are insufficient to fully manage the scale of this problem.

To effectively safeguard privacy against the effortless fabrication of visuals, India must strengthen its legislation, improve enforcement mechanisms, and increase public awareness. Deepfakes are fundamentally a human rights issue, not just a technical challenge, necessitating the evolution of the law alongside technological advancements.

FAQs

1. Is creating deepfakes illegal in India?
Not directly. But using them to harm someone, especially sexually, financially or politically, is punishable under IT Act and IPC provisions.

2. Can victims take legal action if their face is used in a deepfake?
Yes. They can file a complaint with the cyber cell or report through the National Cyber Crime Reporting Portal.

3. What charges apply for deepfake pornography?
Sections 66E, 67, 67A of the IT Act and Sections 354A, 354C, and 509 of the IPC.

4. Are platforms responsible for removing deepfakes?
Yes. Under the IT Act and IT Rules, platforms must remove harmful content once notified or risk losing protection.

5. Is India planning to create deepfake regulations?
The government has signalled interest in stronger AI regulation, but no dedicated law exists yet.
