DEEPFAKE TECHNOLOGY AND LEGAL CHALLENGES IN INDIA


Author: Manoj Kumar Yadav, Narayan Uccha Shiksha Sansthan Law College, Jhalwa, Prayagraj.

TO THE POINT


The rapid development of Artificial Intelligence (AI) has fundamentally changed the digital ecosystem by redefining how information is created, distributed, and perceived. One of the most disruptive uses of AI is deepfake technology, a form of synthetic media generated by advanced machine-learning algorithms that can produce remarkably convincing audio, video, and visual representations of people. Using techniques such as Generative Adversarial Networks (GANs) and deep neural networks, deepfakes can seamlessly superimpose faces, alter voices, and fabricate expressions with a degree of realism that often makes detection by the human eye nearly impossible.
Deepfake technology has legitimate and socially useful applications, such as in film, educational simulations, digital restoration, and accessibility aids, but its criminal misuse raises unprecedented legal and constitutional issues. The ability to create convincingly fake content poses substantial risks to public order, electoral integrity, privacy rights, dignity, reputation, and national security. In a country like India, where millions rely on social media as their primary information source and internet access is expanding rapidly, deepfakes threaten democratic discourse and public trust.
India has seen a marked rise in deepfake incidents involving non-consensual explicit content, financial fraud, political misinformation, impersonation, and defamation of public authorities and private individuals alike. High-profile cases involving politicians and celebrities have drawn attention to the emotional, reputational, and psychological harm such content causes, particularly to women. Notwithstanding the gravity of these harms, India presently lacks a dedicated legal framework to regulate deepfake technology or to criminalize its malicious creation and distribution.

USE OF LEGAL JARGON


1. The Information Technology Act, 2000

The primary provisions relied upon are the following:
Section 66C (Identity Theft): Punishes unauthorized use of an individual’s digital identity, including deepfake impersonation.
Section 66D (Cheating by Personation): Punishes deceiving persons through impersonation using computer resources or communication devices.
Sections 67/67A (Obscene and Sexually Explicit Material): Punish the dissemination of pornographic or sexually explicit deepfake content with fines and imprisonment.
Section 79 & IT Rules 2021 (Intermediary Liability): Intermediaries forfeit their safe harbor protection if they fail to act appropriately after obtaining knowledge of illicit deepfake content.
2. The Indian Penal Code (IPC) and Bharatiya Nyaya Sanhita (BNS), 2023

The following IPC/BNS provisions are invoked:
Sections 499–500 (Defamation): Apply when deepfakes imply wrongdoing and harm a person’s reputation.
Sections 463/469 (Forgery and Forgery for Harming Reputation): Treat the manipulation of digital records as forgery.
Section 354C (Voyeurism): Applies to non-consensual simulated explicit content.
Sections 503 (Criminal Intimidation) and 420 (Cheating): May apply to extortion or fraud attempts using deepfakes.

3. The Digital Personal Data Protection Act of 2023 (DPDP Act)

The DPDP Act requires that personal data, including biometric identifiers such as voice and facial data, be processed only with the individual’s consent. Deepfakes that employ biometric traits without authorization may therefore breach its provisions. However, the Act’s enforcement is limited because it does not adequately address synthetic generation, where such data is algorithmically derived from publicly available information.

4. The Copyright Act of 1957

The Copyright Act protects original works and derivative rights, but it does not currently treat AI-generated deepfakes as “original works.” Determining ownership and infringement in deepfake cases is therefore extremely difficult, especially when the manipulated content originates from a protected source.

THE PROOF

1. Digital Proof Verification

Determining whether a digital recording is genuine or an AI-generated deepfake requires extensive digital forensics. Under the Indian Evidence Act (and now the Bharatiya Sakshya Adhiniyam, 2023), electronic evidence must be supported by credible forensic analysis and an admissibility certificate. However, deepfakes are designed to obfuscate source metadata and disrupt the chain of custody, which complicates judicial review.

2. Intermediary Liability and Free Speech

Section 79 of the IT Act and the related IT Rules provide safe harbor to intermediaries that comply with takedown and due-diligence obligations. Proposed amendments requiring platforms to identify and label AI-generated content risk narrowing the scope of safe harbor and conflicting with the Supreme Court’s holding in Shreya Singhal v. Union of India (2015) that intermediaries should not be subjected to general monitoring obligations.

3. Comparative International Approaches

India’s legislative gaps contrast sharply with the United States’ TAKE IT DOWN Act, which expressly prohibits the non-consensual dissemination of intimate imagery, including AI-generated deepfakes, and mandates that intermediaries remove such content within defined deadlines. Although such international developments are not binding on India, they offer normative frameworks for future domestic legislation.

ABSTRACT


Deepfake technology, a sophisticated form of synthetic media created using artificial intelligence (AI) and machine learning (ML), has caused a paradigm shift in how digital content is created and consumed. Deepfakes are hyper-realistic audio, video, and image manipulations that can convincingly depict individuals saying or doing things they never did. Although deepfakes have legitimate applications in accessibility, education, and entertainment, they raise a range of moral, legal, and constitutional concerns. This is particularly true in the Indian legal system, which currently lacks a clear regulatory framework and relies on outdated cybercrime legislation. This article examines the nature of deepfake technology, identifies legislative gaps, analyses relevant court decisions and statutory provisions, highlights noteworthy Indian case references, and evaluates the evidentiary and procedural challenges of prosecuting deepfake misuse. The discussion concludes with policy recommendations and answers to frequently asked questions on the subject.

CASE LAWS


1. Ahmedabad Civil Court (2025)

A civil court in Ahmedabad recently ordered the Indian National Congress and its leaders to remove a deepfake video of political figures that was found to be disparaging and injurious to their reputations, and directed social media platforms to take the video down within a specified time. The court emphasized the content’s potential for reputational harm.


2. The Bombay High Court Case Concerning Akshay Kumar’s Personality Rights

The Bombay High Court ordered the removal of deepfake videos that violated actor Akshay Kumar’s personality rights. Because AI-generated images and videos are misleading and could endanger public safety and order, the court restrained both known and unknown persons from hosting or sharing them.

3. Rashmika FIR for Deepfake (2023–2024)

In response to a widely circulated non-consensual deepfake of actress Rashmika Mandanna, police registered an FIR under IPC Sections 465 and 469 (forgery) and IT Act Sections 66C and 66E (identity theft and privacy violation). Arrests in January 2024 showed that law enforcement can use existing legislation to penalize deepfake misuse.


4. Novel Legislative Initiatives

Towards the end of 2025, a Private Member’s Bill was introduced in the Lok Sabha to enforce accountability and safeguard individuals against deepfake misuse. Although such bills rarely become law, the measure demonstrates growing legislative attention to the issue.

CONCLUSIONS


Deepfake technology is one of the most intricate and disruptive challenges that artificial intelligence poses to modern judicial systems. Its rapid spread in India has exposed significant weaknesses in the nation’s existing cyber-legal framework, which was unprepared to regulate synthetic media produced by artificial intelligence. Deepfakes have legitimate uses in technology, education, and the creative arts, but their misuse has resulted in grave violations of democratic integrity, privacy, dignity, and reputation. India’s current legal response remains fragmented and reactive, resting on a patchwork of provisions under the Information Technology Act, 2000, the Bharatiya Nyaya Sanhita, 2023, and ancillary laws governing defamation, obscenity, impersonation, and data protection. Courts have shown willingness to act in deepfake cases by granting injunctive relief and recognizing personality and privacy rights, but these interventions are largely case-specific and remedial rather than preventive. The lack of a precise legal definition of “deepfake” and the absence of a clear criminal prohibition on its harmful creation and distribution continue to hinder effective enforcement and deterrence.

FAQS


Q1. Are deepfakes illegal in India?
A1. Although deepfakes are not expressly prohibited, they may be prosecuted under existing laws covering defamation (IPC), identity theft, obscenity, and privacy offences (IT Act).

Q2. What are the effects of spreading harmful deepfakes?
A2. Penalties and imprisonment vary by offence: up to three years for identity theft, up to two years for defamation, and up to five to seven years for obscene or sexually explicit content.

Q3. Are social media firms liable?
A3. Intermediaries risk losing their safe harbor under Section 79 and the IT Rules if they fail to act promptly upon obtaining knowledge of illicit deepfake content.


Q4. Are there notable instances of deepfakes in India?
A4. The Ahmedabad civil court decisions mandating the removal of defamatory deepfakes, the Bombay High Court’s personality rights injunction, and the Rashmika Mandanna FIR are significant precedents.

Q5. How may victims seek redress?
A5. Victims may file FIRs, pursue civil suits for damages and injunctions, and approach cyber-crime cells with forensic evidence to press criminal charges under the applicable statutes.
