Legal Responses to Deepfakes and AI-Generated Misinformation in India: Challenges and Emerging Regulatory Frameworks


Author: Hitesh Dhamat, National Law University Tripura


To the Point


The rapid proliferation of deepfakes and AI-generated misinformation in India has exposed critical gaps in the country’s legal and regulatory frameworks. Deepfakes, which leverage advanced AI to create or manipulate media, have become a tool for malicious actors to spread false narratives, commit financial fraud, invade privacy, and undermine democratic processes. High-profile incidents involving celebrities, politicians, and public figures have underscored the urgency of addressing these technologies. India’s existing legal arsenal, while partially applicable, struggles to keep pace with the scale and sophistication of AI-driven harms.


Challenges in Addressing Deepfakes in India
Absence of Dedicated Legislation: India lacks a specific statute targeting deepfakes or AI-generated misinformation. The IT Act, 2000, addresses related offenses under Sections 66D (punishment for cheating by personation using computer resources) and 66E (violation of privacy), while the IPC covers cheating (Section 420), forgery (Section 468), and defamation (Section 499). However, these provisions predate AI technologies and are ill-equipped to address the unique characteristics of deepfakes, such as their viral dissemination and the difficulty of attributing them to their creators.
Jurisdictional Complexities: Deepfakes often cross national borders, raising questions about which legal and enforcement authority applies. A deepfake created in one country and hosted on a server in another, yet impacting Indian citizens, creates a legal quagmire. India’s Mutual Legal Assistance Treaties (MLATs) with other nations are often slow and ineffective for cybercrimes involving AI.

Enforcement Gaps and Intermediary Liability: Section 79 of the IT Act provides online intermediaries with safe harbour protection, insulating them from responsibility for user-generated content as long as they take prompt action to remove illegal materials after being notified. The IT Rules, 2021, mandate removal of harmful content, including deepfakes, within 36 hours of reporting. However, inconsistent enforcement and platforms’ reliance on automated moderation often fail to address deepfakes effectively.


Impact on Electoral Integrity: The 2024 Lok Sabha elections highlighted the disruptive potential of deepfakes, with manipulated videos of political leaders and celebrities spreading misinformation. Fact-checking units struggled to counter these narratives, and the Election Commission of India (ECI) issued advisories, but enforcement remained limited.


Societal and Cultural Sensitivities: In India’s diverse socio-cultural context, deepfakes targeting women, marginalized communities, or religious figures can inflame tensions and incite violence. Non-consensual deepfake pornography, often targeting women, raises significant gender-based concerns, with victims facing social stigma and limited legal recourse.

Emerging Regulatory Frameworks in India
India is grappling with the deepfake challenge through a combination of existing laws, proposed legislation, judicial interventions, and industry-led initiatives. While a comprehensive framework is still under development, several measures signal progress:

IT Act and IT Rules, 2021: The IT Act’s Sections 66D and 66E address impersonation and privacy violations, respectively, while Section 67 penalizes obscene content, which may apply to non-consensual deepfake pornography. The IT Rules, 2021, under Rule 3(1)(b), require intermediaries to remove content that is defamatory, obscene, or invasive of privacy within 36 hours. The Ministry of Electronics and Information Technology (MeitY) issued an advisory in December 2023 directing platforms to promptly identify and remove deepfakes, warning that non-compliance could result in the loss of safe harbour protection.


Proposed Digital India Act (DIA): The DIA, still in the consultation phase as of July 2025, aims to replace the IT Act and address emerging technologies like AI. It is expected to include provisions criminalizing malicious deepfakes, imposing fines on creators and platforms, and mandating transparency for AI-generated content. Public consultations have emphasized technology-neutral rules to ensure future relevance.


Judicial Activism: Indian courts have creatively applied existing laws to address deepfakes. The Delhi High Court’s ruling in Anil Kapoor’s case (2023) recognized personality rights as a protectable interest against deepfake misuse, setting a precedent for civil remedies. Courts have also granted injunctions and damages under tort law for defamation and privacy violations.


Industry and Civil Society Initiatives: The Misinformation Combat Alliance’s Deepfakes Analysis Unit (DAU), launched in 2024, enables public reporting of suspected deepfakes via WhatsApp, enhancing fact-checking during elections. Tech giants like Meta, Google, and Microsoft are investing in detection tools, watermarking technologies, and content labeling to flag AI-generated media. For example, Google’s SynthID detects AI-generated content by attaching digital watermarks.


Election Commission Guidelines: In 2024, the ECI issued guidelines advising political parties to abstain from using deepfakes during campaigns. It also collaborated with fact-checkers and platforms to monitor and remove misleading content, though enforcement challenges persist.
Public Awareness Campaigns: MeitY and the Press Information Bureau (PIB) have launched campaigns to educate citizens about deepfakes, encouraging critical media consumption. Prime Minister Narendra Modi has publicly highlighted the risks of deepfakes, urging vigilance.
International Collaboration: India’s participation in the Global Partnership on Artificial Intelligence (GPAI) and engagement with bodies like the OECD reflect efforts to adopt global best practices. The EU’s AI Act, with its risk-based approach to regulating AI systems, serves as a model for India’s proposed DIA.

The Proof


The scale of the deepfake problem in India is evident from high-profile incidents and empirical data. In November 2023, a deepfake video of actress Rashmika Mandanna, depicting her in a compromising situation, went viral, prompting arrests under the IT Act and public outrage. In 2024, deepfakes of industrialists Ratan Tata and Narayana Murthy falsely endorsing investment scams defrauded victims of millions, highlighting financial risks. During the 2024 Lok Sabha elections, manipulated videos of Bollywood actors Aamir Khan and Ranveer Singh criticizing political leaders, and a deepfake “resurrecting” the late Tamil Nadu Chief Minister J. Jayalalithaa, underscored the threat to electoral integrity. A deepfake of Home Minister Amit Shah falsely claiming reservation policy changes further fueled controversy, leading to arrests.

A 2023 survey by LocalCircles found that 86% of Indian respondents believe deepfakes pose a significant threat to elections, while 72% reported encountering AI-generated misinformation online. The Deeptrust Alliance’s 2020 report warned that accessible AI tools, such as open-source GAN models, have democratized deepfake creation, with India ranking sixth globally in deepfake susceptibility. Legal scholars like Apar Gupta and Shehnaz Ahmed argue that India’s fragmented legal framework struggles to address the speed and scale of AI-driven harms, necessitating urgent reforms. The Reserve Bank of India (RBI) reported a 15% rise in cyber-enabled financial frauds in 2024, with deepfakes contributing significantly.
Globally, India’s challenges mirror those of other jurisdictions. The EU’s AI Act (2024) classifies deepfakes as high-risk AI applications, mandating transparency and risk assessments. China’s 2022 regulations require labeling of synthetic media, while Australia’s Online Safety Act 2021 targets harmful content, including deepfakes. India’s proposed DIA aims to adopt similar principles, but delays in its enactment highlight legislative inertia.

Abstract


The advent of deepfakes and AI-generated misinformation, powered by sophisticated generative artificial intelligence (AI) technologies such as Generative Adversarial Networks (GANs), has introduced unprecedented challenges to India’s legal, social, and democratic frameworks. Deepfakes—hyper-realistic manipulated audio, video, or images—have been weaponized to perpetrate fraud, defamation, non-consensual pornography, and election-related misinformation, threatening individual privacy, public trust, and democratic integrity. India’s legal system, primarily governed by the Information Technology (IT) Act, 2000, the Indian Penal Code (IPC), and the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (IT Rules), provides limited recourse but lacks specificity for AI-driven harms. This article comprehensively examines the legal responses to deepfakes in India, analyzing enforcement challenges, jurisdictional complexities, and the evolving regulatory landscape.
Case Laws

Anil Kapoor v. Simply Life India & Ors. (2023, Delhi High Court): The court granted an ex-parte injunction restraining the defendants from using deepfake videos to exploit Kapoor’s likeness for commercial purposes. Recognizing personality rights under tort law, the court set a precedent for protecting public figures from AI-driven misuse, emphasizing the right to publicity.
Rashmika Mandanna Deepfake Case (2023, Delhi): Four individuals were arrested under Sections 66D and 66E of the IT Act for creating and disseminating a non-consensual deepfake video of the actress. The case highlighted the applicability of cybercrime laws but exposed delays in tracing perpetrators across platforms.
Aamir Khan & Ranveer Singh Deepfake Case (2024, Mumbai): Mumbai Police registered FIRs under IPC Sections 420 (cheating), 468 (forgery), and IT Act provisions against unknown perpetrators for deepfake videos falsely depicting the actors endorsing the Congress party during the 2024 elections. The case underscored the need for faster response mechanisms.
Amit Shah Deepfake Case (2024, Delhi): Two individuals were arrested for a manipulated video of the Home Minister falsely claiming changes to reservation policies. Charged under IT Act Sections 66C and 66D, and IPC Sections 153A (promoting enmity) and 468, the case prompted the ECI to issue stricter guidelines on AI misuse.
Vineeta Singh v. Unknown (2024, Delhi): The Shark Tank India judge obtained an injunction against a deepfake video falsely endorsing a health product, invoking privacy and defamation laws. The case highlighted the growing misuse of deepfakes in commercial fraud.

Conclusion


Deepfakes and AI-generated misinformation pose a profound challenge to India’s legal system, societal harmony, and democratic processes. While the IT Act, IPC, and IT Rules provide a foundation for addressing these harms, their limitations in tackling AI-specific issues are glaring. The proposed Digital India Act, with its focus on technology-neutral regulation, offers a promising path forward but must be expedited to address the rapid evolution of AI technologies. Judicial interventions, such as those protecting personality rights, demonstrate the adaptability of existing laws, but enforcement gaps and jurisdictional complexities persist. Industry initiatives like the Deepfakes Analysis Unit and global collaboration through GPAI are critical steps, yet they must be complemented by robust public awareness campaigns and forensic capacity-building. To balance regulation with constitutional guarantees of free speech, India must adopt a harm-focused, transparent framework inspired by global models like the EU’s AI Act. Policymakers should prioritize criminalizing malicious deepfakes, strengthening intermediary accountability, and fostering international cooperation to combat cross-border threats. By integrating legal, technological, and societal measures, India can effectively mitigate the risks of deepfakes while fostering responsible AI innovation.

FAQs

What laws in India currently address deepfakes?
The IT Act, 2000 (Sections 66D, 66E, 67), IPC (Sections 420, 468, 499), and IT Rules, 2021, apply to deepfake-related offenses like impersonation, privacy violations, and defamation. However, no dedicated deepfake law exists, creating enforcement gaps.

Can social media platforms be held liable for deepfakes?
Platforms enjoy safe harbour under Section 79 of the IT Act but risk liability if they fail to remove deepfakes within 36 hours of reporting, as mandated by the IT Rules, 2021. Inconsistent enforcement remains a challenge.

How are deepfakes affecting Indian elections?
Deepfakes have been used to spread misinformation, mock candidates, and “resurrect” deceased leaders, as seen in the 2024 Lok Sabha elections. The ECI has issued guidelines, but enforcement is limited by detection challenges.

What is the proposed Digital India Act, and how will it address deepfakes?
The DIA, set to replace the IT Act, aims to regulate AI-driven harms through fines, criminal penalties for malicious deepfakes, and transparency mandates. It is still under consultation as of July 2025.

How can individuals protect themselves from deepfakes in India?
Individuals can report deepfakes via platforms’ grievance mechanisms, the National Cybercrime Reporting Portal, or the DAU’s WhatsApp tipline. Legal remedies include civil suits for defamation or privacy violations.

Are there legitimate uses of deepfakes in India?
Yes, deepfakes used for satire, entertainment, or consensual creative purposes are legal, provided they do not violate privacy, defame, or deceive. Courts have distinguished between harmful and benign uses.

How is India addressing deepfakes internationally?
India collaborates with the GPAI and studies frameworks like the EU’s AI Act to develop global standards. Cross-border enforcement remains challenging due to jurisdictional differences.

What role does public awareness play in combating deepfakes?
Government campaigns by MeitY and the PIB, along with media literacy initiatives, encourage citizens to verify content and report deepfakes, reducing their societal impact.
