Author: Manisha. K, Christ Academy Institute of Law
To the Point
Deepfake technology, which utilizes artificial intelligence to create hyper-realistic but fabricated audio-visual content, has become an alarming concern in India’s legal and ethical landscape. While originally used for entertainment and satire, its misuse has expanded into malicious territories, including misinformation, political manipulation, identity theft, cyberstalking, pornography, and fraud. Deepfakes now threaten not only personal privacy and reputation but also democratic institutions and judicial processes.
In India, where digital penetration is vast but digital literacy is often lacking, deepfakes can cause irreparable harm. A manipulated video of a public figure can incite communal tensions or swing electoral results. Similarly, deepfake pornography, especially targeting women, has emerged as a disturbing trend, violating dignity and consent. Victims find themselves without clear legal recourse, given the novelty of the technology and the lack of express provisions in Indian statutes.
Although India does not yet have a dedicated law to tackle deepfakes, various provisions under the Indian Penal Code (IPC), the Information Technology Act, 2000 (IT Act), and the recently notified Bharatiya Nyaya Sanhita (BNS), 2023, can be invoked to address certain manifestations of deepfakes. Sections dealing with defamation, forgery, obscenity, and identity theft are commonly used, albeit with limited effectiveness.
Further complicating the issue are ethical dilemmas. Deepfakes blur the line between truth and fabrication, challenging the fundamental right to information. In an age where evidence, especially visual, has traditionally been a benchmark of truth, the ability to fabricate realistic videos undermines the credibility of genuine content. This poses a serious threat to journalism, courtroom evidence, and public discourse.
The ethical implications also extend to informed consent and autonomy. When an individual’s likeness or voice is cloned without permission, particularly in pornographic or defamatory content, it infringes upon their right to dignity and personal liberty under Article 21 of the Indian Constitution. The victims not only suffer emotional trauma but are also left vulnerable to public ridicule and social stigma.
Addressing deepfake-related harms calls for a multi-pronged approach: stronger legislation, AI detection tools, platform accountability, and public awareness. While the IT Rules, 2021, mandate intermediary due diligence, they are insufficient to deal with the sophistication and viral nature of deepfakes. A comprehensive legal framework that clearly defines and criminalizes deepfakes, coupled with privacy-focused data protection laws, is urgently needed.
Use of Legal Jargon
Deepfake technology intersects with multiple legal doctrines, and its analysis invokes critical jurisprudential concepts including mens rea, actus reus, and res ipsa loquitur, especially in the context of cybercrimes. At its core, the unauthorized creation or dissemination of deepfake content often constitutes a prima facie violation of one’s right to privacy and dignity, protected as a fundamental right under Article 21 of the Constitution of India, as interpreted in Justice K.S. Puttaswamy v. Union of India (2017).
Under the Information Technology Act, 2000, deepfakes may attract liability under Section 66C (identity theft), Section 66D (cheating by personation using computer resources), and Section 67 (publishing or transmitting obscene material in electronic form). These provisions establish the actus reus through unauthorized use of digital images or voice synthesis, and mens rea through the malicious intent to deceive, harass, or defame.
From a tort law perspective, deepfakes can be seen as instances of defamation per se, where falsified digital content injures reputation without requiring proof of actual damage. The maxim falsus in uno, falsus in omnibus, though not applied as a rigid rule by Indian courts, becomes increasingly relevant when adjudicating the evidentiary value of video or audio evidence that may have been manipulated. The integrity of digital evidence is thus jeopardized, demanding stricter evidentiary scrutiny under the Indian Evidence Act, 1872, particularly Sections 65A and 65B dealing with electronic records.
Moreover, the concept of vicarious liability may be applied to digital intermediaries and platforms that host or disseminate deepfake content, especially in light of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, which impose a duty of due diligence on intermediaries. However, the safe harbour provision under Section 79 of the IT Act shields these entities from liability unless they fail to act upon receiving actual knowledge or court orders.
The recent enactment of the Bharatiya Nyaya Sanhita (BNS), 2023 aims to modernize criminal jurisprudence. Though not deepfake-specific, certain sections on cyber harassment, defamation, and technological misuse could be interpreted to encompass deepfake-related offences.
In essence, the legal terrain surrounding deepfakes is still evolving, and the absence of sui generis legislation leaves victims to rely on a patchwork of legal doctrines and penal provisions. The jurisprudence must be expanded to comprehensively criminalize deepfakes as unique digital harms rooted in deception, coercion, and the erosion of consent.
The Proof
The threat posed by deepfakes is no longer hypothetical; it is evidenced by a series of real-life incidents that have caused measurable harm to individuals and democratic institutions in India. A recent and widely publicized instance occurred in 2024 when a doctored video of an Indian actress was circulated online, depicting her in a compromising position. Despite the video being proven fake through forensic analysis, the damage to her reputation and mental well-being was severe and irreversible. She faced relentless online harassment, and the available legal remedies proved too slow to deliver timely justice.
Similarly, during the 2023 Karnataka Assembly elections, a deepfake video of a senior political leader making communal remarks surfaced and spread across social media platforms. Though it was debunked eventually, the video had already influenced voter sentiment in certain constituencies, illustrating how deepfakes can compromise the integrity of electoral processes and disrupt public order.
In the realm of corporate fraud, deepfake audio was used in a 2022 cyberattack on an Indian multinational firm. An employee received a call with a voice that mimicked the CEO, instructing him to make an urgent transfer of funds, which led to a financial loss of over ₹12 crore. The impersonation, enabled by deepfake audio, demonstrated how AI-generated fraud is breaching even tightly secured financial systems.
These cases demonstrate that deepfakes are not mere digital pranks but tools of defamation, political sabotage, fraud, and coercion. Despite the invocation of provisions of the IT Act and IPC, the existing legislative framework struggles with enforcement owing to jurisdictional limitations, the absence of a statutory definition of deepfakes, and the technical sophistication required for forensic verification.
Thus, the evidence points not only to the proliferation of deepfake technology but to the urgent need for India to bridge the legislative and enforcement gap to protect individual rights, institutional credibility, and societal trust.
Abstract
Deepfake technology, powered by artificial intelligence and machine learning algorithms, enables the creation of hyper-realistic but entirely fabricated images, videos, and audio recordings. While the technology has legitimate applications in entertainment, education, and accessibility, its misuse poses a severe threat to individual privacy, democratic integrity, and societal trust. In India, the proliferation of deepfakes has exposed glaring gaps in the legal framework, where victims of defamation, sexual exploitation, and identity theft often find little to no remedy due to the absence of a specific statute addressing the misuse of synthetic media.
This article explores the growing legal and ethical concerns surrounding deepfakes in the Indian context. It highlights the potential for harm, particularly for women, public figures, and institutions, arising from the unauthorised manipulation of biometric likenesses. Existing provisions under the Information Technology Act, 2000, the Indian Penal Code, and the recently introduced Bharatiya Nyaya Sanhita (BNS), 2023, offer limited recourse by relying on general offences such as defamation, obscenity, cheating, and cyber harassment. However, these laws do not directly address the unique challenges posed by deepfakes, including the complex questions of consent, evidentiary reliability, and platform liability.
The article further delves into ethical concerns, such as the violation of informed consent, the erosion of truth in media and law, and the mental trauma inflicted upon victims of non-consensual synthetic content. It makes a case for urgent legal reform to criminalise the malicious use of deepfake technology, create accountability mechanisms for tech platforms, and establish forensic protocols for authenticating digital media.
Case Law
1. Justice K.S. Puttaswamy (Retd.) v. Union of India (2017) 10 SCC 1
Held: The Supreme Court declared the right to privacy as a fundamental right under Article 21 of the Constitution.
Relevance: Deepfakes that misuse a person’s image, voice, or likeness without consent amount to a direct infringement on their privacy. This case forms the constitutional bedrock for any future litigation involving deepfakes.
2. Shreya Singhal v. Union of India (2015) 5 SCC 1
Held: Section 66A of the IT Act was struck down as unconstitutional for being vague and arbitrary. However, the judgment upheld the importance of intermediary liability under Section 79 and outlined the obligations of platforms in removing unlawful content when notified.
Relevance: This case is crucial in determining the responsibility of tech platforms and social media companies in taking down deepfake content after receiving actual knowledge.
3. Khushboo v. Kanniammal & Anr. (2010) 5 SCC 600
Held: The Court emphasized freedom of speech and discouraged misuse of criminal defamation laws to suppress dissent or personal opinions.
Relevance: This case highlights the delicate balance courts must strike between regulating harmful deepfakes and protecting free expression.
4. Re: Prajwala Letter Case (2018)
Held: The Supreme Court issued directives for the proactive removal of sexually explicit content online and emphasized the need for automated content monitoring technologies by platforms.
Relevance: It anticipates the deepfake pornographic menace and supports proactive technological interventions.
5. Bharatiya Nyaya Sanhita, 2023 (BNS) – Sections 73 & 74
Provision: These new sections relate to cyber harassment, defamation, and sexual exploitation using digital means.
Relevance: Although not judicial precedents yet, these sections offer statutory tools that may be invoked in future deepfake litigation.
Conclusion
Deepfake technology, though born from the advances of artificial intelligence and digital innovation, has swiftly turned into a double-edged sword, empowering creative expression on one hand while enabling malicious intent on the other. In India, where digital reach has outpaced digital literacy, the misuse of deepfakes presents a formidable challenge to individual rights, democratic values, and legal systems. The manipulation of images, voices, and identities not only infringes upon privacy but also violates the principles of consent, autonomy, and human dignity protected under Article 21 of the Constitution.
The existing legal framework, comprising the Information Technology Act, 2000, the Indian Penal Code, and now the Bharatiya Nyaya Sanhita, 2023, offers partial safeguards by criminalising acts like cyberstalking, identity theft, and defamation. However, these laws lack the specificity required to address the unique technical and evidentiary nuances of deepfakes. Additionally, the absence of a robust data protection regime and digital forensic infrastructure further weakens the state’s ability to prosecute deepfake-related offences.
Ethically, the unregulated use of deepfakes undermines trust in visual evidence, distorts the concept of truth, and endangers victims, particularly women, who are disproportionately targeted through non-consensual explicit content. The risks also extend to institutions, elections, and judicial proceedings, making deepfakes not just a personal or private issue but a public and national concern.
The way forward must be a combination of legislative reform, technological innovation, and public education. India needs a dedicated legal framework to define and criminalise deepfakes, establish platform accountability, and protect digital rights. Simultaneously, public awareness campaigns must inform citizens about the dangers of synthetic media and their legal rights. Regulatory frameworks like a Digital India Act or a revived Personal Data Protection Bill must incorporate safeguards against deepfake misuse.
FAQs
1. What is a deepfake and why is it legally concerning?
A deepfake is a digitally altered video, image, or audio that realistically mimics a person’s likeness or voice. It is legally concerning because it can be used to defame, impersonate, or harass individuals, infringing on rights to privacy and consent.
2. Is there any specific law in India that criminalizes deepfakes?
No. India currently lacks a specific statute targeting deepfakes. However, related offences can be prosecuted under the IT Act, IPC, and the Bharatiya Nyaya Sanhita, 2023.
3. Can victims of deepfake pornography seek legal redress?
Yes, victims can file complaints under sections dealing with cyberstalking, defamation, obscenity, and identity theft. However, legal remedies are often slow and inadequate due to the lack of deepfake-specific laws.
4. Are social media platforms liable for hosting deepfake content?
Platforms have a duty to act upon receiving actual knowledge of unlawful content under the IT Rules, 2021. Section 79 of the IT Act grants them “safe harbour” from liability only so long as they act swiftly to remove such content once notified.
5. What reforms are needed to regulate deepfake technology in India?
India needs a comprehensive legal framework defining deepfakes, stricter intermediary obligations, stronger digital forensics infrastructure, and enhanced user awareness to combat this emerging threat effectively.