Author: Bhavika Dhuria, Chandigarh University
LinkedIn Profile: https://www.linkedin.com/in/bhavika-dhuria-6b20b7340
TO THE POINT:
Society is advancing digitally at such a pace that, paradoxically, the line separating the fake from the real has become almost nonexistent. A deepfake is an artificially fabricated video, image, or audio clip that convincingly reproduces a person's face, voice, or mannerisms. Imagine watching a video of a public figure saying something shocking, only to find out it never happened. That is the power, and the danger, of deepfakes. Created using advanced artificial intelligence (AI) and machine learning (ML) techniques, especially deep learning architectures like Generative Adversarial Networks (GANs), deepfakes can mimic people with an alarming degree of accuracy.
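For readers unfamiliar with how GANs work, the core adversarial idea can be illustrated with a deliberately tiny sketch: a one-parameter "generator" tries to produce numbers that imitate real data, while a logistic "discriminator" tries to tell real from fake, and each updates against the other. This is a toy in plain NumPy, nothing like a production deepfake model, but it shows the two-player training loop that real systems scale up to faces and voices:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples the generator must learn to imitate (mean 4).
real = rng.normal(loc=4.0, scale=1.0, size=256)

# Toy generator: g(z) = a*z + b maps random noise z to fake samples.
a, b = 1.0, 0.0
# Toy discriminator: d(x) = sigmoid(w*x + c) estimates P(x is real).
w, c = 0.1, 0.0

lr = 0.05
for step in range(400):
    z = rng.normal(size=256)
    fake = a * z + b

    # Discriminator step: ascend mean log d(real) + mean log(1 - d(fake)),
    # i.e. get better at calling real "real" and fake "fake".
    p_real = sigmoid(w * real + c)
    p_fake = sigmoid(w * fake + c)
    w += lr * (np.mean((1 - p_real) * real) - np.mean(p_fake * fake))
    c += lr * (np.mean(1 - p_real) - np.mean(p_fake))

    # Generator step: ascend the non-saturating objective mean log d(fake),
    # i.e. adjust a and b so the fakes fool the discriminator.
    p_fake = sigmoid(w * (a * z + b) + c)
    a += lr * np.mean((1 - p_fake) * w * z)
    b += lr * np.mean((1 - p_fake) * w)

# After training, the generator's offset b has drifted toward the real
# data's mean: the fakes have moved to resemble the real distribution.
```

The arms race in this loop is exactly why detection is hard: every improvement in the discriminator is an incentive for the generator to produce more convincing fakes.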
While this technology started with benign or entertaining uses—like placing an actor’s face in a historical movie—it has evolved into something far more serious. Deepfakes today are being used to create misleading political messages, fake pornography, fraudulent financial content, and even defamatory personal videos. In a country like India, where digital literacy is uneven and misinformation spreads rapidly, the unchecked use of deepfakes poses a massive threat to individual dignity, public trust, and democratic integrity.
The biggest concern is the absence of a legal framework tailored to this challenge. India’s existing laws—such as provisions in the Indian Penal Code (IPC) and the Information Technology (IT) Act—were drafted long before such technology existed. As a result, while some deepfake-related acts may be punishable under general categories like defamation, cyberstalking, or cheating, these laws don’t directly address the creation, distribution, or impact of synthetic media.
Moreover, deepfakes don’t just harm reputations; they violate the right to privacy, often without consent, making them a constitutional concern under Article 21, which guarantees the right to life and personal liberty. The absence of consent, manipulation of identity, and loss of control over one’s digital self—all highlight the urgent need for deepfake-specific legislation.
This isn’t just about regulating technology—it’s about protecting real people from digital harm. A strong legal framework would act as a safeguard, ensuring that while innovation is encouraged, it does not come at the cost of truth, trust, and human dignity.
As deepfakes grow more sophisticated, so must our response—grounded in law, ethics, and empathy.
ABSTRACT:
In an age where seeing is no longer believing, deepfakes have emerged as a serious digital threat. These AI-generated synthetic media convincingly imitate a person’s face, voice, or expressions, making it almost impossible for the average viewer to detect manipulation. What was once an experimental tool in technology labs is now misused for political propaganda, character assassination, cyber harassment, and fake news dissemination.
India, with its rapidly expanding internet base and limited digital literacy, is particularly vulnerable. Despite the risks, there is no standalone legislation in India to specifically address deepfakes. Instead, victims and authorities must rely on fragmented provisions under the Indian Penal Code, Information Technology Act, and general constitutional protections, which often fall short in addressing the unique challenges posed by synthetic content.
This article delves into the growing menace of deepfakes, the gaps in the Indian legal framework, and the urgent need for comprehensive regulation. It also draws on relevant case laws, constitutional rights, and comparative insights from global jurisdictions. Ultimately, it argues for a thoughtful legal response that balances technological innovation with the protection of individual rights, truth, and public trust in the digital age.
USE OF LEGAL JARGON:
To understand the legal implications of deepfakes, it is essential to navigate the complex terrain of legal terminology—what we commonly call legal jargon. While this may sound intimidating, these terms help pinpoint the nature of offences and the scope of accountability under law.
Deepfakes, though technologically driven, intersect deeply with concepts like mens rea (criminal intent) and actus reus (the actual act of committing a crime). For instance, when someone knowingly creates or shares a deepfake to defame or harass another person, it reflects both a guilty intention and a culpable action, fulfilling the basic elements of a crime.
In the absence of a lex specialis—a specific law designed to address synthetic media—India is forced to interpret existing statutes to regulate deepfakes. However, applying laws such as the Indian Penal Code (IPC) or the Information Technology Act (IT Act) can be problematic because these were not conceived with AI-based manipulation in mind. Thus, proving vicarious liability (holding a person legally responsible for the actions of another, such as a platform hosting deepfakes) becomes challenging.
Additionally, deepfakes frequently infringe upon a person’s right to privacy, now recognised as a fundamental right under Article 21 of the Indian Constitution, especially following the landmark judgment in Justice K.S. Puttaswamy v. Union of India (2017). They also raise questions of freedom of speech versus reasonable restrictions, a balance governed by Article 19(2).
In legal terms, deepfakes challenge how we define identity, intent, and harm. Yet, for the person whose face or voice has been misused, the issue is deeply personal and traumatic. Hence, while legal jargon frames the debate in precise terms, the human impact remains at the heart of why we urgently need reform.
THE PROOF:
The rise of deepfakes is no longer a speculative fear—it is a reality backed by disturbing real-world incidents. The harm caused by deepfakes is tangible, affecting individual dignity, public discourse, and democratic trust. In India and beyond, there are multiple documented cases that expose the urgency of the issue.
One of the earliest high-profile examples in India was during the 2020 Delhi Assembly elections, when a manipulated video of Delhi Chief Minister Arvind Kejriwal appeared online. In it, his speech was altered to appear as though he was addressing voters in Haryanvi—something he never did. While seemingly innocuous, the deepfake had a political motive: to mislead regional voters and manipulate public opinion. It demonstrated how convincingly fake content could be used to subvert electoral processes.
Another shocking instance occurred in 2023, when a deepfake video of actress Rashmika Mandanna went viral. The manipulated video placed her face on another individual’s body in an explicit context, sparking national outrage and concern. What’s particularly alarming is how easily such content was circulated, with little recourse for the victim and no immediate accountability for the creators.
Globally, a 2019 study by Deeptrace revealed that 96% of all deepfakes online involved non-consensual pornography, mostly targeting women. These figures illustrate how deepfakes are disproportionately used to violate privacy, undermine consent, and harass individuals, especially women in the public eye.
In a society as digitally connected yet digitally unaware as India, where misinformation spreads rapidly through social media and encrypted platforms, the implications of such manipulation are far-reaching. These examples serve as undeniable proof that deepfakes are not just technological mischief—they are weapons of misinformation, and they are already harming real people.
The need for urgent legal intervention could not be clearer.
CASE LAWS:
Although India lacks a specific law addressing deepfakes, several judicial pronouncements provide guiding principles that can be applied to this emerging threat. A key precedent is the landmark case of Justice K.S. Puttaswamy v. Union of India (2017), where the Supreme Court recognised the Right to Privacy as a fundamental right under Article 21. Deepfakes often violate this right by digitally misrepresenting individuals without consent, especially in cases involving intimate content or political impersonation.
In Shreya Singhal v. Union of India (2015), the Supreme Court struck down Section 66A of the IT Act but upheld the necessity of placing reasonable restrictions on free speech under Article 19(2). This decision is crucial in understanding that while freedom of expression is protected, it cannot be extended to content that causes harm, defames, or incites violence—roles deepfakes often play.
Additionally, Ritesh Sinha v. State of Uttar Pradesh (2019) discussed the admissibility of biometric samples, hinting at how courts may handle AI-generated evidence in the future.
While none of these cases directly deal with deepfakes, they form the jurisprudential backbone for crafting future legal responses—bridging the gap between existing rights and emerging harms in a rapidly evolving digital world.
CONCLUSION:
As we stand at the crossroads of technological advancement and legal evolution, the rise of deepfakes presents one of the most pressing challenges of our time. These AI-driven manipulations do more than just blur the lines between truth and fiction—they have the power to distort reality, damage reputations, infringe upon privacy, and threaten democratic processes. In India, where the digital footprint is expanding rapidly and legal infrastructure is still catching up, the absence of deepfake-specific laws leaves citizens vulnerable and justice delayed.
While existing laws like the Indian Penal Code, Information Technology Act, and constitutional protections under Articles 19 and 21 offer some legal recourse, they are fragmented and insufficient to deal with the sophisticated nature of deepfake technology. Victims often find themselves trapped in a web of slow legal procedures, jurisdictional ambiguity, and emotional trauma—without clear remedies or timely justice.
India needs to act swiftly and decisively. A dedicated legal framework must be introduced that defines, prohibits, and penalises malicious deepfakes while also setting clear standards for consent, content verification, and platform responsibility. Legal reform should be supported by digital literacy campaigns and AI-based detection tools to empower citizens and law enforcement alike.
In protecting ourselves from deepfakes, we are not merely defending data—we are safeguarding human dignity, truth, and trust in the digital age.
This is the spirit with which India must move forward: embracing innovation while reinforcing ethical and legal boundaries. A future built on trust and truth demands nothing less.
FAQS
Q1. What exactly is a deepfake?
A deepfake is a digitally altered image, video, or audio clip where someone’s face or voice is manipulated to say or do something they never really did. It’s created using artificial intelligence and machine learning.
Q2. Are deepfakes illegal in India?
Right now, there’s no specific law that directly targets deepfakes. But some existing laws like the IPC and IT Act can be applied, especially if the deepfake is used to harass, defame, or impersonate someone.
Q3. Can someone go to jail for making a deepfake?
Yes. If the deepfake causes harm, is obscene, or is used for cheating or defamation, the person responsible can face criminal charges under various laws.
Q4. Why is there a need for a new law?
Because current laws weren’t designed with AI in mind. Deepfakes are a new kind of problem, and our legal system needs new tools to deal with them effectively and fairly.
Q5. How can we protect ourselves from deepfakes?
Stay alert online. Verify content before sharing. If you’re a victim, report it to cybercrime authorities and consult a lawyer. Educating yourself is the first line of defence.
