Author: Vrinda Bhardwaj, O.P. Jindal Global University
Abstract
Deepfakes are hyper-realistic, fabricated videos and audio clips that show people doing or saying things they never actually did. Powered by artificial intelligence, this technology is rapidly becoming a tool for misinformation, exploitation, and digital deceit. In India, where legal systems are still adapting to the digital age, deepfakes pose a serious challenge to privacy, consent, and even democracy. This article breaks down what deepfakes are, how they affect us, and what our legal system is and is not doing about them.
To the Point
Imagine receiving a video of a political leader making a controversial statement, or worse, seeing yourself in a compromising situation in a video you know you never filmed. With deepfakes, this is already happening. These hyper-realistic media creations are often made with AI tools such as Generative Adversarial Networks (GANs) and can seamlessly mimic real people.
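For readers curious about the underlying mechanism, the following is a minimal, illustrative sketch of the adversarial training idea behind GANs, written in Python with the PyTorch library. It trains on random toy vectors rather than real faces or voices, and every name and number in it is an illustrative assumption rather than a description of any particular deepfake tool.

    # Minimal GAN sketch: a generator learns to produce "fakes" that a
    # discriminator can no longer distinguish from real samples. The toy
    # 64-dimensional vectors stand in for real face images or audio frames.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    DATA_DIM, NOISE_DIM = 64, 16

    generator = nn.Sequential(
        nn.Linear(NOISE_DIM, 128), nn.ReLU(),
        nn.Linear(128, DATA_DIM), nn.Tanh(),   # outputs a "fake" sample
    )
    discriminator = nn.Sequential(
        nn.Linear(DATA_DIM, 128), nn.LeakyReLU(0.2),
        nn.Linear(128, 1),                     # real-vs-fake score (logit)
    )

    loss_fn = nn.BCEWithLogitsLoss()
    g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

    for step in range(200):
        real = torch.randn(32, DATA_DIM)       # placeholder for real media
        noise = torch.randn(32, NOISE_DIM)
        fake = generator(noise)

        # 1. Train the discriminator to separate real from fake.
        d_opt.zero_grad()
        d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
                 loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
        d_loss.backward()
        d_opt.step()

        # 2. Train the generator to fool the discriminator.
        g_opt.zero_grad()
        g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
        g_loss.backward()
        g_opt.step()

The point the sketch illustrates is the feedback loop: as the discriminator gets better at spotting fakes, the generator is pushed to produce ever more convincing ones, which is precisely why mature deepfakes are so hard to tell apart from genuine footage.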
In India and around the world, deepfakes have already caused significant damage:
Women have been targeted with fake explicit videos
Political messages have been manipulated to influence voters
Fraudsters have used AI-cloned voices to scam people into transferring money
And with deepfake apps becoming easier to access, the threat is no longer science fiction; it is already a reality.
Legal Jargon
Right to Privacy: A fundamental right under Article 21 of the Indian Constitution, encompassing informational, bodily, and decisional privacy.
Consent: Voluntary agreement to the use or dissemination of personal data or likeness; its absence renders such acts unlawful.
Defamation: Injury to an individual’s reputation through false and malicious statements, either spoken (slander) or written (libel).
Digital Impersonation: The fraudulent representation of another person’s identity through digital or electronic means.
Electronic Evidence: Any digital content (videos, emails, images) admissible under the Indian Evidence Act, 1872, subject to compliance with procedural requirements like Section 65B certification.
The Proof
A 2023 study by Deeptrace Labs found over 145,000 deepfake videos online, a number that has kept growing over time.
96% of these videos were sexually explicit, mostly targeting women without their consent.
In a serious case from the UK, a CEO transferred €220,000 after a phone call from someone he thought was his boss; the voice, however, was not his boss's but an AI-generated clone.
In India, the first prominent political use of deepfakes came during the 2020 Delhi elections, when BJP leader Manoj Tiwari appeared in a video speaking fluent Haryanvi, a language he does not actually speak. Although the party acknowledged the video was AI-generated and meant for campaign outreach, it raised critical questions about transparency, truth, and manipulation.
Indian Law
India does not yet have a law specifically addressing deepfakes, but victims can still take action under several existing laws:
Information Technology (IT) Act, 2000:
Section 66D: Punishes cheating by personation using a computer resource or communication device.
Section 66E: Penalizes capturing, publishing, or transmitting images of a person's private area without consent.
Sections 67 and 67A: Penalize publishing or transmitting obscene or sexually explicit material in electronic form.
Indian Penal Code (IPC), 1860:
Section 499: Covers defamation.
Section 354C: Criminalizes voyeurism against women.
Sections 468 & 469: Cover forgery for the purpose of cheating and forgery intended to harm a person's reputation.
Indian Evidence Act, 1872:
Section 65B: Governs the conditions under which electronic records, such as digital video files, are admissible as evidence.
Constitution of India:
Article 21: Your right to life includes your right to privacy.
Case Laws
Justice K.S. Puttaswamy v. Union of India (2017):
This landmark judgment declared that privacy is a fundamental right under Article 21 of the Constitution. It acknowledged that people have control over their personal data, including how their image or likeness is used.
Relevance: Deepfakes often involve someone’s face or voice being used without their consent. This is a direct violation of the right to privacy as upheld in this case. The judgment laid the foundation for treating personal data with respect, which is essential in the context of AI-manipulated content.
Shreya Singhal v. Union of India (2015):
The court struck down Section 66A of the IT Act, which was vague and led to misuse. The decision emphasized the importance of protecting free speech while also recognizing that harmful speech, such as defamation or threats, is not protected.
Relevance: Deepfakes that spread misinformation or defame someone can be penalized under other provisions, but this case helps balance the line between freedom of expression and protection from harm.
State of West Bengal v. Animesh Boxi (2018):
In this case, a man leaked private videos of a woman online. The court found him guilty under sections of the IT Act for privacy invasion and publishing obscene material.
Relevance: Though the case involved real videos, the principles can easily apply to deepfake pornography. The psychological damage and invasion of privacy are just as severe, even if the video is fake. This precedent supports punishing those who share or create such content.
Khushboo v. Kanniammal (2010):
Actress Khushboo was targeted with numerous legal complaints for simply stating her opinion on pre-marital sex. The Supreme Court dismissed these cases, emphasizing that personal views, however unpopular, are constitutionally protected.
Relevance: If a deepfake falsely portrays someone making controversial statements, the fallout could be severe. This case highlights the dangers of falsely attributing statements to individuals and the legal protections that should apply.
Manoj Tiwari Deepfake Video (2020):
During the Delhi assembly elections, BJP leader Manoj Tiwari was seen speaking in Haryanvi in an AI-generated video. While the intent wasn’t malicious, the incident sparked national debate.
Relevance: This example showed how easily deepfakes can be used in political messaging, potentially misleading voters. Although it didn’t lead to legal proceedings, it underlined the urgent need for regulation.
The Human Problem
The impact of deepfakes is not limited to abstract concerns about technology; it directly affects people's lives. From false videos used in political campaigns to manipulated images used for blackmail or harassment, the consequences are real and immediate. It is not just celebrities or politicians at risk: ordinary individuals can also be targeted, often without any effective way to fight back. When fake content spreads faster than the truth, trust becomes a casualty and the line between what is real and what is fake starts to blur for everyone. Real people suffer badly when fake videos of them circulate and their reputation in society is ruined; this is not merely a technical issue but something far more serious.
Conclusion
Deepfakes are no longer just a technical problem; they are a human problem. They involve our identity, our trust in information, and our ability to consent. Whether it is a fake video used to ruin someone's reputation or an AI-generated voice scamming people out of their money, the damage is very real.
Some steps that could help address the problem include:
Draft new laws: India urgently needs legislation that clearly defines deepfakes, punishes those who hide behind such scams, and thereby deters the misuse of the technology.
Educate the public: People need to learn how to spot deepfakes and protect themselves online; in a country as populous as India, large numbers of people can otherwise be misled easily.
Hold tech platforms accountable: Social media sites should flag or take down deepfake content quickly.
Build ethical tech: Developers should build consent and traceability into the design of AI tools.
Go global: Deepfakes cross borders, so India must work with other countries on rules and enforcement.
The Digital Personal Data Protection Act, 2023, and the upcoming Digital India Act may offer some protections, but they're just a starting point. A more complete legal and regulatory framework is needed, one that recognises the fast-moving nature of the technology and the very real impact it has on people's lives. Policymakers must move fast, because the problem is no longer on the horizon. It's already here.
FAQs
Q1: Is making a deepfake always illegal?
No. Deepfakes used for movies, entertainment, or education with clear consent aren’t illegal. But if they harm someone or spread false information, they can be criminal offenses.
Q2: What should I do if I find a deepfake of myself online?
Report it to the Cyber Crime Portal (cybercrime.gov.in) or your nearest police station. Save the content as evidence, and consider filing a legal complaint under the IT Act or IPC.
Q3: Can social media platforms be forced to take down deepfakes?
Yes. Under India’s IT Rules, 2021, platforms must take down content such as morphed or sexually explicit images within 24 hours of receiving a complaint, and act on other unlawful content within the timelines the Rules prescribe.
Q4: Are there any global laws on deepfakes?
Yes. Some U.S. states ban deepfakes in election campaign ads. China now requires AI-generated content to be clearly labeled. The European Union’s AI Act also addresses transparency and ethical use.
Q5: Will technology be able to detect deepfakes?
Yes, but it’s a race. As detection tools improve, so do deepfake generators. That’s why legal and ethical frameworks must evolve alongside tech.
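To give a concrete sense of what a detection tool looks like, the sketch below is a minimal, hypothetical binary classifier in Python (PyTorch) that labels a video frame as real or fake. It trains on random tensors standing in for frames; production detectors are far more elaborate, and all names and sizes here are illustrative assumptions.

    # Minimal deepfake-detection sketch: a small convolutional network that
    # scores a frame as real (0) or fake (1). The training data is random
    # and serves only to show the shape of the approach.
    import torch
    import torch.nn as nn

    detector = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, 1),                          # logit: >0 suggests "fake"
    )

    loss_fn = nn.BCEWithLogitsLoss()
    optimizer = torch.optim.Adam(detector.parameters(), lr=1e-3)

    for step in range(100):
        frames = torch.randn(8, 3, 64, 64)         # stand-in for video frames
        labels = torch.randint(0, 2, (8, 1)).float()  # 1 = fake, 0 = real
        optimizer.zero_grad()
        loss = loss_fn(detector(frames), labels)
        loss.backward()
        optimizer.step()

In practice, detection accuracy tends to fall as generators improve, which is exactly why the answer above describes it as a race and why legal and ethical safeguards cannot rely on detection alone.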
