Lawful Legal

Deepfakes, Respect, and the Law

When Pixels Turn Into Predators

Author: Harsh Tewari, Bennett University

To the Point
The development of artificial intelligence has produced remarkable advancements as well as harmful distortions. Deepfakes, digitally manipulated audio or video that makes people appear to say or do things they never did, are one such threat. Deepfakes began as innocuous experiments but have since evolved into a sophisticated instrument of blackmail and harassment. In India, where sexual content carries intense social stigma, victims’ social standing and personal dignity are severely damaged, and the justice system frequently offers them little more than hollow assurances. Drawing on a firsthand account, this article explores the societal, psychological, and legal ramifications of deepfakes, identifies existing legal loopholes, and recommends how India’s cyber law framework might develop to uphold human dignity in the AI era.

Using Legal Jargon

The Proof
Deepfakes ruin lives; they do not merely pose hypothetical risks. Last year, a close friend of mine, a college student, was the target of a sexually explicit deepfake video made from her Instagram pictures. It spread like wildfire through college WhatsApp groups. Overnight, her attendance dropped, whispers followed her through the hallways, and her distress deepened into depression. When she approached the local cyber cell, officials dismissed the case as “hard to trace.” The police registered her complaint, but no suspect was ever detained and the video was not promptly taken down. Instead, my friend spent months in therapy, deleted her social media accounts, and dropped out of college for a semester.

Her story is not unique. According to a 2019 report by Deeptrace (now Sensity), 96% of online deepfakes are pornographic, and virtually all of those target women. India’s large youth population and high smartphone penetration have made it a breeding ground for this kind of abuse. Yet our legal system contains no specific law that makes the production or dissemination of deepfakes illegal.

Abstract
Three factors make the legal fight against deepfakes complicated: anonymity, speed, and stigma.
Although the right to privacy recognised in Puttaswamy protects people, in theory, from unconsented manipulation of their image, its enforcement faces practical obstacles:

1. Identification of Perpetrators: Deepfakes can be created and shared anonymously. By the time police cyber cells trace IP addresses, the content may have spread irrevocably across multiple platforms.

2. Ineffective Enforcement: Police and prosecutors frequently lack digital forensics expertise. Many victims report that their complaints are trivialized or put on hold rather than taken seriously, a terrible blow when the victim’s dignity already lies in ruins.

3. Platform Apathy: Social media intermediaries claim safe harbor under Section 79 of the IT Act. Although the Information Technology Rules, 2021 require prompt takedown upon notification, victims frequently find the process excruciatingly slow. When a clip is taken down from one platform, it resurfaces on others.

4. Slow Civil Redressal: Victims can pursue civil suits for injunctions and damages, but that process is expensive and time-consuming, while the social humiliation is widespread and instantaneous.

Although the IPC covers outraging the modesty of a woman (Sections 354A–354D) and defamation (Sections 499–500), these provisions predate the digital era and fall short of addressing the particular harm caused by manipulated synthetic media.

Global Viewpoint
Other jurisdictions have begun to take a direct approach to combating deepfakes. For example:

United States: Malicious deepfakes, particularly in elections and pornography, are illegal in a number of states (such as California and Virginia).

United Kingdom: The Online Safety Bill would explicitly criminalize non-consensual deepfake pornography.

China: In 2022, China enacted regulations requiring producers of deepfakes to clearly label synthetic content, and holding platforms accountable for failing to prevent harmful misuse.

These instances demonstrate how particular laws can fill in the gaps left by more general defamation or obscenity laws.

Case Laws

Although India does not yet have a landmark “deepfake case,” existing jurisprudence provides indirect protection:

1. Justice K.S. Puttaswamy v. Union of India (2017): Held that privacy is an essential component of life and liberty under Article 21, including the ability to control one’s own personal data and image.

2. Shreya Singhal v. Union of India (2015): Struck down Section 66A of the IT Act as vague, while upholding intermediary liability under Section 79 and reaffirming that platforms must act once illegal content is brought to their notice.

3. Aveek Sarkar v. State of West Bengal (2014): Interpreted obscenity according to contemporary community standards, a test relevant to whether sexually explicit manipulated media can be punished under Section 67 of the IT Act.

Furthermore, victims have found partial remedies in the IPC’s provisions on sexual harassment and voyeurism (such as Section 354C), which are often invoked against revenge porn.

Effects on the Mind

Deepfakes have disastrous social and psychological repercussions that go beyond legal textbooks. Victims frequently:

Withdraw from social circles to avoid embarrassment.

Experience both professional and academic setbacks.

Develop PTSD, depression, and anxiety.

Hesitate to approach the police for fear of victim-blaming.

The social penalty falls disproportionately on women. The victim suffers offline humiliation while the offender enjoys digital anonymity. Deepfake abuse is not merely a technical glitch; its gendered nature makes it a serious violation of human rights.

Suggestions for Policy

1. Specific Legislation: A dedicated law that, like the provisions on child pornography, defines the production and dissemination of deepfakes as offences punishable by severe penalties.

2. Mandatory Labeling: To distinguish authentic from fraudulent content, platforms should be required to watermark or label AI-generated content.

3. Accelerated Takedown: Enforce 24-hour takedown timelines for deepfake complaints and reinforce intermediary obligations.

4. Improved Cyber Forensics: Invest in technology and training so that police can swiftly trace digital footprints.

5. Victim Support: Provide free legal assistance, counseling, and identity protection to encourage victims to come forward.

6. Public Awareness: Education initiatives about the nature of deepfakes and the available legal options can empower potential victims and lessen stigma.

Conclusion

Deepfakes reveal a serious weakness in India’s digital governance: despite a constitutional right to dignity and privacy, we lack the safeguards needed against this modern form of digital harassment. The legal void surrounding deepfakes leaves thousands of people, particularly women, vulnerable to extortion and cyberbullying.

The law must keep up with technology. Until then, each deepfake victim’s pursuit of justice will remain like an air-filled jar, visible but empty.

FAQs

Q1. Is creating deepfakes in India punishable by jail time?
Although there is currently no specific “deepfake offence,” creators may face charges under the laws on obscenity, defamation, and sexual harassment (Section 67 of the IT Act, Section 499 of the IPC, among others). The punishment depends on the nature and intent of the content.

Q2. What should I do if a deepfake of me is shared?
Gather screenshots and links, file a formal complaint with your local cyber cell, request that the platform take the content down under the IT Rules, 2021, and seek legal counsel. Prompt action increases the likelihood of removal.

Q3. Are deepfakes the fault of social media companies?
Under the IT Act, platforms must exercise “due diligence” and remove unlawful content once they are made aware of it. If they fail to comply, they lose their safe harbor protection under Section 79.

Algorithms may produce deepfakes, but what they ruin are real lives and reputations. As India moves closer to becoming a “Digital India,” it must invest in technological justice, so that people’s dignity cannot be undermined by the misuse of technology.
