Deepfake Dilemma: Legal and Ethical Challenges in the Age of AI

Author: Khyathi Priya Nukavarapu, a student at KL University

To the Point

The exponential rise of artificial intelligence (AI) has ushered in transformative innovations, but it has also sparked unprecedented controversies, particularly the proliferation of deepfake technologies. Deepfakes, synthetic media generated using AI that can alter faces, voices, and actions to depict false realities, have raised red flags across legal, ethical, and policy dimensions globally. The core issue is the misuse of such technology to deceive, defame, or manipulate individuals and public perception, often without adequate legal safeguards. This article delves into the latest controversy surrounding AI-generated deepfakes, examining the legal gaps, potential liabilities, judicial interventions, and regulatory efforts.

Use of Legal Jargon

The legal implications of deepfakes traverse several key doctrines, including:

  • Right to Privacy: Protected under Article 21 of the Indian Constitution and the GDPR internationally, it includes informational autonomy and bodily integrity.
  • Defamation and Libel: Involves harm to reputation through false representation.
  • Consent Doctrine: The lack of informed consent in manipulating one’s image or voice constitutes a violation.
  • IPR (Intellectual Property Rights): Deepfakes often infringe copyrights and trademarks, particularly of celebrities or brands.
  • Mens Rea and Actus Reus: These criminal law principles apply in prosecuting perpetrators of malicious deepfakes.
  • Digital Evidence Admissibility: As per the Indian Evidence Act and global counterparts, digital content requires authentication.

The Proof

Recent Incidents:

Taylor Swift AI Deepfake Controversy (2024):

In January 2024, explicit AI-generated images of pop icon Taylor Swift were circulated on X (formerly Twitter), sparking global outrage. Despite being fake, these images severely violated her privacy and reputation. Law enforcement launched investigations, and social media platforms faced pressure to implement stronger content moderation policies.

Political Manipulation in Indian Elections (2024):

Deepfake videos of political leaders like PM Narendra Modi and opposition figures surfaced during the 2024 Lok Sabha elections, misleading voters with fabricated speeches. The Election Commission of India issued takedown notices, and legal petitions followed, citing violation of electoral ethics and public mischief under Section 505 IPC.

South Korea’s Anti-Deepfake Legislation:

South Korea recently passed amendments to its Information and Communications Network Act, explicitly criminalizing the creation and distribution of sexually exploitative deepfakes, punishable by up to five years in prison.

EU AI Act (2024):

The European Union passed the world’s first comprehensive AI regulatory framework. It subjects deepfakes to explicit transparency obligations, mandating clear labeling, traceability, and accountability for AI-generated content.

Abstract

Artificial Intelligence has revolutionized content creation, but its misuse, especially in generating deepfakes, has triggered serious legal and societal dilemmas. The lack of comprehensive regulations, difficulty in attributing liability, and the threat to privacy and reputation are central to the current controversy. This article unpacks the rise of deepfakes, their legal ramifications, ongoing case laws, and policy debates across jurisdictions, offering suggestions for a robust regulatory framework to mitigate harms without stifling innovation.

Case Laws

Justice K.S. Puttaswamy v. Union of India (2017) – India

This landmark judgment recognized the Right to Privacy as a fundamental right under Article 21. It laid the groundwork for legal remedies against non-consensual deepfakes that infringe personal autonomy and digital dignity.

Katlyn Mahoney v. Facebook Inc. (California, 2023) – USA

In this class-action suit, Facebook (Meta) was sued for failing to promptly take down non-consensual deepfake pornography. The court weighed the platform’s immunity under Section 230 of the Communications Decency Act but also emphasized the need for stricter algorithms to detect deepfakes.

India Election Commission v. Unknown (2024) – Delhi HC

A suo motu petition led to an interim order mandating platforms like YouTube and WhatsApp to detect and remove political deepfakes within 24 hours, citing public mischief and breach of electoral integrity under the Representation of the People Act, 1951.

Deeptrace v. Jane Doe (UK, 2022) – UK

A celebrity filed suit against an anonymous user who had circulated AI-manipulated images of her. The court granted an ex parte injunction on privacy and defamation grounds and authorized the ISP to disclose the user’s identity.

Detailed Legal Analysis

1. Lack of Informed Consent

Most deepfakes are generated without the individual’s explicit consent. In India, although there is no specific legislation, such acts may be prosecuted under:

  1. Section 66E, IT Act (violation of privacy)
  2. Section 292 IPC (obscenity)
  3. Section 499 IPC (defamation)

The Consent Doctrine, derived from contract and tort law, underpins the necessity for obtaining voluntary, informed permission before using one’s likeness.

2. Platform Liability and Safe Harbour

Under Section 79 of the IT Act, intermediaries are given a “safe harbour” if they follow due diligence. However, the question arises whether platforms should be held liable for algorithmic inaction or delayed takedown.

The 2021 IT Rules in India increased platform accountability by introducing grievance redressal mechanisms and time-bound content removal.

In the EU, the Digital Services Act (2024) mandates large platforms to conduct risk assessments and publish transparency reports.

3. Deepfake and IP Violation

Celebrities often find their persona used without authorization, raising IP concerns. Such usage can violate:

  • Publicity rights
  • Copyrights (when deepfakes use copyrighted media)
  • Trademarks (if used deceptively in endorsements)

4. Criminal Implications

Criminal law is increasingly being invoked to curb malicious deepfakes:

Mens Rea: The intention behind creating or disseminating a harmful deepfake becomes central to prosecution.

Actus Reus: The actual act of distribution or publication is required to constitute an offence.

In India, charges under Sections 469 (forgery for the purpose of harming reputation), 505 (public mischief), and 509 (insulting the modesty of a woman) may apply.

5. Evidence and Forensics

Deepfakes also challenge the authenticity of digital evidence. Under the Indian Evidence Act:

Section 65B governs admissibility of electronic records.

Courts now require certificates under Section 65B(4) and forensic verification before admitting allegedly AI-altered media.

Courts across jurisdictions are increasingly relying on digital watermarking, blockchain logs, and AI-detection tools like Microsoft’s Video Authenticator.
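
To illustrate one building block of such verification, the sketch below shows hash-based provenance logging in Python. It is a hypothetical example, not a description of any court-mandated or vendor tool: the log file name and JSON-lines format are assumptions, and real chains of custody rely on certified forensic software, trusted timestamps, and sometimes blockchain anchoring. Re-hashing a file later reveals whether it has been altered since it was first logged.

    # Minimal sketch of hash-based provenance logging for digital media.
    # The log file name and format are illustrative assumptions.
    import hashlib
    import json
    import time
    from pathlib import Path

    LOG_FILE = Path("evidence_log.jsonl")  # hypothetical append-only log

    def sha256_of(path: str) -> str:
        """Stream the file and return its SHA-256 digest."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def record_evidence(path: str) -> dict:
        """Append the file's hash and a timestamp to the provenance log."""
        entry = {"file": path, "sha256": sha256_of(path), "logged_at": time.time()}
        with LOG_FILE.open("a") as log:
            log.write(json.dumps(entry) + "\n")
        return entry

    def verify_evidence(path: str, recorded_hash: str) -> bool:
        """Re-hash the file; any alteration changes the digest."""
        return sha256_of(path) == recorded_hash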

Conclusion

The deepfake controversy stands at the intersection of technology, ethics, and law. While the potential for creativity, education, and accessibility remains immense, the unchecked proliferation of AI-generated falsified media poses grave risks to privacy, trust, and democracy. The global legal landscape is gradually catching up, with jurisdictions such as the EU, South Korea, and the US proposing or enacting regulation, but India still lacks a comprehensive law addressing deepfakes.

The way forward includes:

  • Drafting a standalone AI law with provisions for biometric protection, consent, and algorithmic transparency.
  • Mandatory labeling and watermarking of AI-generated content.
  • Strengthening platform liability and forensic verification mechanisms.
  • Investing in public awareness and digital literacy.

A delicate balance must be struck between innovation and integrity, and between freedom of expression and protection from harm.

FAQs

Q1. What is a deepfake?

A deepfake is AI-generated synthetic media that manipulates images, video, or audio to depict false events or statements, often indistinguishable from genuine content.

Q2. Are deepfakes illegal in India?

Currently, India does not have a specific law on deepfakes, but various provisions under the IT Act and IPC can be invoked depending on the nature and intent.

Q3. Can I sue someone for using my face in a deepfake video?

Yes, you may sue for invasion of privacy, defamation, or emotional distress. Courts may grant injunctions or damages depending on the case.

Q4. Do social media platforms have any responsibility to remove deepfakes?

Yes. Under the IT Rules, 2021, platforms must act within 24–72 hours upon receiving a complaint, failing which they may lose safe harbour protection.

Q5. What is the EU AI Act’s stand on deepfakes?

The EU AI Act subjects deepfakes to transparency obligations, mandating clear labeling and disclosure of AI-generated content, and provides for penalties in cases of non-compliance.

Q6. How can we identify deepfakes?

Experts can often detect manipulated content using deepfake detectors, forensic analysis, and watermarking, and by looking for inconsistencies such as unnatural blinking patterns or poor audio-visual sync.
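
For readers curious how one such forensic check works in practice, the sketch below applies Error Level Analysis (ELA) using Python and the Pillow imaging library. This is a minimal illustration under stated assumptions: the file names are placeholders, and ELA is only one coarse heuristic among the many signals that practitioners combine with trained detection models.

    # Error Level Analysis (ELA) sketch using Pillow: re-save the image as JPEG
    # and amplify the per-pixel difference. Regions whose recompression error
    # differs sharply from the rest of the image may have been edited.
    # File names are placeholders; ELA is a heuristic, not a verdict.
    import io
    from PIL import Image, ImageChops

    def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
        original = Image.open(path).convert("RGB")
        # Re-save at a known JPEG quality and reload from memory
        buffer = io.BytesIO()
        original.save(buffer, "JPEG", quality=quality)
        buffer.seek(0)
        resaved = Image.open(buffer).convert("RGB")
        # Per-pixel difference between the original and the recompressed copy
        diff = ImageChops.difference(original, resaved)
        # Scale so the strongest difference maps to full brightness
        max_diff = max(band_max for _, band_max in diff.getextrema()) or 1
        scale = 255.0 / max_diff
        return diff.point(lambda value: min(255, int(value * scale)))

    if __name__ == "__main__":
        error_level_analysis("suspect_photo.jpg").save("ela_map.png")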

Q7. What are some international examples of deepfake regulation?

South Korea criminalized sexually explicit deepfakes, China requires labeling of AI-generated content, and the US has introduced several state-level bans on malicious deepfake use.
