
DEEPFAKES AND THE LAW: THE NEED FOR A ROBUST LEGAL FRAMEWORK

Author: Sakshi Rana


Abstract
Deepfakes, AI-generated synthetic media that alter or fabricate audio, video, or images with astonishing realism, have exploded in frequency and sophistication since 2017, threatening personal privacy, reputational rights, democracy, and national security. Existing legal and regulatory approaches are fragmented and often ill-suited to the unique challenges posed by deepfake technology. This article examines the specific legal issues deepfakes present, analyses leading case law and legislative action globally and in India, and suggests how a robust legal framework can address these risks while respecting fundamental rights.
1. Introduction
Deepfakes leverage deep learning (notably, generative adversarial networks, or GANs) to fabricate or manipulate media, making it increasingly difficult to distinguish truth from falsehood. While they enable creative expression and entertainment, their malicious use fuels defamation, disinformation, non-consensual pornography, fraud, and political manipulation. The growing availability of free deepfake tools has made protective legal reform a global imperative.


2. Legal Challenges of Deepfakes
2.1 Evidentiary Challenges
Traditional evidentiary doctrines, such as the best evidence rule and the presumption of authenticity, are undermined by the surge of deepfakes. Courts face new problems regarding the admissibility and reliability of digital evidence. For example, Rule 901 of the US Federal Rules of Evidence requires a proponent to authenticate evidence, a task that becomes far harder in an era of easily forged media. Indian courts applying Section 65B of the Indian Evidence Act, 1872 must now rely on digital forensics to verify authenticity.
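As a minimal sketch of the forensic step such provisions demand, cryptographic hashing can show whether a media file has been altered after seizure. The filenames and byte content below are hypothetical stand-ins, not drawn from any actual case record:

```python
import hashlib
import os
import tempfile

def sha256_digest(path: str) -> str:
    """Compute the SHA-256 hash of a file in chunks (memory-safe for large videos)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Simulate a seized media file (the content is a stand-in for video bytes).
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"original video bytes")

digest_at_seizure = sha256_digest(path)

# Any later alteration, including a deepfake edit, changes even one byte...
with open(path, "ab") as f:
    f.write(b"!")

digest_after_edit = sha256_digest(path)
print(digest_at_seizure == digest_after_edit)  # False: tampering is detectable
os.remove(path)
```

A hash match proves only that the file is byte-identical to what was seized; it cannot say whether the original recording was itself synthetic, which is why courts increasingly pair hashing with expert deepfake-detection analysis.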
2.2 Privacy and Consent
Using someone's image in a deepfake without consent is a serious privacy breach. India's Article 21 (right to privacy) and the EU GDPR's protection of "biometric data" prohibit such abuse. The Indian Supreme Court in Justice K.S. Puttaswamy v. Union of India (2017) recognized privacy as a constitutional right, offering potential recourse. However, the global, viral spread of deepfake media often outpaces legal remedies.
2.3 Defamation, Reputation, and Public Order
Deepfakes weaponize defamation by spreading false, reputationally damaging content; however, tracing liability is complex due to anonymity and the technical know-how needed for attribution. Under Section 499 of India's IPC, and analogous provisions worldwide, deepfakes can invite civil and criminal consequences. Political deepfakes, as seen in doctored election campaign videos, threaten public order and democratic processes.

2.4 Intellectual Property and Publicity Rights
Deepfakes implicate the right of publicity (notably under Zacchini v. Scripps-Howard Broadcasting Co., US Supreme Court 1977), copyright infringement, and trademark/false endorsement claims. Debates linger over ownership of AI-generated content: some jurisdictions, recognizing only human authorship, leave deepfake works outside copyright protection; others focus on "fixation" and creative input as determinants.
2.5 Fraud and Criminal Use
Deepfakes are increasingly exploited for financial fraud and impersonation. In United States v. Hussain (N.D. Cal., 2019), deepfake audio was used to impersonate a company executive and authorize a fraudulent funds transfer, heralding a new era of synthetic identity crime.
2.6 Non-consensual Pornography and Gendered Harms
Globally, deepfake pornography is widespread, with an estimated 96% of deepfake content being pornographic. Victims, mostly women, suffer irreversible social and psychological consequences.
2.7 National Security and Disinformation
Deepfakes threaten not only individuals but nations, using strategic disinformation (e.g., fake speeches by leaders) to erode public trust and potentially incite violence. The US Department of Homeland Security and European cyber agencies classify deepfake-driven misinformation as a major threat vector.


3. Reports, Data, and Authorities
• Deeptrace Report (2019): 14,678 deepfake videos online by 2019; 96% non-consensual pornography.
• Pelosi Slurred-Speech Deepfake (2019): Viral spread of a deepfake video created real-world confusion and polarization.
• WIPO's Issues Paper on IP & AI: Questions copyright eligibility of deepfakes and obligations toward the depicted person.
• Justice K.S. Puttaswamy v. Union of India (2017): Established privacy as a core constitutional right (India).
• Online Safety Act 2023 (UK): Mandates swift takedown and criminalizes non-consensual deepfake sharing.
• DEEPFAKES Accountability Act (US, 2023): Proposes criminal penalties and labelling requirements for "malicious deepfakes".


4. Case Law: Detailed Analysis
4.1 Justice K.S. Puttaswamy (Retd.) & Anr. v. Union of India & Ors. (2017, India, Supreme Court)
Facts & Ruling
A nine-judge Constitution Bench was tasked with determining whether the right to privacy is a fundamental right under the Indian Constitution. The Court unanimously held that privacy is intrinsic to the right to life and personal liberty under Article 21.
Significance for Deepfakes
This precedent now empowers individuals to challenge the unauthorized creation and dissemination of deepfakes as an invasion of privacy, supporting both civil claims for damages and writ petitions for takedowns.
4.2 Zacchini v. Scripps-Howard Broadcasting Co. (1977, US Supreme Court)
Facts
Zacchini, a "human cannonball" performer, sued a broadcaster for airing his entire performance without consent, alleging violation of his right of publicity.
Supreme Court Decision
The right of publicity prevailed; the United States Supreme Court recognized a performer's exclusive right to control the commercial exploitation of their identity, even against First Amendment/newsgathering claims.
Deepfake Application
Appropriating a person's digital likeness or voice in a deepfake for commercial purposes is clearly actionable under Zacchini.
4.3 Doe v. Gangland Productions, Inc. (2013, US 9th Circuit Court of Appeals)
Facts
A former gang member's identity was inadequately blurred in a televised documentary, exposing him to harm. The plaintiff sued for invasion of privacy.
Holding
The court found the defendant liable, holding that proper anonymization and consent are required when distributing potentially dangerous media.
Applicability
Raises the standard of consent and anonymization required when recreating or depicting real individuals in digital and AI-generated works, including deepfakes.
4.4 United States v. Hussain (N.D. Cal., 2019)
Facts
Cybercriminals used deepfake audio to convincingly impersonate the voice of a company executive, authorizing a fraudulent multi-million-dollar wire transfer.
Court Proceedings
This was one of the first U.S. cases to recognize deepfake audio as an instrument of high-value identity theft and wire fraud.
Legal Development
Prompted U.S. agencies and corporate actors to demand new standards for synthetic-voice detection and proof of authority.

4.5 European Court Actions (GDPR enforcement, 2022-2024)
Procedures:
European courts have enjoined websites and platforms from distributing AI-edited or deepfake media that violates individual consent or “data subject” rights under the European General Data Protection Regulation.
Key Cases:
German and French courts ordered expedited takedowns and imposed “right to be forgotten” obligations over deepfake images, recognizing them as “personal data” under GDPR.
Precedent:
Establishes a direct procedural remedy for European citizens harmed by deepfakes—regardless of where servers are located.
4.6 United Kingdom: Online Safety Act 2023 and Notable Rulings
Legislation:
The Online Safety Act requires platforms to remove non-consensual deepfakes or face heavy penalties; courts gain expanded emergency injunction powers.
Case Example:
A widely reported UK case in 2024 resulted in expedited criminal prosecution and monetary penalties against an individual who distributed revenge deepfakes, with the court holding that “AI-facilitated fakes demand an enhanced standard of sentencing because they have a viral effect.”
Impact:
The UK offers a model of rapid, platform-targeted remedies and proportional sentencing for deepfake crimes.


5. Legislative and Regulatory Responses
5.1 India
Information Technology Act, 2000: Prohibits identity theft, communication of obscene material, impersonation, and breach of privacy, enabling the arrest and prosecution of deepfake creators in severe cases.
Data Protection (DPDP) Bill/Act: Seeks to regulate consent and impose direct obligations on use of biometric and image data.
5.2 United States
Deepfakes Accountability Act (2023, proposed): Would impose criminal and civil penalties for creating or distributing malicious, undisclosed deepfakes.
NO FAKES Act/NO AI FRAUD Act: Seeks to prohibit digital impersonation for fraud or without consent, with platform liability for hosting synthetic media.
State laws: California, Texas, and Virginia target intimate and election-related deepfakes with specific
statutes.

5.3 European Union
Digital Services Act (DSA): Obligates social platforms to install mechanisms for identification, flagging, and rapid removal of deepfakes.
AI Act (2024, pending): Mandates labelling, digital watermarking, and periodic audits of synthetic content.
5.4 United Kingdom
Online Safety Act (2023): Requires takedowns, criminalizes malicious deepfake dissemination; expands platform responsibility.
5.5 South Korea, China, France, Denmark
South Korea: Imposes severe criminal penalties for non-consensual deepfake pornographic content.
China: Prohibits creation of “synthetic information” without clear digital labeling and user consent.
France/Denmark: Offer model statutory reforms for takedown, civil damages, and cross-border enforcement.


6. Key Proof Points Justifying Urgent Legal Response
Sheer Volume and Damage: Worldwide, tens of thousands of deepfakes proliferate every month; an estimated 96% are pornographic, overwhelmingly victimizing women.
Impact on Democracy: Deepfakes are now a recognized vector for election interference and mass misinformation.
Limitations of Existing Law: Current criminal, tort, and IT legislation usually does not contemplate or penalize AI-driven, cross-jurisdictional, and anonymized harms.


7. Proposals for a Robust Legal Framework
Comprehensive Statutory Definition: Clearly identify what constitutes “deepfake” and “synthetic media.”
Mandatory Consent and Notice: Clear, express permission from affected individuals in all but protected free expression instances (e.g., parody, news).
Robust Labelling Standards: Watermarks, machine-readable tags, traceable metadata.
Platform and Intermediary Liability: Conditional safe harbour—liability if platforms fail to remove, mark, or block harmful deepfakes expeditiously.
Expedited Takedown Procedures: “Notice and stay down” obligations with quick resolution timelines.
Specialist Forensic Support: Mandatory AI/forensic training for law enforcement, prosecutors, and the judiciary.
Cross-border Harmonization: Treaties for information sharing, extradition, coordinated takedowns.
Victim Compensation Mechanisms: Civil damages, public apologies, and funded mental health support programs.
Public Awareness Campaigns: Official resources on deepfake risks, remedies, and reporting.
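The labelling proposal above can be sketched as a machine-readable disclosure tag attached to generated media. The field names below are purely illustrative assumptions, not drawn from any enacted statute or published standard:

```python
import hashlib
import json
from datetime import datetime, timezone

def make_synthetic_media_label(content: bytes, generator: str) -> dict:
    """Build a hypothetical machine-readable disclosure tag for synthetic media.

    All field names here are illustrative assumptions, not taken from any
    enacted statute or published labelling standard.
    """
    return {
        "synthetic": True,  # mandatory disclosure flag
        "generator": generator,  # tool that produced the media
        "sha256": hashlib.sha256(content).hexdigest(),  # binds the label to the exact bytes
        "labelled_at": datetime.now(timezone.utc).isoformat(),
    }

label = make_synthetic_media_label(b"<media bytes>", generator="example-gan-v1")
print(json.dumps(label, indent=2))
```

Hashing the content into the tag means a platform can detect when a label has been stripped from, or re-attached to, different bytes, which is the enforcement hook the watermarking and "notice and stay down" proposals rely on.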

8. Conclusion
Deepfakes represent a foundational challenge to truth, autonomy, and justice in the digital age. Legal systems must modernize rapidly, combining statutory reform, judicial innovation, technical standards, and international cooperation. If left unchecked, deepfakes could erode confidence not only in digital evidence but in the very notion of shared reality. The law must be agile, technologically literate, and victim-centred, anchored in both deterrence and redress.


9. FAQs
Q1. Can deepfake creators be prosecuted even when acting anonymously or offshore?
In theory, yes—using cybercrime, privacy, and IP statutes. In practice, jurisdictional and technical complexity
requires international cooperation and new procedural tools.
Q2. Are social media platforms liable for hosting deepfakes?
Depends on local law; platform liability is expanding through new statutes and judicial rulings—but “safe
harbour” typically requires platforms take swift takedown action.
Q3. What direct remedies do victims have?
Victims may seek injunctive relief (takedowns), criminal prosecution, civil damages, and official clarifications
of falsity.
Q4. Are any “protected” uses of deepfakes allowed?
Yes, for clear parody, satire, news, or art, provided they don’t cause specific, demonstrable harm or breach
consent.
Q5. What is the future of deepfake regulation?
Likely: universal labelling, stricter intermediary obligations, specialized criminal offenses, cross-border treaties, and rapid-response forensic/technical systems.
