Author: Avinash Pandey (IILM University)
To the Point:
Deepfake technology, powered by artificial intelligence (AI), has redefined how audio-visual content is created and consumed. While it holds vast potential for entertainment and education, it also poses a grave risk to personal privacy, democratic processes, public order, and national security. The absence of a dedicated legal framework in India has rendered the country vulnerable to misuse of this technology. This article critically examines India’s legal preparedness in handling deepfakes, identifies existing gaps, and proposes legal reforms to regulate this modern menace.
Use of Legal Jargon:
The article incorporates relevant legal terms such as mens rea (criminal intent), in personam liability (personal responsibility), data fiduciary, right to be forgotten, prima facie, res ipsa loquitur (the thing speaks for itself), and fair use doctrine, providing readers with precise legal insight into the emerging jurisprudence around synthetic media.
The Proof:
India has seen a concerning rise in deepfake incidents. A prominent example is the morphed video of actress Rashmika Mandanna, which went viral in 2023, raising serious concerns about digital consent and gender-based cybercrimes. In a similar vein, a deepfake featuring a prominent politician making false communal statements emerged prior to the 2024 general elections, posing a risk to electoral integrity. These examples highlight the destructive potential of deepfakes when combined with the viral nature of social media platforms, thereby exposing vulnerabilities in India’s legal system and cyber infrastructure.
Deepfakes are often used in phishing scams, reputational attacks, and misinformation campaigns. As reported by Deeptrace, a cybersecurity company, the global number of deepfake videos available online surged by 330% within just one year. While no official data is available in India, anecdotal evidence and news coverage suggest similar trends. The National Crime Records Bureau (NCRB) must begin tracking cybercrimes involving AI-generated media to provide empirical foundations for legal policymaking.
Abstract:
Deepfakes, a portmanteau of “deep learning” and “fake”, denote hyper-realistic synthetic media produced by artificial intelligence, frequently without the awareness or consent of the individual depicted. These digital manipulations pose serious challenges in legal identification, enforcement, and redressal. This article evaluates how current Indian legislation, including the Information Technology Act, 2000; the Indian Penal Code, 1860; and the Digital Personal Data Protection Act, 2023, copes with the threats posed by deepfakes. It further highlights international legislative models, judicial insights, and policy recommendations to bridge the existing gaps. The article aims to balance rapid technological advancement with the principles of digital justice, privacy, and freedom of expression.
Case Laws:
1. K.S. Puttaswamy v. Union of India (2017) 10 SCC 1
The Supreme Court acknowledged privacy as a fundamental right under Article 21. Deepfakes, especially involving intimate content or unauthorized biometric data, violate informational privacy and bodily autonomy, as envisioned in this landmark ruling.
2. Shreya Singhal v. Union of India (2015) 5 SCC 1
The striking down of Section 66A of the IT Act upheld free speech but also highlighted the vacuum in addressing harmful online content, including deepfakes that may not fit traditional definitions of hate or obscenity.
3. Avnish Bajaj v. State (NCT of Delhi) (2008)
This case, a landmark on intermediary liability, held the operators of the online marketplace Baazee.com answerable for failing to prevent the circulation of obscene content on their platform. It serves as a precedent for social media accountability in deepfake proliferation.
4. S. Khushboo v. Kanniammal (2010) 5 SCC 600
This case reinforced the idea that freedom of speech cannot be curtailed based on subjective morality, yet it also paved the way for delineating the fine line between expression and malicious falsehoods like deepfakes.
Current Legal Framework and Gaps:
1. Information Technology Act, 2000
The IT Act remains the principal legislation regulating cybercrimes in India. Section 66D, which punishes cheating by personation using a computer resource, may loosely apply to deepfakes. Sections 67 and 67A, relating to obscene and sexually explicit material, could apply when deepfakes involve non-consensual pornography. However, the Act lacks provisions that specifically address the creation, dissemination, or criminal intent behind synthetic content.
2. Indian Penal Code, 1860
Sections 292 (obscenity), 499 (defamation), 500 (punishment for defamation), and 503 (criminal intimidation) may provide limited recourse. The challenge lies in applying traditional legal concepts to AI-generated acts without identifiable human authors.
3. Digital Personal Data Protection Act, 2023
The Act addresses consent and data protection, especially relating to sensitive personal data like biometrics. Deepfakes involving facial recognition or voice cloning could breach provisions under this Act, but enforcement remains weak due to the lack of express provisions on AI-generated content.
4. Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021
These Rules require intermediaries to remove or disable access to unlawful content within 36 hours of receiving a court order or notification from the appropriate government agency. Yet, given the speed of virality, real-time detection and takedown of deepfakes remain a serious operational and technological challenge.
Comparative Jurisprudence and Global Developments:
United States: Some states, like California and Texas, have enacted laws penalizing deepfakes used to influence elections or produce explicit content without consent. The DEEPFAKES Accountability Act was introduced to mandate watermarks on synthetic media.
European Union: Under the EU AI Act, deepfakes attract specific transparency obligations — AI-generated or manipulated audio, image, and video content must be clearly disclosed and labeled as artificially generated.
China: In 2022, China issued its deep synthesis provisions (effective January 2023), requiring synthetic media to carry conspicuous labels and imposing liability on providers who fail to comply.
India can draw from these global approaches while tailoring solutions that are constitutionally sound, respecting the fundamental rights of free speech and privacy.
Proposed Reforms:
1. Statutory Definition of Deepfakes – Introduce a legal definition of deepfakes under the IT Act or a proposed AI law to distinguish between harmless synthetic media and malicious intent.
2. Dedicated Offences – Create penal provisions specific to malicious deepfakes, including imprisonment and fines, especially in cases involving defamation, fraud, or sexual exploitation.
3. Right to Digital Integrity – Codify a new right protecting individuals from unauthorized digital manipulation of their image or voice.
4. Technological Collaboration – Foster partnerships with AI companies to create open-source tools for deepfake detection and verification.
5. Cyber Forensics Capacity Building – Empower cyber police with training, infrastructure, and real-time monitoring tools to trace deepfake origin and usage.
6. Amendment in Intermediary Rules – Make proactive AI-based scanning mandatory for platforms with large user bases and integrate rapid complaint redressal mechanisms.
7. Public-Private Task Forces – Create interdisciplinary committees including technologists, legal scholars, civil society actors, and enforcement agencies to oversee AI governance.
Ethical and Social Implications:
Deepfakes blur the line between truth and fabrication, raising moral concerns around trust, consent, and authenticity. In a polarized society, misinformation powered by deepfakes can incite communal tensions, damage reputations irreversibly, and skew public perception. As instruments of gendered abuse, deepfakes are increasingly weaponized against women, infringing on their bodily autonomy and perpetuating misogyny. A robust legal framework must account for these vulnerabilities while ensuring democratic freedoms are not arbitrarily curtailed.
Moreover, the psychological toll on victims cannot be ignored. Individuals subjected to deepfake exploitation may suffer trauma, job loss, social ostracism, or reputational ruin. Without clear legal recognition, they may be left without adequate compensation or redressal mechanisms. Civil liability for emotional distress, as seen in jurisdictions like the UK and the US, should be considered in India to compensate victims meaningfully.
Conclusion:
Deepfakes represent a double-edged sword in the digital age. While they can be used for satire, cinema, or innovation, their misuse threatens personal rights, democracy, and the very nature of truth. India’s legal system, though equipped with general laws, lacks the sophistication required to deal with AI-generated threats. Legislators must act swiftly to introduce precise legal instruments, enhance enforcement capacity, and raise public awareness. The fight against deepfakes is not just legal — it is societal, ethical, and urgent.
Legal education also plays a vital role. Law schools and training institutions must include modules on digital rights, AI governance, and cyber law so that the next generation of legal professionals is prepared for the complex realities of AI-generated content. Only a multi-stakeholder and forward-thinking approach can ensure that India remains both a digital powerhouse and a bastion of rule of law.
FAQs:
Q1. Are deepfakes currently illegal in India?
While not directly criminalized, deepfakes can be prosecuted under existing IPC and IT Act provisions covering defamation, obscenity, and impersonation.
Q2. How do deepfakes affect elections and democracy?
Deepfakes can be weaponized to spread misinformation, fabricate speeches, or impersonate leaders, eroding public trust in democratic institutions.
Q3. What legal remedies exist for victims of deepfakes?
Victims can file complaints under IPC/IT Act provisions, approach cyber cells, or seek injunctions to remove content and initiate civil suits for damages.
Q4. Are tech companies liable for deepfake content?
Intermediaries enjoy safe harbor but lose it if they fail to act upon receiving notice. Liability depends on the degree of control and response time.
Q5. Can AI detect deepfakes?
Yes, to a degree. Emerging detection tools analyze inconsistencies in facial expressions, blinking rates, and audio-visual mismatches to flag synthetic content, though detection methods tend to lag behind advances in generation.
Q6. Is India considering any deepfake-specific law?
As of now, no dedicated deepfake law exists, but policy discussions are ongoing within MeitY and NITI Aayog on AI governance.
Q7. How can individuals protect themselves from being deepfaked?
Avoid sharing biometric data online, enable two-factor authentication, watermark personal videos, and stay vigilant about digital identity misuse.
Q8. Can deepfakes be used as legal evidence?
Yes, but they require rigorous authentication. Courts may admit deepfake evidence only if validated by digital forensic experts. Conversely, courts must also guard against fake videos being introduced fraudulently as real evidence.
