Lawful Legal

Digital Doppelgangers: The Legal Challenge of Deepfakes and Misinformation News in India

Author: Saif Alam, student at National Law University, Visakhapatnam

To the point

The legitimacy of democratic processes and the rule of law is directly threatened by the deepfakes and false information spreading rapidly through India’s news and social media ecosystems. Deepfakes are synthetic media generated by artificial intelligence (AI) that allow malicious actors to create realistic audio, video, or images, making it harder to distinguish fact from fabrication in public discourse. The legal response remains fragmented: because of outdated procedures and ineffective legislative action, offenders routinely escape serious consequences while public confidence is eroded. Legal analysis reveals that, owing to the absence of specific regulations, unclear evidentiary standards for digital manipulation, and protracted investigations, key democratic institutions now face harm unprecedented in scope and complexity.

Use of Legal Jargon

Legal terms such as “criminal defamation,” “electronic evidence authenticity,” and the provisions of the Information Technology Act, 2000 (IT Act) that bear on manipulated digital content are at the center of the discussion, alongside the Indian Penal Code, 1860 and the Digital Personal Data Protection Act, 2023. Earlier Supreme Court jurisprudence relied on traditional media laws and Section 66A of the IT Act, which proved insufficient against new AI-driven threats. Phrases such as “malicious misrepresentation,” “intent to mislead,” and “algorithmic accountability” now appear frequently in court arguments and legislative reform initiatives. Through recent court rulings, “public interest” and “digital due diligence” have become crucial to redefining individual rights and regulatory obligations in India’s digital sphere.

The Proof

Several recent high-profile incidents have brought the risks of deepfakes in Indian public life to light, particularly during elections and political campaigns. During the 2020 Delhi assembly election campaign, an AI-generated video surfaced showing politician Manoj Tiwari appearing to address voters in multiple languages, although the original speech had been recorded only in Hindi. This raised concerns about the unauthorized use of a leader’s likeness and the potential for misinformation to go viral. During the 2024 Lok Sabha elections, a doctored video that went viral falsely showed Home Minister Amit Shah making communal statements, leading to arrests and contentious debate about electoral integrity. In another well-known instance, manipulated videos circulated on social media falsely attributed controversial remarks to Prime Minister Narendra Modi during sensitive communal periods.
Deepfakes have also been used outside politics, in celebrity harassment and blackmail cases such as the one involving actress Rashmika Mandanna, who was harassed and defamed online through digitally altered images. Legal proceedings have been complicated by investigative agencies’ frequent lack of the technical competence needed to identify the producers and distributors of such content. According to the World Economic Forum’s Global Risks Report, published in January 2024, misinformation is seen as a greater risk to India over the next two years than infectious diseases or illicit economic activity. Judicial and parliamentary reviews have emphasized the need for new forensic procedures and faster response mechanisms to combat the weaponization of synthetic media in both public and private domains.

Abstract

The introduction of deepfakes and false information into India’s media exposes systemic flaws in the institutional and legal protections for justice and truth. In political discourse, entertainment, and digital activism, deepfakes, synthetic media that use artificial intelligence (AI) to create realistic images, audio, or video, have eroded the line separating manipulation from reality. Existing statutes such as the Information Technology Act, 2000, the Indian Penal Code, 1860, and the Digital Personal Data Protection Act, 2023 provide only limited protection against counterfeiting, defamation, and privacy invasions, and Indian law has been unable to keep pace with the technical complexity and societal impact of deepfakes. Recent court decisions, public interest litigation, and landmark Delhi High Court rulings mark a new era of judicial activism, especially in defending electoral integrity, personality rights, and dignity against deepfake harms.


Case Laws

1. Anvar P.V. v. P.K. Basheer (2014)
– This Supreme Court case set a significant precedent for the admissibility and authentication of electronic evidence, holding that electronic records are admissible only when accompanied by the certificate required under Section 65B of the Indian Evidence Act, 1872. It emphasized the importance of demonstrating the genuineness and originality of digital evidence, principles that are essential in prosecuting crimes involving deepfakes. Because deepfakes alter audio-visual material, they complicate these evidentiary standards and create problems of forensic verification and the digital chain of custody. The case underpins calls to update evidence law frameworks so that conventional evidentiary standards keep pace with modern technological realities.

2. State of Tamil Nadu v. Suhas Katti (2004)
– Known as India’s first conviction for cyber harassment under the IT Act, this case involved online defamation and the distribution of obscene electronic material. The conviction under Section 67 of the IT Act showed that offenses mediated by digital means could be prosecuted swiftly. At the same time, the verdict highlighted the limits of existing law in addressing more complex digital harms such as deepfakes, where questions of intent, jurisdiction, and technical manipulation pose difficult challenges for the criminal justice system. The case remains a crucial benchmark for understanding both the possibilities and the limitations of current legislation in regulating digital content.

3. Shreya Singhal v. Union of India (2015)
– In this landmark decision, the Supreme Court struck down Section 66A of the Information Technology Act as unconstitutional for its vagueness and its chilling effect on free speech. The Court underlined that any legislation restricting digital expression must be precise and narrowly drawn to prevent abuse. Although it did not address deepfakes specifically, the decision set the constitutional foundation for regulating digital content in India, and legislators and courts must now craft laws that tackle emerging problems such as deepfakes without violating fundamental rights. The ruling also inadvertently left a regulatory gap for “online falsehoods” such as synthetic media, which frequently escape legal scrutiny because no specific statutes are in place.

Conclusion

Deepfake manipulation and misinformation represent a serious, modern threat to India’s democratic institutions and constitutional rights. Through comprehensive, innovative statutes, the legal framework must confront the AI-driven evolution of digital communication directly and move beyond gradual reform. Accountability for digital harms requires establishing strong standards for electronic evidence, encouraging technological proficiency among litigators, and guaranteeing open investigation and speedy judicial review. To ensure that India’s fundamental democratic principles endure in this age of artificial manipulation and to preserve public trust in the political and legal systems, consistent statutory and institutional reform is required.

FAQs

1. What are deepfakes and why are they a legal concern in India?
Deepfakes are artificial intelligence (AI)-generated synthetic audio, video, or images that mimic the appearance or voice of real people, frequently used to spread false information or target specific individuals. Because they spread quickly on social media, they threaten public trust, privacy, and democratic discourse in India. The current legal framework under the IT Act, IPC, and DPDP Act does not specifically address deepfakes, which makes it difficult to remedy their particular harms and establish accountability.

2. How does existing Indian law address deepfake-related offenses?
The Information Technology Act penalizes identity theft (Section 66C), cheating by personation using a computer resource (Section 66D), and transmission of obscene digital content (Sections 67, 67A). IPC provisions on defamation (Sections 499, 500) and criminal intimidation also apply. The Digital Personal Data Protection Act requires consent for processing personal data, including biometric data used in deepfakes. However, no law explicitly targets AI-generated synthetic media, making enforcement and prosecution difficult.

3. What challenges do authorities face in prosecuting deepfake cases?
Because deepfakes involve sophisticated manipulation techniques and investigative agencies often lack forensic capabilities, proving authenticity and intent is difficult. Cross-border sharing raises jurisdictional questions, and preserving digital evidence in the face of frequent re-uploads is challenging. Action is delayed because many courts and investigative agencies lack AI forensics expertise. Additionally, platform safe harbour protections reduce the pressure on intermediaries to proactively detect or swiftly remove deepfakes.

4. How do Indian courts view freedom of speech versus regulation of deepfakes?
The Supreme Court’s 2015 Shreya Singhal judgment requires that any law restricting digital speech be precise, so as to avoid arbitrary suppression. Courts recognize that protecting people from the harms of fake content must be balanced against fundamental rights. Regulation of deepfakes may be necessary, but the laws must be clear and narrowly tailored to survive free-speech scrutiny, which complicates their drafting. Striking this balance remains a major judicial and legislative challenge in India.

5. What reforms are proposed to better tackle deepfakes in India?
Experts support specific legislation that clearly defines and punishes malicious or non-consensual deepfakes. To prevent misuse of “safe harbour” protections, changes to intermediary liability rules should require the prompt removal of verified deepfakes. It is essential that law enforcement improve its digital forensics capabilities and AI literacy. For a thorough response, public awareness initiatives and international collaboration on cross-border enforcement are also advised.

Sources

1. https://economictimes.indiatimes.com/news/india/ai-and-deepfakes-unveiling-the-dark-side-of-election-campaigns-in-india/articleshow/110169142.cms?from=mdr
2. https://www.reuters.com/world/india/dance-videos-modi-rival-turn-up-ai-heat-india-election-2024-05-16/
