THE MISINFORMATION FRONTLINE: INDIA’S LEGAL RESPONSE TO ONLINE FALSEHOODS

Author: Sourya Veer Pratap Deo, Xavier Institute of Management

Abstract


The April 2025 Pahalgam terror attack and India’s subsequent Operation Sindoor triggered not only a military standoff but a fierce digital misinformation war. Deepfakes, AI-generated images, recycled war footage, and fake narratives circulated rapidly across social media, inflaming public sentiment and distorting facts. Both Indian and Pakistani sources disseminated misleading visuals and claims — some of which were traced back to unrelated conflicts or digitally altered media.
In response, the Indian government invoked its legal framework under the Information Technology Act, 2000 and the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. It ordered content takedowns, account blocks, and banned several Pakistani YouTube news channels. Platforms like X were directed to withhold thousands of accounts spreading false narratives. However, these measures reignited constitutional debates around freedom of speech and state overreach.
A key controversy surrounded the 2023 amendment introducing a government Fact-Check Unit, empowered to flag any content related to government business as “fake or misleading.” The Supreme Court stayed the rule’s implementation in early 2024, and later that year the Bombay High Court struck it down for violating Articles 14 and 19.
Meanwhile, India promoted voluntary and collaborative approaches through initiatives like the Deepfakes Analysis Unit, encouraging public reporting and expert verification of manipulated content. While international legal models such as the U.S.’s “Take It Down Act” focus on narrow harms, India faces the challenge of striking a balance between national security, misinformation control, and the fundamental right to free expression.
This article examines how Indian law navigates that balance — through statutory powers, judicial oversight, and an evolving regulatory framework.


To the Point
The 2025 Pahalgam terror attack and India’s subsequent Operation Sindoor triggered an unprecedented digital misinformation surge, spreading rapidly across X (formerly Twitter), WhatsApp, Facebook, and YouTube.
Both Indian and Pakistani entities circulated misleading content: AI-generated images, deepfake videos, miscaptioned footage from unrelated conflicts, and repurposed old crash videos—all designed to manipulate public perception.
A fake video of a Pakistani army official admitting aircraft losses, and an AI-generated image showing Rawalpindi stadium in ruins, were widely shared before being debunked.
The Indian government invoked Section 69A of the IT Act, 2000, ordering social media platforms to block over 8,000 accounts accused of spreading disinformation during the conflict.
Pakistan-based YouTube news channels (16 in total) were banned in India for broadcasting “false narratives” and inflammatory content relating to military strikes and communal unrest.
The IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, empower the government to direct takedowns and require platforms to appoint grievance redressal officers and publish transparency reports.
A 2023 amendment attempted to create a government-run Fact-Check Unit for content related to government business. It was declared unconstitutional by the Bombay High Court in 2024, citing vague terminology and violation of Articles 14 and 19.
The government also promoted voluntary compliance, including support for the Misinformation Combat Alliance’s Deepfakes Analysis Unit, which allows citizens to report suspicious content for expert verification.
Artificial intelligence played a central role in producing synthetic media during the conflict, making detection and debunking more difficult for both users and platforms.
Internationally, countries like the United States have passed specific laws to combat deepfakes and harmful content, showing a trend toward targeted but balanced regulation.
In India, concerns remain about platform accountability, potential overreach of executive powers, and lack of procedural safeguards in proposed laws like the Broadcasting Services Bill, 2024.
Going forward, any regulation must balance misinformation control with free speech protections, maintain judicial oversight, and promote digital literacy and transparency through collaborative public-private models.

The Proof
India’s primary legal framework to regulate online misinformation is the Information Technology Act, 2000. Two key provisions form its backbone.

Section 69A empowers the central government to issue directions to block online content in the interest of sovereignty, national security, public order, or relations with foreign states.

Section 79 offers “safe harbor” protection to intermediaries (such as social media platforms), shielding them from liability for user-generated content—provided they act with “due diligence” and comply with takedown notices issued under lawful authority.

The IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 give practical effect to these provisions. They require major platforms to appoint grievance officers, publish compliance reports, and remove illegal content (e.g. hate speech, defamation, incitement) upon notice. These Rules also apply to digital news publishers.

In 2023, a controversial amendment to these Rules introduced a government-run Fact-Check Unit (FCU), empowered to label content related to government affairs as “fake or misleading.” Intermediaries were then required to either remove or flag such content—or risk losing their safe harbor protections under Section 79.

This amendment was challenged in the Bombay High Court in Kunal Kamra v. Union of India. A division bench delivered a split verdict in January 2024; in September 2024, the tie-breaking opinion held the FCU provision unconstitutional for violating Article 19(1)(a) (freedom of speech) and Article 14 (equality). The Court found the rule vague, overbroad, and lacking procedural safeguards, holding that subjective terms like “fake” and “government business” granted excessive discretionary power to the executive. The Supreme Court had earlier stayed the notification of the FCU in March 2024, noting that it raised serious constitutional concerns.

This judgment followed the landmark Shreya Singhal v. Union of India (2015) ruling, where the Supreme Court struck down Section 66A of the IT Act for being vague and disproportionately restricting online speech. That decision remains the standard for determining constitutional limits on digital speech regulation.

During the 2025 crisis, India exercised its powers under Section 69A to block over 8,000 social media accounts alleged to have spread disinformation about Operation Sindoor. The government also took action against 16 Pakistani YouTube channels accused of promoting inflammatory and misleading narratives.

While criminal provisions like Sections 153A (promoting enmity) and 505 (public mischief) of the Indian Penal Code can apply in egregious cases — prosecutions under Section 124A (sedition) remain in abeyance following the Supreme Court’s 2022 order — the emphasis post-Shreya Singhal has shifted towards content removal and digital regulation rather than criminal prosecution.

Additionally, the proposed Broadcasting Services Bill, 2024 aims to classify certain online creators as “digital news broadcasters,” subjecting them to registration and compliance with content standards—raising fresh concerns about regulation overreach.

In summary, while India has robust statutory tools to combat digital falsehoods, constitutional safeguards and judicial oversight remain vital to ensure that these powers are not misused to suppress legitimate dissent or critical journalism.


Case Laws
1. Shreya Singhal v. Union of India (2015)
This foundational Supreme Court case struck down Section 66A of the Information Technology Act, which had criminalized online speech deemed “offensive” or “annoying.” The Court held that the provision was vague, overly broad, and had a chilling effect on free speech, thereby violating Article 19(1)(a) of the Constitution. The ruling emphasized that restrictions on speech must be narrowly tailored and pass the test of reasonableness under Article 19(2).

2. Kunal Kamra v. Union of India (2024)
In this case, the Bombay High Court scrutinized the constitutional validity of the 2023 amendment to the IT Rules, which authorized a government-designated Fact-Check Unit (FCU) to independently classify online content related to “government business” as fake or misleading. The Court found the provision to be vague, lacking procedural safeguards, and prone to misuse. Accordingly, it struck down the amendment for contravening Articles 14 and 19(1)(a) of the Constitution. The judgment reaffirmed that executive control over truth assessment, without judicial safeguards, cannot be permitted in a constitutional democracy.

3. Anuradha Bhasin v. Union of India (2020)
The Supreme Court ruled that the freedom of speech and the right to carry on trade or business over the internet are constitutionally protected. This legal challenge arose amid ongoing restrictions on internet access in Jammu & Kashmir. The Court held that restrictions on internet access must be lawful, necessary, and proportionate, and must be periodically reviewed. This precedent directly informs the legality of content blocking and platform restrictions under Section 69A.

4. Facebook India v. Union of India (2022)
This Delhi High Court case addressed intermediary liability and user privacy. While the core issue involved WhatsApp traceability under the 2021 IT Rules, the case raised important questions about balancing platform obligations with fundamental rights. The matter underscored that intermediary regulation must be constitutional, transparent, and not compromise user rights arbitrarily.

Conclusion


The recent India–Pakistan conflict has spotlighted the potent dangers of digital misinformation—particularly with the advent of AI-generated content, deepfakes, and viral propaganda. This information warfare not only poses national security threats but also disrupts public trust and inflames communal tensions. In response, the Indian government has actively invoked existing cyber laws, issued blocking orders, and pushed for stricter oversight on digital platforms.
Legally, India possesses a robust toolkit under the IT Act, 2000 and the 2021 IT Rules. However, constitutional challenges—especially around freedom of expression—have placed limits on executive overreach. Courts have reiterated that regulation of speech must be narrowly defined, proportionate, and subject to judicial scrutiny.
The real challenge lies in striking a sustainable balance. On one hand, there is a legitimate need to curb falsehoods that may incite violence or compromise national interests. On the other hand, unchecked regulation risks stifling dissent, satire, and journalistic inquiry. Future policies—such as the proposed Digital India Act or Broadcasting Bill—must therefore ensure precision in scope and procedural fairness.
Going forward, a collaborative approach involving legal reforms, platform accountability, civil society engagement, and public awareness is essential. While technology has enabled the rapid spread of disinformation, it can also be harnessed for early detection and verification. India’s legal system must continue to evolve—firm in guarding both its democratic values and its digital frontiers.



FAQs


1. What legal powers does the Indian government have to block online content?
Under Section 69A of the IT Act, 2000, the government can direct intermediaries to block access to content that threatens sovereignty, public order, or national security. This provision is commonly invoked during crises.

2. Are social media platforms legally liable for user-generated misinformation?
Platforms enjoy “safe harbor” under Section 79 of the IT Act if they act with due diligence and follow takedown orders. Failure to comply can strip them of this immunity.

3. Was the government Fact-Check Unit legally valid?
The 2023 amendment to the IT Rules establishing the FCU was struck down by the Bombay High Court for being vague and unconstitutional. The Supreme Court has stayed its enforcement pending further review.

4. Can misinformation be criminally prosecuted in India?
Yes, in extreme cases. Provisions like Sections 153A, 505, or 499 of the IPC may apply. However, broad charges like sedition (Section 124A) are now rarely invoked following judicial scrutiny in earlier cases.



5. What is the Broadcasting Services Bill, 2024?
It proposes to regulate online content creators and influencers as “digital broadcasters,” requiring registration and adherence to content codes. Critics warn it may overregulate individual speech.

6. How can individuals verify content online?
Citizens can use independent fact-checking platforms or submit suspicious media to public initiatives like the Deepfakes Analysis Unit. Verifying with official sources before sharing is encouraged.

7. Is freedom of speech absolute in India?
No. While Article 19(1)(a) guarantees free speech, it can be reasonably restricted under Article 19(2) for interests like public order, decency, or security.
