Author: Shibrah Aftab Khan, a law student at the University of Kashmir.
Abstract
Algorithmic governance—the use of AI systems to automate state decisions—is reshaping India’s legal landscape. From facial recognition in policing to Aadhaar-linked welfare exclusions, these tools promise efficiency but risk eroding constitutional safeguards. This article explores how opaque algorithms threaten privacy, equality, and due process, drawing on cases like Puttaswamy and Shreya Singhal. It argues that India’s outdated laws and lack of AI regulation leave citizens vulnerable, urging reforms for transparency, accountability, and human oversight.
Introduction
In a small village in Rajasthan, a widow is denied her monthly ration because an Aadhaar-based algorithm flags her biometric data as “mismatched.” Across India, algorithms are quietly making decisions that alter lives—decisions once made by humans. As someone who has studied digital rights in the Puttaswamy and Pegasus cases, I’ve seen how technology can outpace the law. Now, as AI infiltrates governance, we must ask: Can India’s legal framework protect citizens from the biases and errors of machines?
The Rise of AI in Indian Governance
India’s turn toward algorithmic governance began with ambitious digitization projects. In policing, facial recognition systems like Delhi’s FRT and Hyderabad’s TSCOP scan crowds in real time, aiming to identify criminals. Yet studies show these tools disproportionately misidentify women and darker-skinned individuals, turning innocent citizens into suspects. Predictive policing models, built on the Crime and Criminal Tracking Network and Systems (CCTNS), analyze historical crime data to deploy officers—a practice that risks reinforcing biases against marginalized communities already over-policed for decades.
Meanwhile, Aadhaar’s integration into welfare systems has created a different crisis. In 2021, Rajasthan’s food subsidy program excluded 1.2 million families due to biometric errors or server glitches, leaving many without rations for months. Courts, too, are experimenting with AI to prioritize cases, but critics fear this could sideline vulnerable litigants whose disputes require human empathy. The common thread? Citizens harmed by these systems often have no way to challenge the algorithm’s logic—or even understand it.
Legal Issues Raised by Algorithmic Governance
The constitutional cracks in India’s AI experiment are widening. Take privacy: the Puttaswamy judgment (2017) made privacy a fundamental right, requiring state surveillance to be necessary and proportionate. Yet facial recognition systems collect data indiscriminately, scanning millions to find a handful of suspects. This dragnet approach, critics argue, fails the proportionality test, treating every citizen as a potential criminal.
Equality is another casualty. AI systems trained on biased data replicate societal prejudices. Predictive policing tools, for instance, direct officers to low-income neighborhoods based on past arrests—ignoring that over-policing, not crime rates, drives those numbers. The Supreme Court’s Navtej Singh Johar verdict (2018) condemned laws perpetuating stereotypes, but who holds algorithms accountable for doing the same?
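The feedback loop described above can be made concrete with a toy simulation. All numbers here are invented for illustration; this is not a model of any real policing system. Two areas have the same true offending rate, but one starts with an inflated arrest history, and patrols allocated in proportion to past arrests keep the skew alive:

```python
import random

random.seed(42)

# Hypothetical setup: two areas, identical true offending rates,
# but a biased history in which area A was over-policed.
TRUE_RATE = 0.05                 # same underlying rate everywhere
arrests = {"A": 120, "B": 40}    # 3:1 skew from past over-policing

for year in range(10):
    total = arrests["A"] + arrests["B"]
    for area in ("A", "B"):
        # The model sends patrols in proportion to recorded arrests...
        patrols = int(100 * arrests[area] / total)
        # ...and more patrols mean more recorded arrests,
        # even though the true rate is the same in both areas.
        arrests[area] += sum(random.random() < TRUE_RATE
                             for _ in range(patrols * 10))

print(arrests)
```

The recorded gap never closes: area A keeps receiving more patrols because it has more arrests, and keeps accumulating more arrests because it has more patrols. The data confirms the bias it was built on.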
Free speech suffers too. When Hyderabad police used FRT to monitor protests in 2022, activists reported self-censorship, fearing retribution. The Shreya Singhal judgment (2015) struck down vague internet censorship laws, but AI-driven surveillance operates in secrecy, leaving citizens guessing how their data is used.
Most alarmingly, automated decisions lack due process. If a welfare algorithm denies benefits, there’s no human to plead with, no form to contest the error. The “black box” nature of AI—what scholars call algorithmic opacity—means even judges struggle to scrutinize these systems.
Key Case Laws
India’s courts are beginning to grapple with these challenges. The Puttaswamy verdict laid the groundwork, declaring privacy a right and mandating proportionality in state actions. In 2022, the Telangana High Court took up S.Q. Masood v. State of Telangana, a PIL challenging the legality of facial recognition technology (FRT) used by the state police. The petitioner argues that without adequate legal safeguards, FRT violates privacy and enables mass surveillance—a concern echoed in the Aadhaar judgment (2018), where the Supreme Court warned against exclusionary technologies.
The recent Pegasus proceedings add another layer to this debate. In April 2025, the Supreme Court questioned, “What’s wrong if a country is using a spyware?”, emphasizing that the legality of surveillance hinges on its targets—national security versus civil society. While the Court declined to publicly disclose the technical committee’s findings (citing security risks), it noted individual grievances could be addressed. This stance underscores a recurring tension: the state’s security claims often override transparency, even as procedural safeguards under Puttaswamy demand accountability. The Court’s reluctance to confront systemic surveillance excesses, as seen in its deferral of the Pegasus hearing to July 2025, leaves citizens in limbo, reliant on patchwork remedies rather than systemic reform.
Internationally, the U.S. case State v. Loomis (Wisconsin, 2016) offers a cautionary tale. A defendant sentenced with the aid of COMPAS, a proprietary risk-assessment algorithm, argued that the tool’s secrecy denied him a fair trial. Though the court upheld the sentence, it required that judges be warned of the tool’s limitations—a safeguard India’s lawmakers have yet to embrace.
Is India Legally Ready for AI?
The short answer: no. India’s Digital Personal Data Protection Act (2023) focuses on data privacy but sidesteps algorithmic accountability. The Information Technology Act, 2000, is a relic of the dial-up era, silent on AI’s ethical dilemmas. Courts, meanwhile, often defer to the executive on technical matters, as seen in the Pegasus case (2021), where the government refused to disclose surveillance details.
To bridge this gap, experts propose a three-pronged approach. First, a “right to explanation” would let citizens demand clarity on AI decisions. Second, human oversight could prevent algorithmic errors from becoming human tragedies—imagine a welfare officer reviewing automated denials. Third, independent audits, akin to financial audits, could expose biased code before it harms marginalized groups. Above all, India needs a law regulating AI in governance, balancing innovation with constitutional values.
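A minimal sketch shows how the first two proposals could fit together in a welfare pipeline. The field names and threshold here are invented for illustration; no real scheme works exactly this way. Every automated outcome carries stated reasons (the “right to explanation”), and no denial is final without human review:

```python
from dataclasses import dataclass

# Hypothetical sketch of the safeguards proposed above; the fields and
# the 0.8 threshold are assumptions made up for this example.

@dataclass
class Decision:
    approved: bool
    reasons: list            # right to explanation: every outcome is justified
    needs_human_review: bool

def decide_benefit(biometric_score: float, threshold: float = 0.8) -> Decision:
    if biometric_score >= threshold:
        return Decision(True, ["biometric match above threshold"], False)
    # Human oversight: an automated denial is never final. It is queued
    # for a welfare officer, with the machine's stated reasons attached.
    return Decision(False,
                    [f"biometric score {biometric_score:.2f} below {threshold}"],
                    needs_human_review=True)

d = decide_benefit(0.65)
print(d.approved, d.needs_human_review)  # False True
```

The design choice worth noting is that the explanation and the review flag are part of the decision itself, not an afterthought: an auditor (the third proposal) can inspect the recorded reasons across thousands of denials to look for systematic bias.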
Conclusion
In 2020, a farmer in Jharkhand died of starvation after Aadhaar glitches blocked his ration card. His story is a grim reminder: when algorithms govern, human lives hang in the balance. India’s legal framework, built for a pre-digital age, must evolve to ensure technology serves justice—not the other way around. As the Puttaswamy Court affirmed, dignity is non-negotiable. No one should lose their rights because a machine got it wrong.
FAQs
- What is algorithmic governance?
It’s the use of AI systems by governments to automate decisions in areas like policing, welfare, and public services.
- Are AI tools like facial recognition legal in India?
Currently, there’s no specific law regulating FRT. Courts are reviewing its constitutionality in cases like S.Q. Masood v. State of Telangana.
- What problems can AI cause in government decision-making?
Bias, lack of transparency, errors with no appeal process, and chilling effects on free speech.
- Has any court in India looked into this issue yet?
Yes. The Telangana High Court is hearing a challenge to facial recognition in S.Q. Masood v. State of Telangana, while the Supreme Court’s Aadhaar and Puttaswamy judgments set key privacy principles. The Pegasus case, now scheduled for July 2025, highlights ongoing tensions between surveillance and rights.
- What can be done to make AI use in governance safer?
Enact laws mandating transparency, human oversight, and bias audits, and empower citizens to challenge algorithmic decisions.
References
1. Justice K.S. Puttaswamy (Retd.) vs. Union of India (2017) 10 SCC 1
Supreme Court of India judgment affirming the right to privacy as a fundamental right.
Link: https://indiankanoon.org/doc/91938676/
2. Shreya Singhal vs. Union of India (2015) 5 SCC 1
Landmark case that struck down Section 66A of the IT Act, reinforcing free speech rights.
Link: https://indiankanoon.org/doc/110813550/
3. Navtej Singh Johar vs. Union of India (2018) 10 SCC 1
Supreme Court ruling that decriminalized homosexuality and emphasized dignity and equality.
Link: https://indiankanoon.org/doc/168671544/
4. Facial Recognition in India – Internet Freedom Foundation
A detailed analysis of India’s use of FRT in policing and civil liberties concerns.
Link: https://internetfreedom.in/the-facial-recognition-project/
5. Aadhaar and Welfare Exclusion – The Wire
Article highlighting exclusion from welfare schemes due to Aadhaar-linked biometric errors.
Link: https://thewire.in/rights/aadhaar-biometric-exclusion-welfare
6. Predictive Policing and CCTNS – Vidhi Centre for Legal Policy
Analysis of CCTNS and ethical concerns surrounding predictive policing.
Link: https://vidhilegalpolicy.in/research/the-cctns-and-the-future-of-predictive-policing-in-india/
7. Algorithmic Governance and Due Process – Internet Governance Project
General concepts and concerns about black-box AI and legal accountability.
Link: https://www.internetgovernance.org/2020/07/14/algorithmic-governance-and-the-rule-of-law/
8. Pegasus Row: Supreme Court Says Won’t Disclose Report That Touches Country’s Security, Sovereignty – The Hindu
Report on the Supreme Court’s stance regarding disclosure in the Pegasus surveillance case.
Link: https://www.thehindu.com/news/national/pegasus-row-supreme-court-says-wont-disclose-report-that-touches-countrys-security-sovereignty/article69504285.ece