By Sanjith Gurikar, a student at PES University
To the Point
India stands on the brink of an artificial intelligence (AI) revolution. From predictive policing and facial recognition to automated welfare distribution and credit scoring, AI is rapidly being integrated into the machinery of governance and private enterprise. But while the promises of efficiency, scale, and objectivity dominate the national discourse, the quiet shadow of algorithmic injustice has gone largely unexamined in law and policy. As machine-learning algorithms increasingly determine who gets hired, who gets a loan, or who gets surveilled, the absence of constitutional safeguards and regulatory oversight raises pressing concerns.

India’s legal system, particularly its constitutional framework, has yet to evolve to address the unique challenges posed by AI. While jurisdictions such as the European Union have adopted a risk-based regulatory approach emphasizing transparency, fairness, and accountability, India’s AI governance remains vague, fragmented, and largely technocratic. The absence of explicit protections against automated discrimination, the lack of a statutory right to explanation, and the failure to subject AI systems to constitutional scrutiny leave serious blind spots in the regulatory architecture. This article examines these blind spots, explores the constitutional implications of unregulated algorithmic systems, and calls for a rights-based, accountable AI regulatory regime rooted in India’s constitutional ethos.
Use of Legal Jargon
Algorithmic Discrimination refers to the biased or unequal treatment of individuals or groups by AI systems, whether because of biased training data or flawed model design. Such treatment can violate the constitutional guarantee of equality before the law under Article 14.
Automated Decision-Making (ADM) encompasses decisions made solely by algorithmic processes, often without human oversight. In the absence of procedural fairness, such decisions raise concerns under Article 21 of the Constitution, which guarantees the right to life and personal liberty.
Profiling involves analyzing data to infer personal characteristics, often used in surveillance and commercial targeting. It can violate privacy rights under Article 21, especially when done without consent or transparency.
The Right to Explanation, commonly derived from the European Union’s GDPR (Article 22, read with Articles 13–15 and Recital 71), requires that individuals receive meaningful information about the logic behind automated decisions that significantly affect them. India lacks an equivalent right.
Due Process is a constitutional principle requiring that any action affecting rights must be fair, just, and reasonable. This is central to evaluating the legality of AI systems used in governance and criminal justice.
Non-Delegation Doctrine refers to the constitutional principle that essential legislative functions cannot be delegated to private entities—an issue when the government outsources public decision-making to private AI vendors.
The Proof
India has begun deploying AI across sectors critical to citizen rights and welfare. Yet, these systems are often opaque, poorly regulated, and lacking in democratic accountability. One striking example is the use of Facial Recognition Systems (FRS) by police forces and law enforcement agencies. In 2020, the Delhi Police deployed FRS during the anti-CAA protests, claiming it was used to identify violent protestors. However, there was no legislative backing for this technology, no public disclosure of accuracy rates, and no safeguards against misuse. Multiple studies globally have shown that facial recognition algorithms have significantly higher error rates for women and people with darker skin tones—raising constitutional concerns under Articles 14 and 21.

Another example is AI-based credit scoring, now used by banks and fintech firms to determine loan eligibility using alternative data—like social media behavior and location data. These systems often function as black boxes, with no way for users to understand how their scores are computed or challenge the outcome. This violates the principles of procedural fairness, informed consent, and access to remedy.
AI is also being piloted in the criminal justice system. The Telangana State Police developed an AI model to predict repeat offenders, and other states are exploring predictive policing tools. These tools risk preemptive punishment and reinforce structural biases in criminal data—particularly against minorities and marginalized communities.

In welfare delivery, algorithmic systems are used to detect “ghost beneficiaries” or streamline ration distribution. But the Aadhaar-linked Public Distribution System has excluded legitimate beneficiaries due to fingerprint mismatches or system errors. Without a right to explanation or immediate remedy, such exclusions amount to algorithmic disenfranchisement.
Case Laws and Regulatory Insights
Despite the constitutional stakes, Indian jurisprudence on AI is nascent. However, several Supreme Court judgments provide doctrinal foundations that can be extended to algorithmic governance.
1. Justice K.S. Puttaswamy v. Union of India (2017)
The landmark judgment recognized the right to privacy as a fundamental right under Article 21. It emphasized informational self-determination and required that data collection and processing by the state meet the tests of legality, necessity, and proportionality. AI systems that process personal data without consent or accountability would likely fail this test.
2. Maneka Gandhi v. Union of India (1978)
This case expanded Article 21 to include procedural due process, meaning that any law affecting personal liberty must be just, fair, and reasonable. Algorithmic systems that make adverse decisions without notice, hearing, or remedy violate this principle.
3. Anuradha Bhasin v. Union of India (2020)
In the context of internet shutdowns, the Court underscored the importance of proportionality and judicial oversight in restrictions that affect fundamental rights. These principles are directly applicable to the use of AI in surveillance and policing.
4. Aadhaar Judgment (2018)
While upholding the Aadhaar framework, the Court mandated data minimization, purpose limitation, and strong security protocols. These principles are essential for AI systems processing large-scale personal data.
On the regulatory front, India has yet to enact a comprehensive AI law. NITI Aayog released a report titled “Responsible AI for All,” which advocates voluntary ethical principles but lacks binding obligations. The Digital Personal Data Protection Act, 2023, while a positive step, does not include a right to explanation, nor does it prohibit solely automated decision-making. In contrast, the European Union’s AI Act imposes binding rules on “high-risk” AI systems, including transparency, human oversight, and accountability requirements. India lacks an equivalent rights-based framework.
Critique
The most glaring flaw in India’s AI regulation is the absence of a constitutional lens. While technocratic policy papers mention fairness and non-discrimination, they rarely articulate how these are to be enforced through existing legal rights. This omission allows both public and private actors to deploy AI without constitutional scrutiny. There is also no ex-ante testing of algorithms for bias or disparate impact. In the U.S. and EU, “algorithmic impact assessments” are being explored to evaluate risks before deployment. India lacks such protocols. As a result, structural inequalities coded into data—such as caste, gender, or income disparities—get replicated and amplified without oversight.

Another blind spot is accountability. Many AI systems used in public services are developed by private vendors under opaque procurement contracts. When errors occur, it is unclear who is liable—the vendor, the government agency, or the algorithm itself. This creates a legal vacuum and violates the constitutional principle that executive action must be accountable.
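An ex-ante bias test of the kind an algorithmic impact assessment would mandate can be sketched in a few lines. The check below borrows the “four-fifths rule” from US employment-discrimination practice purely as an illustrative threshold; the group labels and approval counts are hypothetical assumptions, not data from any Indian lender or agency.

```python
# Minimal disparate-impact check for an automated approval system.
# Compares per-group approval rates and flags any ratio below 0.8
# (the illustrative "four-fifths" threshold). All numbers are hypothetical.

def approval_rate(approved: int, applied: int) -> float:
    """Share of applicants in a group that the system approved."""
    return approved / applied

# Hypothetical outcomes from an automated credit-scoring model.
outcomes = {
    "group_x": {"approved": 60, "applied": 100},
    "group_y": {"approved": 30, "applied": 100},
}

rates = {g: approval_rate(c["approved"], c["applied"]) for g, c in outcomes.items()}
impact_ratio = min(rates.values()) / max(rates.values())

print(f"disparate impact ratio = {impact_ratio:.2f}")
if impact_ratio < 0.8:
    print("Below the four-fifths threshold: hold deployment for human review.")
```

A pre-deployment gate of this kind is what an ex-ante test means in practice: the disparity is caught and remedied before the system affects rights, rather than litigated after the harm has occurred.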
The absence of a right to explanation leaves individuals powerless against AI-driven decisions. Whether it’s being denied a government benefit or being profiled by police software, citizens have no legal mechanism to seek reasons, corrections, or redress. Finally, the emphasis on voluntary guidelines over binding law reflects a regulatory mindset that prioritizes innovation over rights. While AI can improve governance and service delivery, without enforceable safeguards, it risks undermining the very constitutional values that governance is meant to uphold.
Conclusion
India is at a constitutional crossroads. As artificial intelligence increasingly shapes critical aspects of governance, welfare, finance, and justice, it is imperative to recognize that technological decisions are legal decisions. Algorithms are not neutral—they encode the values, assumptions, and biases of their designers and training data. Left unchecked, they can systematically violate rights, deepen inequality, and erode democratic accountability.
A comprehensive AI regulation framework must be rooted in constitutional values. This means ensuring transparency, fairness, proportionality, and accountability in every AI deployment that affects rights. It requires not just data protection, but decision protection—safeguards against unjust outcomes, biased systems, and opaque logic.
India should adopt a rights-based regulatory model that includes:
A statutory right to explanation and contestation;
Mandatory algorithmic impact assessments for high-risk applications;
Binding standards for non-discrimination and fairness;
Independent regulatory oversight with technical and legal capacity;
Robust mechanisms for public consultation and judicial review.
In the age of algorithms, constitutional silence is complicity. If India wants to harness AI for social good without sacrificing liberty, it must legislate with clarity, regulate with rigor, and adjudicate with courage.
FAQs
Q1: What is algorithmic injustice?
Algorithmic injustice refers to biased or discriminatory outcomes produced by AI systems, often due to flawed data, model design, or lack of oversight.
Q2: Does Indian law regulate AI?
Not comprehensively. While there are ethical principles and sectoral guidelines, there is no binding AI regulation in India as of 2025.
Q3: Can AI violate constitutional rights?
Yes. AI systems used in public decision-making can violate rights to equality, privacy, and due process if not properly regulated.
Q4: What legal remedies exist against unfair AI decisions?
Currently, remedies are unclear. There is no statutory right to explanation or contestation of AI decisions, though constitutional challenges may be possible.
Q5: What reforms are needed?
India needs a rights-based AI law that includes transparency, bias audits, human oversight, legal accountability, and constitutional compliance.