Algorithmic Discrimination and the Indian Constitution: A New Frontier for Article 14

Author: Advika Dwivedi, Christ Academy Institute of Law

Abstract


The rapid integration of Artificial Intelligence (AI) into both public governance and private enterprise has introduced a novel yet under-explored threat to the constitutional guarantee of equality under Article 14 of the Indian Constitution: algorithmic discrimination. AI systems—trained on large datasets—often inherit and amplify historical biases embedded in data, leading to decisions that, while appearing neutral, disproportionately disadvantage marginalized groups. From facial recognition technologies in policing to automated hiring tools in the corporate sector, the seemingly objective logic of algorithms can mask profound inequalities, thereby raising urgent constitutional questions.

This article critically examines the implications of such bias through the lens of Article 14, which enshrines the principles of equality before law and equal protection of laws. It argues that algorithmic discrimination constitutes a new form of indirect or systemic discrimination, necessitating an evolved judicial interpretation of Article 14 that accommodates technological realities. The paper delves into global jurisprudence and policy responses—particularly from the European Union and the United States—to contextualize the emerging legal challenges in India.

Furthermore, the article explores the Indian judiciary’s current preparedness to deal with algorithmic decision-making and its discriminatory outcomes, pointing to a gap in legal frameworks and precedent. It proposes a doctrinal expansion of Article 14 to incorporate algorithmic accountability, advocating for standards such as transparency, explainability, and fairness in automated systems. The article also recommends a constitutional due process requirement for algorithmic systems deployed by the State, particularly in areas affecting civil liberties, social benefits, and criminal justice.

By bridging the fields of constitutional law and technology policy, this article seeks to contribute to the growing discourse on digital constitutionalism in India. It concludes by calling for proactive judicial recognition and legislative intervention to safeguard the right to equality in the age of algorithms, warning that inaction risks normalizing a new, opaque form of digital discrimination.

Introduction


The advent of Artificial Intelligence (AI) and machine learning technologies has revolutionized numerous sectors, offering efficiency, consistency, and scalability in decision-making. From predictive policing and welfare allocation to employment screening and financial creditworthiness assessments, algorithmic systems are increasingly influencing decisions that have a significant impact on individual rights. However, the reliance on historical datasets and opaque algorithmic models has introduced an insidious threat—algorithmic discrimination. Unlike traditional forms of discrimination that may stem from overt prejudice, algorithmic discrimination is often covert, hidden within layers of statistical modeling and complex coding. This renders it harder to detect, regulate, and contest, raising serious questions of constitutionality and fairness.

This article critically examines the implications of algorithmic discrimination through the lens of Article 14 of the Indian Constitution, which guarantees the right to equality before the law and equal protection of laws. It investigates the doctrinal elasticity of Article 14 to accommodate new-age discrimination stemming from technological processes. Drawing on comparative jurisprudence and constitutional principles, it argues that the Indian judiciary and legislature must recalibrate their frameworks to address the challenges posed by algorithmic decision-making.

Understanding Algorithmic Discrimination: A New Age Injustice
Algorithmic discrimination refers to the unjust or biased treatment of individuals or groups arising from the outputs of automated systems. Such prejudicial outcomes typically stem from biases embedded in training data, flawed assumptions in algorithm design, or discriminatory implementation practices, and they may track race, gender, caste, religion, language, geography, or other protected attributes. Because algorithms learn from datasets to make predictions, decisions, or classifications, a model trained on biased or discriminatory data will reproduce those biases in its outputs. Importantly, such discrimination is rarely overt or intentional, yet it can be as harmful to historically marginalized groups as traditional forms of discrimination.
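To make the mechanism concrete, the following minimal sketch (hypothetical data and feature names, not any real system) shows how a model that never sees a protected attribute can still learn it from a correlated “neutral” proxy feature and reproduce historically biased decisions:

```python
# A hypothetical sketch: a hiring model trained on biased historical labels.
# All data and numbers are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (1 = advantaged group), never shown to the model.
group = rng.integers(0, 2, size=n)

# A "neutral" proxy feature correlated with group membership
# (e.g., postal code or dialect markers).
proxy = group + rng.normal(0, 0.5, size=n)

# True merit is identically distributed across both groups.
merit = rng.normal(0, 1, size=n)

# Historical hiring decisions favoured the advantaged group regardless of merit.
hired = (merit + 1.5 * group + rng.normal(0, 0.5, size=n)) > 1.0

# The model sees only the facially neutral features...
X = np.column_stack([merit, proxy])
model = LogisticRegression(max_iter=1000).fit(X, hired)

# ...yet its predicted selection rates diverge sharply by group,
# because the proxy lets it reconstruct the protected attribute.
preds = model.predict(X)
for g in (0, 1):
    print(f"predicted selection rate, group {g}: {preds[group == g].mean():.2%}")
```

The point of the sketch is that removing the protected attribute from a model's inputs does not remove the bias; the model recovers it from whatever correlates with it.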

In predictive policing, for instance, tools trained on historical enforcement data may disproportionately target communities that were subjected to excessive law enforcement scrutiny in the past, thereby reinforcing existing stereotypes. Similarly, in recruitment, algorithms trained on prior hiring data may prefer candidates who resemble past hires, systematically excluding women, persons with disabilities, or those from underrepresented castes or regions. Such outcomes, while technically neutral, replicate structural discrimination under the guise of efficiency and objectivity.

What makes algorithmic discrimination particularly problematic is its opacity. AI models, especially those involving deep learning or neural networks, often function as “black boxes”—producing outputs without intelligible reasoning or transparent criteria. This obscures accountability and limits an individual’s ability to contest decisions that adversely affect them, making such practices susceptible to legal and constitutional scrutiny. These biases, when codified into decision-making processes by AI, may lead to discrimination against historically marginalized groups, violating their right to equality under Article 14.

Article 14 and the Constitutional Promise of Equality

Article 14 of the Indian Constitution lays down the foundational principle of equality, stating that “the State shall not deny to any person equality before the law or the equal protection of the laws within the territory of India.” The jurisprudence developed under Article 14 encompasses both formal equality—the uniform application of laws to all persons—and substantive equality, which recognizes that differential treatment is sometimes necessary to address existing inequalities.

The classic test under Article 14, developed in State of West Bengal v. Anwar Ali Sarkar (AIR 1952 SC 75), requires that any classification made by the State must be based on an intelligible differentia and must bear a rational nexus to the object sought to be achieved. This doctrine of reasonable classification, while foundational, has evolved over time.

In E.P. Royappa v. State of Tamil Nadu (1974) 4 SCC 3, Justice Bhagwati introduced a broader dimension to Article 14, famously observing that “equality and arbitrariness are sworn enemies.” This marked a paradigm shift from mere classification analysis to an enquiry into whether the State’s action is arbitrary, capricious, or unreasonable. The doctrine of arbitrariness was further expanded in Maneka Gandhi v. Union of India (1978) 1 SCC 248, where the Court held that any procedure that is arbitrary or unfair is violative of Article 14.

Thus, Indian constitutional law has moved beyond formalistic equality towards a more substantive, dynamic understanding. The question then arises: can algorithmic opacity and bias be construed as arbitrariness within the meaning of Article 14?

Algorithmic Arbitrariness as Constitutional Unfairness
The principle of non-arbitrariness under Article 14 offers a potent tool for scrutinizing algorithmic decision-making. When the State adopts AI systems to make determinations—be it for welfare eligibility, policing, or public sector hiring—those decisions must adhere to principles of fairness, transparency, and reasonableness. If the algorithmic process is opaque, unexplainable, or systematically biased, it fails the test of procedural fairness and becomes arbitrary.

Moreover, algorithmic systems that disproportionately harm disadvantaged groups may amount to indirect discrimination. Indian equality jurisprudence has only recently begun to recognize indirect discrimination expressly, notably in Lt. Col. Nitisha v. Union of India (2021), and the spirit of Article 14 is broad enough to accommodate it fully. In other jurisdictions, such as the United Kingdom and the European Union, indirect discrimination refers to seemingly neutral policies or practices that have a disproportionate adverse effect on members of a protected group, unless justified by a legitimate aim pursued in a proportionate manner.

A hypothetical Indian example: an AI-based teacher recruitment tool that prefers applicants whose language proficiency is scored against a dataset dominated by urban dialects. Such a tool may systematically disadvantage rural candidates, thereby violating the constitutional principle of substantive equality.

Thus, algorithmic decision-making can amount to both arbitrariness and indirect discrimination, necessitating constitutional redress under Article 14.
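One widely used way to quantify such indirect discrimination is the “four-fifths” (80%) rule from US employment practice (29 C.F.R. § 1607.4): if the selection rate of a protected group is less than 80% of the rate of the most favoured group, adverse impact is presumed. A short sketch, applied to hypothetical numbers from the teacher-recruitment example above:

```python
# The "four-fifths" disparate-impact screen. All figures are hypothetical.
def disparate_impact_ratio(selected_a: int, total_a: int,
                           selected_b: int, total_b: int) -> float:
    """Ratio of the lower selection rate to the higher selection rate."""
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical outcomes: rural candidates selected at 9%, urban at 30%.
ratio = disparate_impact_ratio(selected_a=90, total_a=1000,
                               selected_b=300, total_b=1000)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.30, far below the 0.80 line
```

A ratio of 0.30 would be treated as strong prima facie evidence of adverse impact, shifting the burden to the deployer to justify the practice.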


Comparative Jurisprudence and Global Trends
Comparative analysis offers valuable insights into how other jurisdictions are addressing algorithmic bias. The European Union’s Artificial Intelligence Act classifies AI systems into different risk categories and mandates strict compliance obligations for “high-risk” systems, including transparency, human oversight, and non-discrimination. Similarly, Article 22 of the General Data Protection Regulation (GDPR) gives individuals the right not to be subject to decisions based solely on automated processing that produce legal or similarly significant effects.

In the United States, the use of algorithms in employment and credit decisions is subject to scrutiny under anti-discrimination statutes such as Title VII of the Civil Rights Act and the Equal Credit Opportunity Act. Several enforcement agencies have issued guidance to ensure algorithmic compliance with civil rights laws.

In the UK, courts have invoked the Equality Act 2010 to examine the discriminatory impact of automated decision-making. R (Bridges) v. South Wales Police [2020] EWCA Civ 1058 stands as a landmark example: the Court of Appeal held that the police’s use of live facial recognition technology without sufficient safeguards violated privacy rights and breached the public sector equality duty.

Case Studies and Potential Discrimination in Algorithmic Decision-Making
Algorithmic systems are increasingly being adopted across various sectors, from hiring practices to criminal justice, healthcare, and finance. However, biases in these algorithms have resulted in significant instances of discrimination. Below are key case studies highlighting such discriminatory practices and the impact of algorithmic bias:

1. Case Study: Discriminatory Hiring Algorithms (Amazon’s AI Recruiting Tool)
One of the most cited examples of algorithmic bias in hiring is Amazon’s experimental AI recruiting tool. In 2018, it was reported that Amazon had developed an AI system to assist in reviewing job applications, and had abandoned it after the tool was found to be biased against female candidates.

Discriminatory Outcome: The algorithm was trained on resumes submitted to Amazon over a ten-year period. Since the majority of applicants were male, the system learned to favor resumes that used language more commonly associated with male-dominated roles, and it reportedly penalized resumes containing the word “women’s” (as in “women’s chess club captain”). As a result, the tool was less likely to recommend female candidates for technical roles, especially those involving leadership positions.
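The mechanics are easy to reproduce in miniature. The sketch below (invented resumes and labels, not Amazon’s system) trains a bag-of-words screener on biased historical hiring labels and then inspects the weight the model assigns to the gendered token:

```python
# A hypothetical resume screener trained on biased labels. Invented data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "software engineer java leadership",        # historical hire
    "systems developer python leadership",      # historical hire
    "software engineer women's chess club",     # historically rejected
    "developer java women's coding society",    # historically rejected
]
hired = [1, 1, 0, 0]

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# CountVectorizer tokenizes "women's" to "women"; inspect its learned weight.
weights = dict(zip(vec.get_feature_names_out(), model.coef_[0]))
print(f"weight for 'women': {weights['women']:.3f}")  # negative
```

The negative weight is a pure artefact of the biased training labels: the token predicts rejection only because past rejections happened to correlate with it.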

Constitutional Implications: If such discriminatory hiring practices were conducted by a government entity or a company providing public services in India, they could be challenged under Article 14 of the Indian Constitution for violating the right to equality. This case highlights how algorithmic systems can perpetuate gender biases in a way that violates both formal and substantive equality.

2. Case Study: Predictive Policing and Racial Bias (PredPol in the U.S.)
Predictive policing algorithms, such as PredPol, have been employed by police departments in the U.S. to predict where crimes are likely to occur and to allocate resources more efficiently. However, these algorithms have faced significant criticism for racial bias.

Discriminatory Outcome: PredPol’s predictions were found to disproportionately target African American and Latino communities in certain areas, despite evidence that crime rates in these communities were not necessarily higher than in others. The algorithm relied on historical crime data, which itself reflected biases in police reporting, arrests, and convictions, often leading to over-policing of minority neighborhoods.
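The self-reinforcing dynamic can be shown with a toy simulation (not PredPol’s actual model; all numbers are invented). Two districts have identical true crime rates, but the historical record is skewed; patrols follow the record, and the record follows the patrols:

```python
# Feedback loop: biased records -> patrol allocation -> biased records.
TRUE_RATE = 100  # actual crimes per year in each district (identical)
recorded = {"district_A": 60, "district_B": 40}  # skewed historical data

for year in range(1, 6):
    total = sum(recorded.values())
    # Allocate 100 patrol units in proportion to recorded crime.
    patrols = {d: 100 * recorded[d] / total for d in recorded}
    # Only patrolled crime gets recorded: detection tracks patrol share.
    recorded = {d: TRUE_RATE * patrols[d] / 100 for d in recorded}
    print(f"year {year}:", {d: round(v) for d, v in recorded.items()})
```

The 60/40 split persists indefinitely: the data never “discovers” that the districts are identical, because it only measures where the police look.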

Constitutional Implications: The use of biased predictive policing algorithms could be challenged in the U.S. under the Equal Protection Clause of the Fourteenth Amendment, which prohibits the government from discriminating based on race. In India, such practices could be scrutinized under Article 14 for being arbitrary or discriminatory, especially if deployed by law enforcement agencies.

This case highlights how AI systems can inadvertently perpetuate discriminatory patterns that already exist in society, leading to racial discrimination and violating the right to equality and non-arbitrariness.

3. Case Study: Discriminatory Credit Scoring Systems (AI in Finance)
In the finance industry, AI-based credit scoring systems are being increasingly used by banks and lending institutions to assess the creditworthiness of individuals. However, these systems have been found to replicate biases against minorities, low-income individuals, and certain geographic groups.

Discriminatory Outcome: Studies and investigative reports have found that AI-based credit scoring algorithms tend to penalize people from lower-income backgrounds and minority communities, even when their creditworthiness is comparable to that of other applicants. These biases often trace back to historical data reflecting systemic economic inequalities and lending practices that historically discriminated against these groups.

Constitutional Implications: In India, these systems could be challenged on the grounds of discriminatory treatment under Article 14. The use of biased data, especially when it has disparate impacts on marginalized communities, could be viewed as arbitrary and unjust. Moreover, it could also be seen as a violation of substantive equality, where individuals from disadvantaged backgrounds are unfairly penalized.

4. Case Study: Facial Recognition Technology (Clearview AI)
Facial recognition technology, offered by vendors such as Clearview AI, has been increasingly adopted for security, surveillance, and identification purposes. However, its widespread use has raised concerns about racial bias and privacy violations.

Discriminatory Outcome: One of the key issues with facial recognition systems is their lower accuracy when identifying individuals with darker skin tones, particularly women of color. For example, the MIT Media Lab’s 2018 Gender Shades study found that commercial facial analysis systems misclassified darker-skinned women at error rates of up to 34.7%, compared with under 1% for lighter-skinned men.
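The method behind such findings is a per-subgroup audit: rather than reporting one aggregate accuracy figure, errors are broken down by demographic group. A minimal sketch with invented records (a real audit would use a labelled benchmark dataset):

```python
# Per-subgroup error-rate audit. The records below are invented.
from collections import defaultdict

# (subgroup, prediction_was_correct) for each test image
results = [
    ("lighter_male", True), ("lighter_male", True), ("lighter_male", True),
    ("darker_female", True), ("darker_female", False), ("darker_female", False),
]

errors, totals = defaultdict(int), defaultdict(int)
for subgroup, correct in results:
    totals[subgroup] += 1
    errors[subgroup] += (not correct)

for subgroup in totals:
    rate = errors[subgroup] / totals[subgroup]
    print(f"{subgroup}: error rate {rate:.0%} ({errors[subgroup]}/{totals[subgroup]})")
```

Disaggregated reporting of this kind is precisely what exposed disparities that aggregate accuracy figures had concealed.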

Constitutional Implications: The use of biased facial recognition technology could violate both Article 14 and Article 21 of the Indian Constitution, which guarantees the right to life and personal liberty. In particular, it could lead to arbitrary or discriminatory treatment of individuals based on their race or gender, and it may also infringe on privacy rights. The potential for misidentification could lead to wrongful arrests or unnecessary surveillance, which would disproportionately affect marginalized communities.

The disparate impact of biased facial recognition systems could be challenged under constitutional law, arguing that such systems violate the fundamental rights to equality and liberty guaranteed by the Constitution.

5. Case Study: Healthcare Algorithms and Racial Disparities (Optum’s Health Prediction System)
In 2019, it was discovered that Optum, a health services company, had developed an algorithm to predict which patients would benefit most from extra medical care. However, the system was found to underestimate the healthcare needs of Black patients.

Discriminatory Outcome: The algorithm relied on healthcare spending as a predictor of health needs. Since Black patients tend to have less access to healthcare and thus lower overall medical spending, the system misidentified them as less in need of care compared to white patients with similar health conditions. This resulted in a disparity in the allocation of medical resources, particularly in disadvantaged communities.
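The flaw is one of proxy choice, and it can be illustrated with a toy ranking (invented figures, not Optum’s algorithm): ranking patients by past spending rather than by actual need selects the wrong patients whenever spending reflects access to care rather than sickness.

```python
# Proxy problem: spending as a stand-in for medical need. Invented data.
patients = [
    # (name, true_need_score, past_spending_usd)
    ("patient_1", 9, 3_000),   # very sick, little access to care
    ("patient_2", 4, 8_000),   # moderately sick, well-insured
    ("patient_3", 8, 2_500),   # very sick, little access to care
    ("patient_4", 3, 7_000),   # mildly sick, high utilisation
]

def top2(ranking):
    return [name for name, _, _ in ranking[:2]]

by_spending = sorted(patients, key=lambda p: p[2], reverse=True)
by_need     = sorted(patients, key=lambda p: p[1], reverse=True)

print("extra care by spending proxy:", top2(by_spending))  # patients 2 and 4
print("extra care by actual need:  ", top2(by_need))       # patients 1 and 3
```

Under the spending proxy, the two sickest patients are passed over entirely, which is the pattern the 2019 study documented at scale.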

Constitutional Implications: If similar AI systems were used by government healthcare providers in India, they could be challenged under Article 14 for discriminating against marginalized communities on the basis of historical or socio-economic disadvantage. The use of such systems may be deemed arbitrary, as spending-based proxies indirectly produce unequal treatment along socio-economic lines, undermining the constitutional promise of equality and non-discrimination.

Conclusion


Algorithmic discrimination presents a formidable challenge to the constitutional guarantee of equality. Even in the absence of intentional bias, these systems can produce discriminatory outcomes through biased historical data and structural design flaws. The Indian constitutional framework, particularly Article 14, is sufficiently broad and dynamic to respond to this new frontier.
