
Algorithmic Policing in India: A Constitutional Analysis of Surveillance, Bias and Due Process

Author: Gargi Koreti


To the Point

Algorithmic policing represents a paradigm shift in law enforcement, characterized by the pervasive use of automated systems, advanced artificial intelligence (AI), and sophisticated data analytics. These technologies are employed by law enforcement agencies to execute a range of functions, including the predictive modeling of crime hotspots, the automated identification of potential suspects, the mass monitoring of public spaces, and assistance in critical operational and strategic decision-making. In India, this technological deployment is manifest through several high-profile initiatives. Key examples include the pan-India integration platform, the Crime and Criminal Tracking Network and Systems (CCTNS), the deployment of Automated Facial Recognition Systems (AFRS) in cities, the implementation of predictive policing software, and the establishment of extensive, large-scale CCTV surveillance networks.

However, the rapid and widespread deployment of these powerful technologies is occurring within what can only be described as a profound regulatory vacuum. Crucially, there is an absence of any comprehensive legislative framework that specifically governs the ethical use, technical standards, and judicial oversight of algorithmic decision-making when implemented by police authorities. This significant legal gap generates a host of critical constitutional and ethical questions. These include, but are not limited to, the legality and proportionality of mass data collection practices, the requirements for informed consent from citizens whose data is being processed, the technical accuracy and potential for error in complex algorithms, the establishment of clear accountability mechanisms for system errors and human rights violations, and, most fundamentally, the far-reaching impact of these tools on the fundamental rights guaranteed to every citizen.

The Indian Constitution provides the foundational legal guarantees that are now challenged by algorithmic policing. Article 21 enshrines the fundamental right to life and personal liberty, which has been interpreted by the Supreme Court to include the right to privacy. Article 14 ensures equality before the law and equal protection of the laws, a principle directly challenged by biased algorithms. Furthermore, Article 19 guarantees various freedoms, including the freedom of movement and expression, which can be chilled by the perception of ubiquitous surveillance. Algorithmic policing, particularly in the absence of robust legislative and judicial oversight, poses a substantial risk of violating these core constitutional guarantees. It enables disproportionate and often indiscriminate surveillance that can affect entire populations, risks reinforcing and amplifying systemic biases already present in policing data and society, and, critically, threatens to bypass the procedural safeguards and checks and balances that are essential components of the constitutional requirement of due process. The lack of transparency in how these systems operate further complicates an individual's ability to challenge an adverse decision, effectively undermining the democratic principle of the rule of law.

Use of Legal Jargon

Algorithmic policing operates at the intersection of constitutional law, administrative law, and emerging techno-legal jurisprudence. The core constitutional doctrines implicated include:
Substantive Due Process under Article 21
Procedural Fairness and Natural Justice
Reasonable Classification and Non-Arbitrariness under Article 14
Proportionality Test for Restrictions on Fundamental Rights
Doctrine of Privacy as a Fundamental Right
Rule of Law and Accountability of State Action

The absence of statutory oversight leads to arbitrariness, overbreadth, and opacity, which are antithetical to constitutional governance. Automated systems functioning as decision-support tools often escape judicial scrutiny due to proprietary algorithms and lack of explainability, resulting in a violation of the principles of audi alteram partem and reasoned decision-making.

The Proof

1. Surveillance and the Right to Privacy
The deployment of facial recognition systems and predictive surveillance tools enables continuous and indiscriminate monitoring of individuals in public spaces. These systems collect biometric data without informed consent, purpose limitation, or adequate safeguards.
In Justice K.S. Puttaswamy v. Union of India (2017), the Supreme Court unequivocally recognised the right to privacy as a fundamental right under Article 21. The Court laid down a three-pronged test for any infringement:
Legality
Legitimate state aim
Proportionality
Most algorithmic policing initiatives in India fail the first prong itself due to the absence of a clear legislative mandate. Executive-driven surveillance projects, implemented through administrative orders, lack statutory backing and therefore violate constitutional requirements.
2. Algorithmic Bias and Article 14
Predictive policing algorithms rely on historical crime data. In India, such data often reflects systemic biases against marginalised communities, including Scheduled Castes, Scheduled Tribes, religious minorities, and economically weaker sections. When biased data is fed into algorithms, the resulting outputs reinforce discriminatory policing patterns.
Article 14 prohibits arbitrary state action and mandates equality before law. The Supreme Court has consistently held that arbitrariness is antithetical to equality. Algorithmic systems that disproportionately target specific communities without transparent criteria violate the doctrine of reasonable classification.
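To make the feedback-loop concern concrete, the sketch below is a minimal, hypothetical Python illustration; the incident counts are invented and the logic is deliberately simplified, so it describes no system actually deployed by Indian law enforcement. It shows how ranking areas purely on historical records, and then recording more incidents wherever more patrols are sent, entrenches whatever skew already exists in the data.

```python
# Hypothetical illustration of a predictive-policing feedback loop.
# All figures are invented; "recorded_incidents" stands in for historical
# records that already over-represent heavily policed areas.

from collections import Counter

# Toy historical data: recorded incidents per area (not true offence rates).
recorded_incidents = Counter({"Area A": 120, "Area B": 40, "Area C": 35})

PATROLS_PER_CYCLE = 10       # patrols available each cycle
DETECTIONS_PER_PATROL = 2    # extra incidents recorded per patrol deployed

def predict_hotspots(history, top_n=1):
    """Rank areas by past records alone - the core of many 'predictive' tools."""
    return [area for area, _ in history.most_common(top_n)]

for cycle in range(1, 6):
    hotspots = predict_hotspots(recorded_incidents)
    for area in hotspots:
        # Concentrating patrols in the predicted hotspot generates more recorded
        # incidents there, regardless of underlying crime rates elsewhere.
        recorded_incidents[area] += PATROLS_PER_CYCLE * DETECTIONS_PER_PATROL
    print(f"Cycle {cycle}: hotspot={hotspots}, records={dict(recorded_incidents)}")
```

Even if the other areas have comparable underlying offence rates, the model never "sees" them, because enforcement attention, and therefore data, keeps flowing to the area it already ranks highest. This is the sense in which a facially neutral input can yield discriminatory outputs.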
3. Due Process and Automated Decision-Making
Algorithmic tools are increasingly used for suspect identification, risk assessment, and crowd monitoring. However, individuals affected by these decisions are rarely informed about:
The existence of such systems
The data used
The logic behind algorithmic outputs
This opacity undermines procedural due process. A person flagged as a suspect by an algorithm has no meaningful opportunity to challenge the decision, thereby violating principles of natural justice.
4. Chilling Effect on Fundamental Freedoms
Mass surveillance through AI-enabled policing creates a chilling effect on freedoms guaranteed under Article 19, including freedom of speech, expression, and peaceful assembly. Constant monitoring discourages dissent and lawful protest, weakening democratic participation.

Abstract

The integration of artificial intelligence and data-driven technologies is rapidly reshaping modern policing worldwide, including in India. Indian law enforcement increasingly employs algorithmic tools such as predictive policing, facial recognition, automated surveillance, and crime-mapping technologies. While these innovations promise greater efficiency, crime reduction, and optimized resource allocation, they simultaneously introduce serious constitutional challenges. Specifically, issues like mass surveillance, inherent algorithmic bias, lack of transparency, and the weakening of procedural safeguards threaten the core tenets of the Indian Constitution. This article undertakes a constitutional analysis of algorithmic policing in India, primarily focusing on the rights to privacy, equality, and due process. By examining current practices, the relevant legal landscape, and established judicial precedents, the paper underscores the critical and immediate need for regulatory frameworks. These safeguards are essential to ensure that technological progress does not come at the expense of constitutional morality.

Case Laws
1. Justice K.S. Puttaswamy v. Union of India (2017)
The “North Star” of Indian privacy law. The Supreme Court held that privacy is a fundamental right. Any algorithmic surveillance must pass the Triple Test:
Legality: Existence of a law.
Need: Legitimate State aim.
Proportionality: A rational nexus between the objects sought and the means adopted to achieve them.
2. Maneka Gandhi v. Union of India (1978)
Established that the “procedure established by law” under Article 21 must be fair, just, and reasonable, not arbitrary. If an algorithm flags a person for preventive action (for example, under Section 151 CrPC or its counterpart provision in the BNSS), the opacity of that algorithm offends this constitutional standard of fairness.
3. State of Kerala v. N.M. Thomas (1976)
While a case on reservations, it established that equality (Article 14) is “proportional” and “substantive.” Using biased datasets that target specific castes/religions via “Predictive Policing” constitutes Indirect Discrimination, as it disproportionately impacts certain groups despite being “facially neutral.”
4. Vinit Kumar v. CBI (2019)
The Bombay High Court emphasized that unauthorized surveillance is a gross violation of privacy, reinforcing that “security of the state” cannot be a blanket excuse to bypass procedural safeguards in digital monitoring.
5. Pending: IFF v. Union of India (The AFRS Challenge)
Currently before the courts, this petition challenges the pan-India implementation of Facial Recognition without a specific legal framework, arguing it creates a “chilling effect” on the Right to Freedom of Assembly (Article 19(1)(b)).

Conclusion

Algorithmic policing in India presents a significant dilemma: it promises modernization and efficiency but simultaneously imperils the nation’s constitutional values. Without specific legislation, robust oversight, and mechanisms for accountability, these algorithmic tools risk devolving into instruments of widespread surveillance and structural discrimination.

Although the Indian Constitution accommodates technological progress, it demands that all state actions, including technologically-aided policing, strictly adhere to constitutional principles. Therefore, any framework for algorithmic policing must be fundamentally built upon the pillars of transparency, explainability, proportionality, and accountability.

To reconcile innovation with constitutionalism, India urgently requires:
Dedicated Legislation: To regulate algorithmic decision-making specifically in law enforcement.
Independent Oversight: To monitor the deployment and impact of these technologies.
Mandatory Audits: To ensure the fairness and accuracy of algorithms (an illustrative sketch follows this list).
Data Protection: To safeguard individual privacy.
Judicial Review: To allow for the legal scrutiny of automated systems.
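By way of illustration only, the following is a minimal, hypothetical Python sketch of the kind of check a mandatory fairness audit might begin with: comparing the rate at which a system flags members of different groups in an invented audit log. Real audits would require access to the system's internals and far richer metrics; the 0.8 threshold below is the informal "four-fifths rule" used in disparate-impact analysis, not a standard prescribed by Indian law.

```python
# Hypothetical fairness-audit sketch: compare flag rates across groups
# using an invented audit log of algorithmic decisions.

audit_log = [
    {"group": "Group 1", "flagged": True},
    {"group": "Group 1", "flagged": True},
    {"group": "Group 1", "flagged": False},
    {"group": "Group 2", "flagged": True},
    {"group": "Group 2", "flagged": False},
    {"group": "Group 2", "flagged": False},
    {"group": "Group 2", "flagged": False},
]

def flag_rate(log, group):
    """Share of records for `group` that the system flagged."""
    records = [entry for entry in log if entry["group"] == group]
    return sum(entry["flagged"] for entry in records) / len(records)

rate_1 = flag_rate(audit_log, "Group 1")
rate_2 = flag_rate(audit_log, "Group 2")

# Ratio of the lower flag rate to the higher one; values below 0.8 are a
# common informal signal of possible disparate impact.
ratio = min(rate_1, rate_2) / max(rate_1, rate_2)
print(f"Flag rates: {rate_1:.2f} vs {rate_2:.2f}; disparity ratio = {ratio:.2f}")
if ratio < 0.8:
    print("Potential disparate impact - warrants further review.")
```

Such a check cannot by itself establish discrimination, but it shows why audit access to decision logs, rather than a vendor's assurances, is the precondition for meaningful oversight.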

FAQs

Q1. What is algorithmic policing?
Algorithmic policing refers to the use of AI, machine learning, and data analytics by police to predict crime, identify suspects, and conduct surveillance.
Q2. Is algorithmic policing legal in India?
There is no comprehensive law regulating algorithmic policing in India. Most initiatives operate through executive action, raising constitutional concerns.
Q3. How does algorithmic policing affect privacy?
It involves large-scale data collection, including biometric data, often without consent or safeguards, thereby infringing the right to privacy.
Q4. Can algorithms be biased?
Yes. Algorithms trained on biased historical data can reinforce discrimination against marginalised communities.
Q5. What safeguards are needed?
Clear legislation, transparency, accountability, judicial oversight, and compliance with constitutional principles are essential safeguards.
