Author: Kshiraj R, a student at RV University
To the Point
What happens when your neighbourhood chaiwallah is flagged as a potential criminal not because he’s actually committing a crime but because a machine learning model says so? This article welcomes you to the world of predictive policing, where artificial intelligence meets criminal law, and where sometimes the only thing “artificial” is the fairness. This article probes whether predictive policing aligns with constitutional mandates under Indian law, or whether we’re handing over our civil liberties to a glorified spreadsheet.
Abstract
Predictive policing, the algorithm-powered cousin of traditional law enforcement, promises to forecast crimes before they happen. Think of it as Minority Report, minus Tom Cruise and the budget. While data science geeks celebrate its efficiency, constitutional lawyers raise a skeptical eyebrow and ask: what about due process? This article examines whether algorithmic policing upholds fundamental rights like equality, privacy, and liberty, or whether it just gives bias a digital makeover.
The Proof
Predictive policing systems are like digital fortune tellers. They analyse historical crime data to predict future incidents, identifying either hotspots or persons of interest. However, this approach is rarely as neutral as its source code claims. Historical crime data, as we know, reflects human decisions: whom to police, whom to arrest, and often, whom to ignore. So when you train an algorithm on such data, it’s a bit like teaching a parrot politically incorrect jokes: it will repeat them perfectly, but don’t expect fairness.
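To make that feedback loop concrete, here is a minimal, purely illustrative sketch in Python (hypothetical wards and arrest counts, not any deployed system): a naive “hotspot” model trained only on past arrest records simply hands back the neighbourhoods that were already policed the most.

```python
# Minimal illustrative sketch (hypothetical wards and numbers, not any deployed
# system): a "hotspot" model trained only on past arrest counts ranks the
# most-policed areas as the riskiest, creating a feedback loop.
historical_arrests = {
    "Ward A (heavily patrolled)": 120,
    "Ward B (heavily patrolled)": 95,
    "Ward C (rarely patrolled)": 12,
    "Ward D (rarely patrolled)": 9,
}

def predict_hotspots(arrest_counts, top_k=2):
    """Naive 'prediction': tomorrow's hotspots are yesterday's arrest leaders."""
    ranked = sorted(arrest_counts.items(), key=lambda kv: kv[1], reverse=True)
    return [ward for ward, _ in ranked[:top_k]]

print(predict_hotspots(historical_arrests))
# ['Ward A (heavily patrolled)', 'Ward B (heavily patrolled)']
# More patrols -> more recorded arrests -> higher "risk" score -> even more patrols.
```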
In India, cities like Hyderabad and Delhi have rolled out AI-driven surveillance systems. The CCTNS (Crime and Criminal Tracking Network and Systems) allows for centralized data collection, but there’s a catch: there are more rules for running a WhatsApp group these days than for running these algorithmic systems. Without legislative backing, we’re entering a digital dystopia where rights are optional and oversight is more of a polite suggestion.
Use of Legal Jargon
Let’s put on our legal robes. First, the doctrine of due process under Article 21 of the Indian Constitution insists that no person shall be deprived of life or liberty except according to procedure established by law. Unfortunately, a “procedure established by machine” doesn’t quite qualify. Second, the presumption of innocence, arguably the most romantic notion in criminal law, is threatened when algorithms start issuing “pre-crime alerts.” One might wonder if the algorithm writes horoscopes on the side.
Third, the principle of proportionality under the Puttaswamy framework requires that any infringement of fundamental rights be backed by law, be necessary, and be the least restrictive means of achieving its goal. Predictive policing, which some fear has the potential to flag individuals based on zip codes and hairstyles, may not make the cut. Moreover, the opacity of so-called “black-box” AI models means no one, not even their creators, can fully explain their output. It’s like giving a robot a badge and then pretending it has ethics. Finally, Article 14’s mandate of non-arbitrariness is tested when marginalized groups are disproportionately flagged. It’s as if the algorithm skipped its constitutional law class entirely.
Case Laws and Legal Analysis
In Justice K.S. Puttaswamy (Retd.) v. Union of India (2017) 10 SCC 1, the Supreme Court famously held that privacy is not just a luxury for the well-to-do; it is a fundamental right. The Court’s test of legality, legitimate state aim, and proportionality now serves as the constitutional gauntlet for any state intrusion. Predictive policing tools that harvest personal data without consent or statutory backing are unlikely to satisfy this test, no matter how sophisticated their algorithms.
In Selvi v. State of Karnataka (2010) 7 SCC 263, the Court ruled that the involuntary administration of narcoanalysis, polygraph, and BEAP tests violates Articles 20(3) and 21. If the State cannot compel answers through chemical or mechanical means, letting an opaque algorithm “guess” criminality without a person’s knowledge or consent raises even more serious red flags, both constitutionally and ethically.
In State of Uttar Pradesh v. Rajesh Gautam (2003) 5 SCC 631, the Supreme Court explained that a valid preventive detention order cannot rest on mere suspicion or vibes. The detaining authority must apply its mind to concrete material, documents or facts of real probative value, and must be fully satisfied that detention is necessary to prevent the individual from acting in a way prejudicial to public order. Although preventive detention does not demand proof beyond reasonable doubt, it does require a rational nexus between a person’s past conduct, or credible intelligence, and the likelihood of future harm. By contrast, many predictive policing systems generate risk scores based on statistical correlations rather than individualized evidence. Because Rajesh Gautam insists on objective grounds and a focused, case-specific assessment, any algorithm that merely assigns probabilistic danger levels without linking them to admissible facts would fail India’s constitutional standard for preventive detention.
In State v. Loomis (881 N.W.2d 749, Wis. 2016), the Supreme Court of Wisconsin upheld the use of the COMPAS risk-assessment tool at sentencing but issued stern warnings about its opaque logic. The Court made clear that COMPAS can inform, but never supplant, a judge’s individualized reasoning. The decision was essentially, “We’ll allow it for now, but don’t get cocky.” Given India’s robust emphasis on procedural fairness and transparent reasoning, our judiciary is unlikely to be as accommodating toward black-box systems.
In Navtej Singh Johar v. Union of India (2018) 10 SCC 1, the Court decriminalized consensual same-sex relationships and affirmed that the Constitution does not tolerate state-sanctioned bigotry, algorithmic or otherwise. If predictive policing tools disproportionately target minority communities because of biased data, they fail not only technically but morally and constitutionally, running afoul of Article 14 (equality), Article 15 (non-discrimination), and Article 21 (right to life and personal liberty) of the Indian Constitution.
In comparison to foreign jurisdictions, various countries are tiptoeing through the legal minefield of predictive policing, some more cautiously than others. In the United Kingdom, Durham Constabulary’s Harm Assessment Risk Tool (HART) was designed to classify arrestees into High, Medium, or Low risk of reoffending within two years using a random-forest model. During initial trials (2014–2016), HART achieved approximately 90 percent accuracy on held-out test data. However, an independent evaluation in 2017 by Durham’s own analytics team found that when HART was applied to broader, real-world populations, its performance dropped by 5–10 percent and it exhibited disproportionately high false-positive rates in economically deprived postcode areas. Removing postcode as a predictor improved fairness metrics (reducing bias against socioeconomically disadvantaged groups) but lowered overall accuracy from roughly 90 percent to around 82 percent. British legal scholars and human-rights advocates have pointed out that using postcode data can infringe Article 8 of the Human Rights Act (right to privacy) and Article 14 (non-discrimination), since postcode correlates closely with socioeconomic status and minority population density. In short, HART’s advertised accuracy in pilot phases did not fully translate once deployed “live,” and concerns under the U.K. Human Rights Act about covert bias and privacy intrusion became front-and-centre.
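For readers curious what “removing postcode as a predictor” actually involves, the sketch below is a hedged illustration only, built on synthetic data and a generic scikit-learn random forest rather than HART’s actual code or dataset. It trains the same kind of model with and without a postcode feature and compares accuracy; the point is that the proxy feature does much of the predictive work, which is precisely the fairness worry.

```python
# Illustrative sketch (synthetic data, not HART's model or data): dropping a proxy
# feature such as postcode typically reduces measured accuracy, because the proxy
# was doing much of the predictive work.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 2000
postcode_deprived = rng.integers(0, 2, n)                 # 1 = deprived postcode area
prior_arrests = rng.poisson(1 + 2 * postcode_deprived)    # policing history tracks postcode
age = rng.integers(18, 60, n)

# Synthetic "reoffended" label that itself partly reflects biased recording.
reoffended = (rng.random(n) < 0.2 + 0.3 * postcode_deprived).astype(int)

X_full = np.column_stack([postcode_deprived, prior_arrests, age])
X_no_postcode = np.column_stack([prior_arrests, age])

for name, X in [("with postcode", X_full), ("without postcode", X_no_postcode)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, reoffended, random_state=0)
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    print(name, "accuracy:", round(accuracy_score(y_te, model.predict(X_te)), 2))
# Expect some accuracy drop without postcode; what disappears is the proxy,
# not the bias already baked into prior_arrests.
```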
In the European Union, the Artificial Intelligence Act (formally adopted in 2024) classifies predictive policing as a “high-risk AI” application and, in some forms, outright prohibits it. Under this regulation, any system that assesses or predicts a person’s risk of committing a criminal offense based solely on profiling or personal traits is banned. At the very least, high-risk law-enforcement AI must meet rigorous ex ante requirements: impact assessments, human-in-the-loop oversight, transparency obligations, continuous fairness audits, and robust documentation. Although most provisions of the EU AI Act apply only from August 2026, its legislative compass for democratic oversight is already set, something Indian jurisdictions currently lack.
In contrast, the United States remains a cautionary tale. The COMPAS algorithm, widely used in sentencing across multiple states, was shown to be racially biased in a 2016 ProPublica investigation: Black defendants were more likely to be flagged “high risk” of reoffending even when they did not reoffend, while white defendants were more likely to be given a “low risk” score despite similar recidivism rates. Despite these findings, courts, as in State v. Loomis (881 N.W.2d 749, Wis. 2016), upheld COMPAS’s use, albeit with strong caveats about transparency and judicial oversight. In the wake of these developments, American scholars have championed “algorithmic due process” as a new legal doctrine, insisting that any algorithmic decision-maker affecting fundamental interests must disclose its data inputs, allow challenges to its reasoning, and ensure meaningful human review.
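ProPublica’s central finding was about asymmetric error rates rather than overall accuracy. A self-contained sketch of that style of audit, using entirely hypothetical records rather than the COMPAS dataset, looks like this:

```python
# Illustrative audit (hypothetical records, not COMPAS data): compare false-positive
# rates across two groups. A tool can look "accurate" overall while wrongly flagging
# one group as high risk far more often than the other.
def false_positive_rate(records):
    """Share of people who did NOT reoffend but were still flagged high risk."""
    did_not_reoffend = [r for r in records if not r["reoffended"]]
    wrongly_flagged = [r for r in did_not_reoffend if r["flagged_high_risk"]]
    return len(wrongly_flagged) / len(did_not_reoffend) if did_not_reoffend else 0.0

records = [
    {"group": "A", "flagged_high_risk": True,  "reoffended": False},
    {"group": "A", "flagged_high_risk": True,  "reoffended": False},
    {"group": "A", "flagged_high_risk": False, "reoffended": False},
    {"group": "A", "flagged_high_risk": True,  "reoffended": True},
    {"group": "B", "flagged_high_risk": False, "reoffended": False},
    {"group": "B", "flagged_high_risk": False, "reoffended": False},
    {"group": "B", "flagged_high_risk": True,  "reoffended": True},
    {"group": "B", "flagged_high_risk": False, "reoffended": True},
]

for group in ("A", "B"):
    subset = [r for r in records if r["group"] == group]
    print(f"Group {group} false-positive rate: {false_positive_rate(subset):.2f}")
# Group A false-positive rate: 0.67   (flagged high risk despite not reoffending)
# Group B false-positive rate: 0.00
```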
Taken together, these global developments show a growing consensus: predictive policing may remain part of law enforcement toolkits, but only if it is carefully regulated, continually reviewed, and reconciled with constitutional guarantees. Otherwise, tomorrow’s cops might indeed end up being nothing more than code with a badge.
Conclusion
Predictive policing is like an enthusiastic intern: eager to help, but inexperienced, running on intuition, and prone to errors, biases, and the occasional breach of fundamental rights. While it may help allocate police resources more efficiently and save time, its unchecked use risks replacing human bias with machine bias, equally dangerous, just harder to cross-examine. For a country committed to constitutional supremacy, legal accountability, and human dignity, predictive policing must pass through the legislative microscope, not sneak in through the backdoor of technological enthusiasm.
FAQs
Q1. What is predictive policing?
As discussed in the article, predictive policing is the use of algorithms, artificial intelligence, and big-data analytics by law enforcement to anticipate or predict potential criminal activity. It forecasts either crime-prone locations or the individuals likely to commit crimes. While it aims to enhance efficiency and save time, it also raises concerns about reinforcing bias, profiling, and a lack of accountability.
Q2. Is predictive policing legal in India?
There is no specific law authorizing predictive policing in India; existing tools like CCTNS and AI-based surveillance are used under general police powers, without explicit legal backing. This creates a grey zone that may infringe on fundamental rights, especially under Articles 14, 19, and 21 of the Constitution of India. Courts haven’t yet ruled definitively on this issue, but the legality of such systems remains questionable without clear legislative sanction.
Q3. What are the main concerns with predictive policing?
Key concerns include: (a) algorithmic bias, where past policing data reinforces existing prejudices; (b) lack of transparency, where citizens don’t know how or why they’re flagged; and (c) constitutional violations, especially of privacy and due process. Without regulation, such systems risk becoming tools of digital discrimination.
Q4. What rights do individuals have if flagged by predictive policing?
Fundamental rights under Article 21 (right to life and personal liberty) and Article 14 (equality) provide protection. Citizens can approach the High Courts or the Supreme Court through a writ petition if they suspect wrongful targeting. However, the opacity of these systems means people often don’t know they’ve been flagged, making enforcement difficult.
Q5. What reforms are needed?
India needs a dedicated law regulating the use of AI in policing, ensuring transparency, auditability, and accountability, especially now that AI tools have been folded into day-to-day police work and become something the workforce depends on. Independent oversight bodies should be established, algorithmic impact assessments should be conducted before any algorithm is brought into the mainstream, and public awareness mechanisms must be introduced. The end goal should be to modernize policing without compromising fundamental rights.
