The Role of AI Surveillance: Does It Threaten the Right to Privacy?


Author: Anirudh Gupta, Prestige Institute of Management and Research


To the Point
Artificial Intelligence (AI) has radically transformed the nature and extent of surveillance worldwide. With capabilities such as facial recognition, predictive policing, real-time video analytics, and big-data profiling, AI has enabled the development of surveillance systems that are not only more effective but also more invasive than ever before. When these technologies are coupled with state infrastructure, they enable round-the-clock monitoring, automated decision-making, and the profiling of individuals, often without their knowledge or consent.
In India, AI integration in surveillance is visible in initiatives such as the Automated Facial Recognition System (AFRS) used by law enforcement agencies, and the large-scale use of Aadhaar, a biometric identity program covering more than a billion individuals. The technology is increasingly employed to track citizens across public and private domains, raising serious concerns about transparency, data privacy, and civil liberties.
The constitutional significance of such surveillance is immense. In the landmark verdict in Justice K.S. Puttaswamy v. Union of India (2017), the right to privacy was affirmed as a fundamental right under Article 21 of the Constitution of India. The judgment laid down a robust framework for assessing State intrusions into privacy, anchored in the principles of legality, necessity, and proportionality.
However, the deployment of AI-based surveillance technologies often lacks a clear statutory basis, judicial oversight, or adequate safeguards against misuse. This creates a tension between the State’s interest in ensuring national security and public safety, and the individual’s right to privacy, autonomy, and dignity.
This piece critically examines how the unregulated deployment of AI for surveillance can undermine constitutional safeguards and democratic oversight. It surveys the existing legal environment, pivotal judicial pronouncements, and the shortcomings of current data protection law, notably the Digital Personal Data Protection (DPDP) Act, 2023. Finally, it advocates a rights-based approach to surveillance policy, ensuring that AI technologies are used in a manner that is lawful, transparent, and respectful of individual rights.


Abstract
The growing use of Artificial Intelligence (AI) in surveillance systems marks a remarkable shift in the way states observe, regulate, and engage with their citizens. In India, facial recognition systems, biometric identification, and data analytics, especially those built around the Aadhaar architecture, are being used by both state and private actors to improve security and the delivery of services. Yet this technological transformation raises multifaceted legal and ethical issues, particularly concerning the right to privacy enshrined under Article 21 of the Indian Constitution.
This article critically analyzes the legal consequences of AI-based surveillance in the Indian context, specifically in relation to the application of facial recognition technology (FRT) and Aadhaar-based identification systems. It questions how such technologies, when applied in the absence of a robust statutory regime or adequate procedural protections, might result in mass surveillance, profiling, and exclusion.
The central point of the discussion is the Supreme Court’s landmark decision in Justice K.S. Puttaswamy v. Union of India (2017), in which a nine-judge bench unanimously declared privacy a fundamental right inherent in life and personal liberty. The article discusses how this judgment established a three-fold test of legality, necessity, and proportionality, which any invasion of privacy must meet. Despite this jurisprudential framework, the practical implementation of AI surveillance often escapes scrutiny owing to a lack of transparency, oversight, and enforceable accountability mechanisms.
Additionally, the article analyzes the Digital Personal Data Protection Act, 2023, and assesses how far it goes in regulating state surveillance and AI-driven data processing. Although the Act introduces concepts such as data fiduciaries, consent, and grievance redressal, it is inadequate in addressing surveillance by public authorities and thus falls short of the constitutional requirements laid down in Puttaswamy.
Through policy analysis and legal critique, the article identifies the imperative for substantial legislative reform to bring AI surveillance practice into conformity with constitutional values and democratic norms. It advocates a rights-oriented, privacy-protective approach to regulating AI that ensures technological advancement does not come at the expense of fundamental rights.


Use of Legal Jargon
The application of Artificial Intelligence (AI) in surveillance must be tested against constitutional jurisprudence, in particular the doctrine of proportionality as explained by the Supreme Court in Justice K.S. Puttaswamy v. Union of India (2017). The ruling prescribes that any State action encroaching on the constitutional right to privacy must pass a tripartite test: (i) there must be a law authorizing the action; (ii) the action must pursue a legitimate State objective; and (iii) the means employed must be proportionate, i.e., the least restrictive means of achieving that objective.
Here, AI-based surveillance technologies such as Facial Recognition Technology (FRT), biometric identification, and predictive policing systems, particularly when linked with Aadhaar or other national databases, plainly amount to “processing of personal data” within the meaning of the Digital Personal Data Protection (DPDP) Act, 2023. Such processing involves sensitive personal information, including biometric and demographic data, and squarely engages the individual’s “informational autonomy”: the facet of privacy that enables a person to control the collection, use, and sharing of their personal information.
The lack of enabling legislation specifically governing AI surveillance heightens the Article 21 concerns. Without well-defined legal standards, mass surveillance through facial recognition easily slides into “function creep”, where information gathered for one lawful purpose is reused for unrelated and potentially unlawful purposes. This not only violates the principle of purpose limitation embedded in data protection law but also invites disproportionate and discriminatory targeting, particularly of vulnerable communities.
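To make the principle of purpose limitation concrete, consider a minimal Python sketch. It is purely illustrative: the class, field names, and purposes are assumptions of this article, not drawn from the text of the DPDP Act or any deployed system. Data is tagged with the purpose declared at collection, and any attempt to reuse it for a different purpose is refused.

    # Illustrative sketch of a purpose-limitation check. The purposes and
    # field names are hypothetical, not taken from any statute or system.

    class PurposeViolation(Exception):
        pass

    class PersonalRecord:
        def __init__(self, subject_id: str, data: dict, declared_purpose: str):
            self.subject_id = subject_id
            self.data = data
            self.declared_purpose = declared_purpose  # fixed at collection time

        def access(self, requested_purpose: str) -> dict:
            # "Function creep" is, mechanically, the absence of this check:
            # data collected for one purpose silently serving another.
            if requested_purpose != self.declared_purpose:
                raise PurposeViolation(
                    f"collected for {self.declared_purpose!r}, "
                    f"requested for {requested_purpose!r}"
                )
            return self.data

    record = PersonalRecord("subj-001", {"face_template": "..."},
                            declared_purpose="ticket_fraud_check")
    record.access("ticket_fraud_check")        # permitted: same purpose
    # record.access("protest_identification")  # would raise PurposeViolation

In real systems the constraint would have to live in law and institutional process, not merely in code, but the sketch shows how small the conceptual requirement is.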
Moreover, AI systems, especially those employing black-box methods, operate without explainability or transparency, contravening the canons of accountability in administrative law. Individuals targeted by such opaque surveillance mechanisms may be denied any access to the reasons behind their profiling or monitoring, violating “decisional privacy”, another facet of Article 21 that protects personal choice and autonomy.
Additionally, the DPDP Act, 2023, although it provides a legislative foundation for the protection of personal data, extends broad exemptions to State instrumentalities on grounds of national security and public interest, whereby surveillance programs may be insulated from judicial scrutiny. This heightens the risk of arbitrary State action, infringing the constitutional guarantees of equality under Article 14 and procedural fairness under Article 21.
In summary, absent a strong, transparent, and narrowly tailored legislative framework reflecting constitutional privacy principles, AI-based surveillance mechanisms pose a serious threat to the rule of law, due process, and personal liberty. Continued deployment of such technologies without adequate checks could normalize a surveillance state at odds with India’s constitutional and democratic tradition.


The Proof
The increasing application of Facial Recognition Technologies (FRT) in India is a classic example of how AI-based surveillance is being undertaken without adequate legal protections, institutional oversight, or accountability frameworks. A prominent example is the Automated Facial Recognition System (AFRS) deployed by the Delhi Police, which has been used to identify people during public protests, including the anti-CAA (Citizenship Amendment Act) protests. The use of AFRS, frequently in public places without notice or consent, violates the privacy regime established by the Supreme Court in Justice K.S. Puttaswamy v. Union of India (2017), which held that privacy is not a luxury for a select few but a fundamental right inherent to human dignity and freedom.
The absence of specific legislative authorization for the use of AFRS is a cause for concern under the “legality” limb of the doctrine of proportionality. There is no overarching law governing the collection, storage, retention, and processing of facial recognition data by law enforcement. Citizens are therefore subjected to pervasive and indiscriminate surveillance without statutory protections such as time-bound data deletion, independent oversight, or legal remedies.
The Aadhaar scheme, India’s biometric identity system, has also been at the center of surveillance-related privacy controversies. While the Supreme Court in Justice K.S. Puttaswamy (Aadhaar) v. Union of India (2018) upheld the constitutional validity of the Aadhaar scheme for the delivery of subsidies and benefits, it struck down Section 57 of the Aadhaar Act, which had enabled private companies to demand Aadhaar information for authentication. The Court noted that such extensive access infringed the right to privacy and allowed for function creep, where data obtained for one use is applied to unrelated purposes.
In reality, however, Aadhaar remains embedded in fields as varied as policing, banking, telecom services, and even school enrollment, often without meaningful user consent or independent judicial review. For example, law enforcement authorities in some states have allegedly used Aadhaar-based biometric authentication to track and profile suspects and detainees, practices that could amount to unauthorized and disproportionate surveillance.
An additional concern stems from the opaque, algorithmic character of the AI systems applied to surveillance. These systems tend to operate as “black boxes”, in which neither the underlying decision-making logic nor the rate of errors is disclosed to the public. That opacity defeats accountability, particularly when AI-driven decisions materially affect individuals. For instance, facial recognition false positives can result in wrongful detentions, while algorithmic bias can disproportionately affect marginalized groups, reinforcing existing patterns of exclusion and discrimination.
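Why false positives translate into wrongful detentions at scale is a matter of simple arithmetic. The short Python calculation below is purely illustrative; every figure in it is an assumption chosen for demonstration, not data from any deployed Indian system.

    # Illustrative base-rate arithmetic for facial recognition matching.
    # All figures are hypothetical assumptions, used only for demonstration.

    crowd_size = 50_000          # people scanned at a public gathering
    on_watchlist = 20            # people in the crowd actually on a watchlist
    false_positive_rate = 0.001  # system wrongly flags 0.1% of non-matches
    true_positive_rate = 0.90    # system correctly flags 90% of real matches

    true_alerts = on_watchlist * true_positive_rate
    false_alerts = (crowd_size - on_watchlist) * false_positive_rate
    precision = true_alerts / (true_alerts + false_alerts)

    print(f"True alerts:  {true_alerts:.0f}")    # 18
    print(f"False alerts: {false_alerts:.0f}")   # ~50 innocent people flagged
    print(f"Alerts that are correct: {precision:.0%}")  # ~26%

Even at a false-positive rate of just 0.1 percent, roughly three out of every four alerts in this scenario point at innocent people, which is why disclosure of error rates and mandatory human review before any coercive action matter so much.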
Most critically, citizens have no effective means to audit, contest, or correct AI-driven surveillance decisions. Data protection impact assessments, algorithmic explainability, and human-in-the-loop decision-making, all regarded as best practices in democratic data stewardship, are nowhere mandated. Without these mechanisms, the right to privacy rings hollow: people are not only unaware of being monitored but also unable to challenge it.
Taken together, these factors show that the use of AI surveillance technologies in India falls short of constitutional standards. In the absence of a sound legal framework that imposes specific limitations, mandates transparency, and assures effective remedies, these technologies risk turning India into a surveillance state, eroding the principles of democracy, liberty, and constitutionalism.


Case Laws
1. Justice K.S. Puttaswamy v. Union of India (2017) 10 SCC 1
The Supreme Court held the right to privacy to be a fundamental right under Article 21. The Court established a three-fold test of legality, necessity, and proportionality for any infringement of privacy, and this test serves as the foundation for assessing the constitutionality of AI surveillance systems.
2. Justice K.S. Puttaswamy (Aadhaar) v. Union of India (2019) 1 SCC 1
The Court affirmed the constitutional validity of Aadhaar but struck down or read down provisions that threatened privacy, notably Section 57. It underlined the importance of purpose limitation and data minimization, principles directly applicable to AI surveillance.
3. Selvi v. State of Karnataka (2010) 7 SCC 263
The Court held that the involuntary use of techniques such as narco-analysis violates personal liberty and mental privacy. The case stands for the principle that even psychological intrusions by the State can breach privacy.
4. Maneka Gandhi v. Union of India (1978) 1 SCC 248
The Court broadened the scope of Article 21, requiring that any procedure depriving a person of personal liberty be just, fair, and reasonable. AI surveillance conducted without legal backing or due process would fail this test.

Conclusion
The emergence of AI-based surveillance represents a revolutionary turn in the confluence of technology, state policy, and fundamental rights. Though Artificial Intelligence promises undeniable advantages in augmenting state capacity to uphold law and order, its unregulated use, especially in the form of facial recognition platforms and biometric-linked repositories such as Aadhaar, poses serious risks to personal privacy, liberty, and democratic rights.
As the analysis here establishes, India at present has no overarching legal regime that adequately governs the deployment of AI in surveillance. The Supreme Court’s landmark judgments in Justice K.S. Puttaswamy (2017) and Puttaswamy (Aadhaar) (2018) have established a strong constitutional basis for privacy, including the tests of legality, necessity, and proportionality. Yet on the ground, AI surveillance tends to escape this judicially prescribed scrutiny because of ambiguous policy guidance, broad executive discretion, and inadequate oversight.
The deployment of opaque, unaccountable, and frequently biased AI systems, lacking due process, informed consent, or avenues for redress, undermines not only the right to privacy but also public faith in democratic institutions. The absence of accountability mechanisms, including algorithmic auditing, data protection impact assessments, and enforceable citizens’ rights, leaves in place a surveillance architecture incompatible with the rule of law.
Looking ahead, India must embrace a rights-based and privacy-oriented legal framework for AI surveillance. This involves:
• Enacting specific legislation to govern AI deployment in policing and public administration.
• Instituting judicial or independent oversight of surveillance initiatives.
• Requiring algorithmic openness, auditability, and explainability (a concrete sketch of what auditability could look like follows this list).
• Safeguarding vulnerable groups against algorithmic bias.
• Amending the Digital Personal Data Protection Act, 2023, to incorporate strong safeguards against the misuse of State surveillance powers.
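What “auditability” could mean in practice can be sketched briefly. The following Python fragment is a hypothetical illustration, with assumed field names and review rules that are not drawn from any Indian statute or deployed system: each automated match is written to a hash-chained log, and no action becomes permissible until a named human reviewer signs off.

    import hashlib
    import json
    from datetime import datetime, timezone
    from typing import Optional

    # Hypothetical sketch of an auditable match log. Every automated match
    # is recorded with its inputs and model version; chaining each record to
    # the previous record's hash makes silent after-the-fact edits detectable.

    def log_match(prev_hash: str, camera_id: str, model_version: str,
                  match_score: float, reviewer: Optional[str]) -> dict:
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "camera_id": camera_id,
            "model_version": model_version,
            "match_score": match_score,
            "human_reviewer": reviewer,          # None = not yet reviewed
            "actionable": reviewer is not None,  # human-in-the-loop gate
            "prev_hash": prev_hash,
        }
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        return record

    entry = log_match("0" * 64, "cam-17", "frt-v2.3", 0.82, reviewer=None)
    assert entry["actionable"] is False  # no detention without human sign-off

The specific fields matter less than the principle: the law, not the vendor, should define what must be recorded, and no automated match should be actionable without human sign-off.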
In short, the promise of AI cannot come at the expense of constitutional freedoms. The way forward is a careful recalibration of national security priorities in harmony with individual rights. As India absorbs AI technologies, it should do so within a framework that protects dignity, freedom, and accountability, the cornerstones of any constitutional democracy.


FAQs
1. What is AI surveillance?
AI surveillance refers to the use of artificial intelligence technologies such as facial recognition, biometric identification, and predictive analytics for monitoring individuals or populations.
2. How does AI surveillance affect the right to privacy?
It may lead to mass surveillance, profiling, and data misuse, thereby infringing on individuals’ right to informational and decisional privacy protected under Article 21.
3. Is Aadhaar an example of AI-based surveillance?
While not an AI tool per se, Aadhaar forms the backbone of several AI-driven systems and raises surveillance concerns due to its biometric database and integration with various services.
4. What legal safeguards exist in India against AI surveillance?
The right to privacy judgment (Puttaswamy) and the DPDP Act, 2023 provide partial safeguards, but India lacks a comprehensive law regulating AI surveillance specifically.
5. What reforms are needed?
India needs legislation that clearly defines permissible uses of AI in surveillance, mandates judicial oversight, ensures transparency in algorithmic decision-making, and protects citizen rights through grievance redressal mechanisms.
