
Supreme Court’s Perspective on Privacy in the Era of AI and Facial Recognition

Author: Mantsha Khan, Integral University, Lucknow

Abstract

The establishment of the right to privacy as a fundamental right under Article 21 of the Indian Constitution in Justice K.S. Puttaswamy v. Union of India represents a pivotal moment in the evolution of constitutional law. Nevertheless, the digital epoch has swiftly brought forth novel threats to privacy, especially with the rise of Artificial Intelligence (AI) and Facial Recognition Technology (FRT). These instruments, increasingly employed by law enforcement and governmental bodies in India, provoke significant concerns regarding mass surveillance, algorithmic prejudice, and the diminishing of individual freedoms. This article meticulously analyzes the Supreme Court’s legal interpretations regarding privacy, concentrating on its doctrinal evolution and possible implications for AI-enabled surveillance. It contextualizes Indian advancements within a comparative international perspective, extracting insights from the European Union, United Kingdom, and United States. The examination uncovers a widening chasm between the Court’s delineation of privacy rights and the legislative deficiency concerning AI regulation. Ultimately, the article advocates for a normative framework governing FRT, anchored in constitutional tenets of proportionality, transparency, and accountability.

Keywords

Privacy, Artificial Intelligence, Facial Recognition, Surveillance, Supreme Court, Proportionality, Fundamental Rights, Data Protection


I. Introduction

The constitutional right to privacy in India has undergone substantial transformation over the past seven decades. Initially dismissed in M.P. Sharma v. Satish Chandra and Kharak Singh v. State of U.P., the right was ultimately recognized as a fundamental right in Justice K.S. Puttaswamy v. Union of India (2017). The Supreme Court acknowledged privacy as integral to life and personal liberty under Article 21, underscoring its importance to human dignity and democratic principles.

However, the acknowledgment of privacy coincides with an unparalleled surge in digital technologies that present systemic risks to personal autonomy. Among these, Artificial Intelligence (AI) and Facial Recognition Technology (FRT) have surfaced as formidable tools of governance and law enforcement. Employed in airports, railway stations, and by law enforcement entities in regions such as Delhi and Telangana, FRT facilitates real-time monitoring and identification of individuals. While authorities defend its application based on efficiency and security rationales, detractors caution against the threats posed by mass surveillance, algorithmic bias, and the lack of legal protections.

The Supreme Court’s privacy jurisprudence offers a prospective constitutional safeguard against such threats; however, its practical applicability to AI-driven surveillance has yet to be explored. The primary research inquiry addressed in this article is whether the Court’s doctrinal framework—particularly its focus on proportionality and informational privacy—can sufficiently regulate the usage of AI and FRT within the Indian context.

The importance of this investigation is underscored by its urgency. Worldwide, judicial bodies and legislative entities are contending with the complexities of reconciling privacy rights with advancements in technology. The European Union’s AI Act (2024) enforces rigorous limitations on the utilization of real-time facial recognition technologies, while the UK Court of Appeal, in Bridges v. South Wales Police (2020), held the unregulated deployment of facial recognition technology unlawful. Conversely, India currently lacks a specific legislative framework governing AI surveillance and instead depends on the constitutional interpretations provided by the Supreme Court.

This scholarly article is organized into six sections. Section II delineates the historical progression of the right to privacy in India, transitioning from early judicial skepticism to its eventual recognition as an essential constitutional right. Section III scrutinizes the challenges introduced by artificial intelligence and facial recognition technology, with a specific focus on their application within India. Section IV assesses the jurisprudential developments of the Supreme Court in response to these challenges, while Section V provides comparative perspectives from international jurisdictions. Section VI conducts a thorough examination of the disjunction between judicial doctrine and existing regulatory frameworks. The article concludes with proposals for establishing a comprehensive legal framework governing AI surveillance, rooted in constitutional principles.

II. Historical Development of the Right to Privacy in India

A. Initial Judicial Dismissal

Initially, the Indian judiciary dismissed the notion of privacy as a constitutional entitlement. In M.P. Sharma v. Satish Chandra (1954), an eight-judge bench determined that the Constitution did not explicitly safeguard a right to privacy, particularly concerning search and seizure authorities under the Code of Criminal Procedure. In a similar vein, in Kharak Singh v. State of U.P. (1962), the Court validated the legitimacy of police surveillance, although Justice Subba Rao’s dissenting remarks astutely underscored the significance of privacy as an integral component of personal liberty as delineated in Article 21. These formative rulings exhibited a restrictive textual interpretation, favoring state interests over individual freedoms.

B. Gradual Acknowledgment of Privacy Rights

Notwithstanding these initial rejections, the Supreme Court gradually began to recognize elements of privacy within the context of personal liberty. In Gobind v. State of M.P. (1975), the Court tentatively acknowledged privacy as a right implicit in Article 21, albeit subject to reasonable constraints. The Court asserted that the parameters of privacy would develop incrementally, depending on individual cases. Subsequent judgments in Malak Singh v. State of Punjab & Haryana (1981) and R. Rajagopal v. State of Tamil Nadu (1994) further broadened the acknowledgement of privacy, especially concerning protection against arbitrary surveillance and safeguarding personal reputation.

C. Towards an Inclusive Right

By the commencement of the 21st century, the notion of privacy became a persistent topic in legal disputes concerning telephone interception, data safeguarding, and personal integrity. In the case of People’s Union for Civil Liberties (PUCL) v. Union of India (1997), the judiciary determined that telephone interception contravened Article 21 unless sanctioned by legislative provision and accompanied by procedural safeguards. Likewise, in Mr. X v. Hospital Z (1998), the judiciary recognized the significance of medical privacy in relation to the disclosure of HIV status. These rulings indicated a transition towards acknowledging privacy as fundamental to human dignity and autonomy, albeit still fragmented across various contexts.

D. The Pivotal Acknowledgment in Justice K.S. Puttaswamy v. Union of India (2017)

The constitutional recognition of privacy was definitively established in Justice K.S. Puttaswamy v. Union of India (2017), wherein a unanimous bench of nine judges proclaimed privacy to be a fundamental right pursuant to Articles 14, 19, and 21.

The Court anchored privacy in three dimensions:

Physical privacy – safeguarding against corporeal intrusions.

Mental privacy – independence in decision-making processes.

Informational privacy – authority over the distribution of personal information.

Most notably, the Court embraced the proportionality principle, mandating that any violation of privacy must meet the criteria of legality, necessity, and proportionality. This construct imposed a constitutional limitation on governmental surveillance, although the Court also recognized that privacy is not absolute and may concede to substantial state interests.

E. Aadhaar and the Constraints of Surveillance (2018)

In the Aadhaar decision (Puttaswamy II), a five-judge bench upheld the use of Aadhaar for welfare initiatives but invalidated its compulsory application by private entities and for functions such as mobile SIM authentication. The Court reiterated the proportionality standard, underscoring the importance of data minimization and purpose specificity. Detractors, however, contend that the ruling weakened the privacy right by permitting a centralized biometric repository susceptible to surveillance.

F. Contemporary Developments

In the aftermath of the Puttaswamy judgment, the Supreme Court has employed principles of privacy across various domains, including the right to be forgotten, autonomy in medical decision-making, and the rights of LGBTQ+ individuals. Nonetheless, the Court has yet to directly address the complexities introduced by Artificial Intelligence (AI) and Facial Recognition Technology (FRT). Ongoing petitions—such as those disputing the application of facial recognition during protests—suggest that the Court may soon be prompted to broaden its privacy jurisprudence to encompass the digital era.

III. Artificial Intelligence & Facial Recognition: Emerging Challenges

A. Deployment of FRT in India

The implementation of Facial Recognition Technology (FRT) in India has witnessed rapid growth, frequently occurring without legislative endorsement or parliamentary scrutiny. The Delhi Police have utilized FRT for surveillance during protests, identification of suspects, and verification of voter identities. Airports nationwide have adopted the “DigiYatra” system, which employs biometric information for the authentication of passengers. Various states, including Telangana and Tamil Nadu, maintain extensive FRT databases that are integrated with CCTV surveillance systems.

While government officials advocate for FRT on the basis of efficiency, crime deterrence, and national security, the lack of precise regulations raises considerable legal issues. In contrast to the European Union, which has instituted explicit bans on certain high-risk AI technologies, India predominantly relies on executive powers.

B. Privacy and Mass Surveillance Concerns

FRT facilitates the continuous and indiscriminate observation of individuals in public domains. Unlike conventional law enforcement techniques, FRT operates without the necessity for individualized suspicion. This engenders the risk of mass surveillance, wherein the movements, associations, and activities of citizens are tracked in real time. Such practices can stifle freedom of expression, assembly, and political opposition, thus infringing upon the foundational aspects of democratic engagement.

C. Algorithmic Bias and Discrimination

A further issue pertains to the risk of algorithmic bias. Global studies have indicated that FRT systems misidentify women, children, and individuals with darker skin tones at significantly higher rates than other groups. Within the Indian context, such biases could exacerbate existing structural inequalities by disproportionately affecting marginalized populations. Inaccuracies in biometric data may result in wrongful detentions, denial of essential services, and a deterioration of public trust in governmental institutions.

D. Legal Vacuum in India

Currently, India does not possess a specialized legal framework governing AI or FRT. The recently passed Digital Personal Data Protection Act, 2023, addresses data processing concerns but fails to establish specific protections for AI-driven surveillance methods. The Information Technology Act, 2000 is outdated and inadequately equipped to regulate algorithmic technologies. This legal void positions the judiciary as the primary defender of privacy rights against encroachments by invasive technologies.

IV. The Jurisprudential Framework of the Supreme Court and Its Implications for AI Surveillance

A. The Proportionality Principle as Established in Puttaswamy

In the landmark case of Puttaswamy (2017), the Supreme Court instituted the proportionality framework, mandating that any encroachment on the right to privacy must satisfy three essential criteria:

Legality – the presence of statutory backing for the action undertaken.

Legitimate Objective – the legislation must serve a bona fide state interest.

Proportionality – the action must be essential and the least intrusive alternative available.

When applied to the realm of AI surveillance, this paradigm necessitates legislative sanction for the deployment of Facial Recognition Technology (FRT), stringent restrictions to ensure legitimate applications (such as counter-terrorism), and a strict adherence to data minimization principles. Current methodologies, such as the Delhi Police’s employment of FRT without legislative consent, contravene the legality prerequisite.

B. The Concept of Informational Privacy and Data Safeguarding

The Puttaswamy ruling acknowledged that informational privacy is fundamental to the preservation of human dignity. The Court underscored the necessity for individuals to maintain authority over their personal information. In contrast, FRT indiscriminately gathers biometric data devoid of consent or restrictions on purpose, thereby jeopardizing this foundational principle. The lack of anonymization or data minimization protocols intensifies the potential for misuse and discriminatory profiling.

C. Article 21 and the Constraints of Surveillance

Article 21 safeguards not only physical liberty but also cognitive autonomy. In precedents such as PUCL v. Union of India (1997), the Court mandated procedural protections for telephonic surveillance, acknowledging the perils of unrestrained monitoring. By analogy, these same protective measures should be applied—if not enforced with greater rigor—to FRT, which facilitates continuous observation on an unprecedented scale.

D. Outstanding Petitions and Judicial Inaction

The Supreme Court has yet to render a conclusive decision regarding AI or FRT. Public interest litigations contesting the employment of FRT at demonstrations and airports are currently under consideration. The Court’s reticence generates ambiguity, allowing the state to broaden surveillance capabilities without subjecting them to constitutional examination. The judiciary is tasked with the formidable challenge of evolving privacy jurisprudence to govern technologies that were inconceivable at the time the Constitution was conceived.

V. Comparative Analyses

A. European Union: The AI Act (2024)

The European Union has adopted a proactive methodology by instituting the AI Act (2024), marking the inaugural comprehensive regulation of artificial intelligence globally. This Act classifies AI systems based on their associated risks, enforcing the most stringent regulations on “high-risk” technologies. Real-time remote biometric identification systems, including FRT in public spaces, are subject to rigorous limitations. Exceptions are permitted exclusively for narrowly defined objectives, such as locating missing children or thwarting terrorist acts, and even in such cases, stringent judicial authorization is mandated. The EU framework exemplifies a rights-oriented approach that seeks to harmonize technological innovation with essential civil liberties.

B. United Kingdom: Bridges v. South Wales Police

In the case of Bridges v. South Wales Police (2020), the UK Court of Appeal determined that the implementation of Facial Recognition Technology (FRT) by law enforcement was unlawful due to the lack of definitive legal protections. The Court underscored the necessity for explicit legislative endorsement, compliance with data protection regulations, and the establishment of mechanisms to prevent discriminatory outcomes. This ruling exemplifies the function of judicial bodies in curtailing executive overreach and underscores the significance of transparency in artificial intelligence surveillance.

C. United States: State-Level Restrictions

In the United States, the federal government has yet to establish a cohesive FRT statute; however, numerous states and localities—including San Francisco, Portland, and Boston—have instituted bans or limitations on the police’s utilization of FRT. These limitations reflect escalating apprehensions regarding racial bias, wrongful detentions, and the chilling effects on civil liberties. The fragmented American model stands in contrast to the EU’s centralized regulatory framework but highlights a mutual acknowledgment of the privacy risks posed by FRT.

D. Lessons for India

India can glean three pivotal insights from international practices:

Legislative clarity – paralleling the EU and UK, unequivocal statutory endorsement is imperative.

Judicial oversight – the judiciary must evaluate AI surveillance rigorously to avert misuse.

Rights-centric safeguards – mechanisms for transparency, accountability, and redress must be established institutionally.

VI. Critical Analysis

A. Gap Between Doctrine and Practice

The Supreme Court has delineated a comprehensive framework for privacy, particularly through the proportionality doctrine. Nonetheless, this doctrinal precision has not been effectively actualized. The state persists in employing FRT without legislative authorization, thereby undermining the principle of legality. This disjunction reflects a judicial hesitance to engage with emerging technologies until confronted directly, resulting in a void that the executive readily capitalizes on.

B. Risks of Judicial Overreach

While judicial intervention is crucial, there exists a concomitant risk of courts becoming the primary arbiters of AI regulation in the absence of legislative measures. Constitutional adjudication is inherently ill-equipped to navigate the technical intricacies of AI, including algorithmic bias, data protection, and system evaluations. A solely judicial remedy risks overextending authority and may inhibit innovation while failing to provide comprehensive protections.

C. Balancing State Security and Individual Liberty

The state frequently invokes national security as a rationale for surveillance. However, as articulated in the Puttaswamy case, security concerns cannot supersede constitutional rights unless they are proportionate. The challenge lies in reconciling collective security with individual freedoms. While FRT may bolster law enforcement efficacy, unchecked implementation threatens to cultivate a surveillance state wherein every citizen is regarded as a potential suspect.

D. Necessity for a Multilayered Regulatory Framework

A viable solution necessitates a multilayered regulatory architecture:

The legislature must establish extensive legislation governing AI and Facial Recognition Technology (FRT).

The judiciary must uphold principles of proportionality and procedural protections.

Civil society and the media must promote transparency and accountability.

Absent such a framework, India faces the peril of constitutional regression, wherein privacy is acknowledged in theory yet undermined in practice.

VII. Proposals

1. Enact a Facial Recognition Regulation Act: Parliament should pass a dedicated statute addressing the deployment of FRT. This legislation must delineate acceptable applications, mandate judicial warrants for real-time surveillance, and prohibit indiscriminate mass monitoring.

2. Enhance the Digital Personal Data Protection Act, 2023: The Act should be revised to explicitly govern AI systems, integrating principles of data minimization, algorithmic transparency, and the right to explanation regarding automated decision-making.

3. Establish Independent Oversight Authorities: An AI and Surveillance Oversight Commission should be founded, endowed with the authority to audit FRT systems, address grievances, and enforce compliance.

4. Mandate Transparency and Accountability: Government entities utilizing FRT must disclose impact evaluations, accuracy assessments, and bias analyses. Citizens should possess the right to contest erroneous identifications.

5. Ensure Judicial Scrutiny: The Supreme Court must stringently apply the proportionality doctrine to surveillance technologies, ensuring that executive convenience does not infringe upon fundamental rights.

VIII. Conclusion

The acknowledgment of privacy as a fundamental right in Puttaswamy marked a significant milestone in India’s constitutional development. Nevertheless, the emergence of AI and FRT introduces challenges that the Court did not foresee. These technologies pose a risk of normalizing mass surveillance, diminishing autonomy, and perpetuating bias.

The Supreme Court’s jurisprudence offers a formidable framework for reconciling privacy with state interests, yet judicial doctrine alone cannot replace the need for legislative regulation. India must promptly implement a comprehensive legal structure governing AI surveillance, learning from global best practices. Constitutional rights, once acknowledged, must be safeguarded not merely in theory but also in practice. In the era of AI, privacy will serve as the ultimate measure of India’s dedication to democracy and the rule of law.

IX. FAQs

Q1. Does India possess a law regarding facial recognition? No. India currently lacks a specific statute governing FRT; its use is regulated solely by executive orders and general data protection principles.

Q2. How is the Supreme Court’s privacy jurisprudence applicable to AI? The proportionality test established in Puttaswamy necessitates legality, necessity, and proportionality for any form of intrusion. Present applications of FRT frequently do not satisfy the “legality” requirement due to the absence of parliamentary endorsement.

Q3. Are there global prohibitions on FRT? Yes. The EU AI Act restricts real-time biometric surveillance, and numerous cities in the United States have prohibited police utilization of FRT.

Q4. Can mass AI surveillance ever be warranted? Only under exceptional, narrowly defined circumstances, such as counter-terrorism, with stringent judicial oversight. Indiscriminate surveillance cannot be justified within constitutional frameworks.

X. References

1. M.P. Sharma v. Satish Chandra, AIR 1954 SC 300.
2. Kharak Singh v. State of U.P., AIR 1963 SC 1295.
3. Gobind v. State of M.P., (1975) 2 SCC 148.
4. Malak Singh v. State of Punjab & Haryana, (1981) 1 SCC 420.
5. R. Rajagopal v. State of Tamil Nadu, (1994) 6 SCC 632.
6. People’s Union for Civil Liberties (PUCL) v. Union of India, (1997) 1 SCC 301.
7. Mr. X v. Hospital Z, (1998) 8 SCC 296.
8. Justice K.S. Puttaswamy v. Union of India (Privacy Case), (2017) 10 SCC 1.
9. Justice K.S. Puttaswamy v. Union of India (Aadhaar Case), (2019) 1 SCC 1.
10. R (Bridges) v. Chief Constable of South Wales Police, [2020] EWCA Civ 1058.
11. European Union, Artificial Intelligence Act, 2024.
12. Digital Personal Data Protection Act, 2023 (India).
13. Internet Freedom Foundation, India’s Surveillance State: A Report on FRT Use (2023).
14. Amnesty International, Ban the Scan: Facial Recognition and Human Rights (2022).