
Regulating Artificial Intelligence: Legal Challenges to Accountability, Privacy, and Human Rights


Author: Amarpreet Kaur, University of Edinburgh

To the Point


Artificial Intelligence (AI) has rapidly evolved from an experimental and largely academic concept into a central component of contemporary governance, commerce, and public administration. AI-driven systems are now routinely deployed across a wide range of sectors, including law enforcement, financial services, healthcare delivery, recruitment and employment screening, welfare administration, border control, and national security. These technologies increasingly shape decisions that directly affect individuals’ rights, liberties, and access to social and economic opportunities.
Unlike traditional decision-making tools, AI systems operate through complex computational models that analyse vast datasets and generate outputs with minimal human intervention. While this automation promises efficiency, consistency, and cost reduction, it also raises serious legal and ethical concerns. Automated decision-making often functions through opaque algorithms that lack transparency and explainability, making it difficult for affected individuals to understand how decisions are made or to challenge adverse outcomes effectively.
When AI systems produce discriminatory, inaccurate, or unjust results, identifying responsibility becomes legally complex. Traditional legal frameworks are largely premised on human agency, intent, and direct causation. These frameworks struggle to accommodate autonomous or semi-autonomous technologies where harm may arise from a combination of flawed data, algorithmic design, and institutional oversight failures rather than deliberate human wrongdoing.
The unregulated or poorly regulated use of AI therefore risks undermining fundamental rights, including the right to privacy, equality before the law, and procedural fairness. As both public authorities and private corporations increasingly rely on algorithmic systems to inform or determine decision-making, the need for a robust and coherent legal framework governing AI has become urgent. This article examines the legal challenges posed by Artificial Intelligence, focusing on accountability, privacy, and human rights, and analyses emerging judicial and regulatory responses aimed at addressing these concerns.

Abstract
The integration of Artificial Intelligence into decision-making processes presents unprecedented challenges for existing legal systems. AI technologies increasingly shape outcomes in areas traditionally governed by law, including criminal justice, employment, financial regulation, and public administration. This expansion raises significant concerns related to accountability, transparency, discrimination, and data protection.
This article explores the legal implications of AI deployment through a human rights and public law lens. It examines the limitations of existing legal frameworks in addressing algorithmic decision-making and analyses judicial responses and regulatory developments, particularly within the European Union and United Kingdom contexts. By assessing relevant case law and emerging regulatory models, the article argues that a rights-based legal approach is essential to ensure that technological innovation does not erode fundamental legal protections or constitutional values.

Use of Legal Jargon
The regulation of Artificial Intelligence engages multiple branches of law, including constitutional law, administrative law, data protection law, and international human rights jurisprudence. Central to AI governance is the principle of accountability, which requires that legal responsibility for harm caused by automated systems be clearly identifiable, attributable, and enforceable. However, the opacity and technical complexity of algorithmic processes challenge traditional legal concepts of fault, causation, and liability.
AI systems also implicate the right to privacy, particularly where personal data is processed through large-scale surveillance technologies, biometric identification systems, profiling mechanisms, and predictive analytics. Core legal principles such as proportionality, necessity, purpose limitation, and informed consent are strained when applied to machine learning models that continuously evolve and derive insights from extensive datasets, often without explicit user awareness.
Additionally, algorithmic decision-making raises serious concerns under equality and non-discrimination law. Biased or unrepresentative training data may result in indirect discrimination against protected groups, reinforcing structural inequalities rather than neutralising them. Such outcomes challenge the principle of substantive equality and undermine legal commitments to fairness and inclusion.
From a public law perspective, the deployment of AI by state authorities engages principles of legality, procedural fairness, and due process. Automated decisions affecting rights or entitlements must comply with requirements of transparency, reasoned decision-making, and access to effective remedies. These concerns necessitate a regulatory framework capable of integrating technological governance within established constitutional and human rights standards.

The Proof
Empirical evidence increasingly demonstrates that AI systems are neither neutral nor error-free. Numerous studies have identified racial and gender bias in facial recognition technologies, discriminatory outcomes in automated recruitment and credit-scoring tools, and inaccuracies in predictive policing algorithms. These failures often stem from biased datasets, flawed algorithmic design, or insufficient human oversight.
The consequences of such failures can be severe. Individuals may be subjected to wrongful surveillance, excluded from employment opportunities, denied access to essential services, or disproportionately targeted by law enforcement. Importantly, these harms often occur without clear avenues for redress, as affected individuals may be unaware that an automated system influenced the decision in question.
One of the principal challenges in regulating AI lies in the asymmetry of knowledge and power between technology developers and legal institutions. Courts and regulators frequently lack the technical expertise necessary to scrutinise algorithmic systems effectively. Furthermore, many AI technologies are developed and deployed by private corporations operating across multiple jurisdictions, complicating issues of regulatory competence, enforcement, and accountability.
Recognising these risks, several jurisdictions have begun to adopt precautionary approaches to AI regulation. Emerging regulatory initiatives increasingly emphasise transparency obligations, mandatory human oversight, and risk-based classifications of AI systems based on their potential impact on fundamental rights. However, enforcement mechanisms remain underdeveloped, and individuals affected by algorithmic harm frequently lack effective legal remedies. This gap underscores the need for stronger judicial engagement and comprehensive regulatory frameworks.

Case Laws
State v Loomis (2016)
In State v Loomis, the Wisconsin Supreme Court examined the use of a proprietary risk assessment algorithm (COMPAS) in criminal sentencing proceedings. The Court permitted the use of the algorithm as a supplementary tool but expressly acknowledged serious concerns regarding transparency and procedural fairness. The defendant was unable to access the algorithm’s source code or understand the precise factors influencing the risk assessment, thereby limiting his ability to challenge the decision effectively.
Although the Court declined to prohibit the use of algorithmic tools altogether, it cautioned against their determinative use and emphasised that automated assessments must not replace judicial discretion. The case illustrates the inherent tension between technological efficiency and due process, highlighting the legal risks posed by opaque AI systems in contexts where liberty and fundamental rights are at stake.

R (Bridges) v Chief Constable of South Wales Police (2020)
In this landmark decision, the UK Court of Appeal considered the legality of the police’s use of live facial recognition technology in public spaces. The Court held that the deployment was unlawful: the legal framework left officers excessively broad discretion over who could be placed on watchlists and where the technology could be deployed, the force’s data protection impact assessment was deficient, and the force had failed to discharge the public sector equality duty by not taking reasonable steps to investigate whether the software carried risks of racial or gender bias.
The judgment underscored the importance of clear statutory authorisation, transparency, and proportionality when deploying AI technologies that interfere with fundamental rights. Significantly, the Court emphasised that the absence of adequate guidance on how discretion was exercised rendered the use of facial recognition incompatible with the rule of law. This case represents a crucial judicial acknowledgement of the risks posed by AI-driven surveillance in law enforcement.

S. and Marper v United Kingdom (2008)
The European Court of Human Rights held that the indefinite retention of biometric data, including DNA profiles and fingerprints, violated the right to respect for private life under Article 8 of the European Convention on Human Rights. Although the case predates contemporary AI technologies, its principles remain highly relevant.
The Court emphasised that blanket and indiscriminate data retention policies fail to satisfy the requirements of necessity and proportionality. These principles provide a foundational legal framework for assessing AI-driven surveillance and biometric technologies, particularly where data is retained and processed on a large scale without individualised justification.

Digital Rights Ireland Ltd v Minister for Communications (2014)
In Digital Rights Ireland, the Court of Justice of the European Union invalidated the EU Data Retention Directive for violating fundamental rights to privacy and data protection. The Court held that indiscriminate retention of personal data, without adequate safeguards or limitations, failed to meet proportionality standards.
The reasoning in this case is directly applicable to AI systems that rely on mass data collection and automated analysis. It reinforces the principle that technological capability cannot justify excessive intrusion into private life and that robust safeguards are essential where data processing interferes with fundamental rights.

Google LLC v CNIL (2019)
The Court of Justice of the European Union addressed the territorial scope of data protection obligations in the context of the “right to be forgotten.” The Court held that a search engine operator ordered to de-reference results must do so on the versions of its search engine corresponding to the EU Member States, but that EU law does not require de-referencing on all versions of the search engine worldwide.
The decision highlights the jurisdictional and enforcement challenges inherent in regulating digital technologies that operate across borders. In the context of AI, the case underscores the difficulty of ensuring accountability where algorithmic systems process data globally, often beyond the effective reach of a single legal system.

Conclusion


Artificial Intelligence represents a profound and evolving challenge to traditional legal frameworks. While AI offers significant benefits in terms of efficiency, innovation, and administrative capacity, its unregulated deployment threatens fundamental legal principles, including accountability, equality, transparency, and human dignity. The opacity of algorithmic systems and the diffusion of responsibility between developers, deployers, and regulators create serious gaps in legal protection.
A rights-based regulatory approach is essential to address these challenges. Legal frameworks must ensure transparency, meaningful human oversight, and accessible remedies for individuals affected by automated decision-making. Courts and regulators play a critical role in shaping AI governance by interpreting existing legal principles in light of technological realities. As AI continues to evolve, the law must adapt to ensure that technological progress remains consistent with constitutional values, democratic accountability, and international human rights standards.
