AI and Legal Liability: Challenges of Accountability in Autonomous Decision-Making

Author: Manyata Sisodia, Student, Guru Gobind Singh Indraprastha University

Abstract

Artificial Intelligence (AI) has rapidly evolved into a transformative technology reshaping industries such as healthcare, finance, transportation, and law enforcement. Its increasing autonomy, however, raises profound questions about legal liability and accountability. When AI systems make erroneous, harmful, or discriminatory decisions, identifying and holding the responsible party legally accountable becomes a complex task. Traditional liability frameworks, which rely heavily on human agency, intentionality, and foreseeability, often prove inadequate in the AI context. This paper explores the challenges of attributing liability in AI-driven environments, evaluates existing legal doctrines and their applicability, and proposes recommendations for legal reform. The study draws on comparative jurisprudence from jurisdictions such as the European Union, United States, and India, and examines emerging regulatory frameworks to offer a comprehensive perspective on AI accountability. It emphasizes the necessity of striking a balance between encouraging innovation and protecting fundamental rights and safety.

1. Introduction

Artificial Intelligence, once largely a theoretical concept confined to the realm of science fiction, has now permeated almost every facet of daily life. AI systems play critical roles in decision-making processes ranging from loan approvals and medical diagnostics to autonomous vehicle navigation and predictive policing. As AI assumes greater autonomy, a fundamental legal challenge emerges: Who bears responsibility when an AI system causes harm or violates rights?

The question of liability is not merely academic but has real-world consequences for victims seeking redress and for companies aiming to innovate responsibly. Traditional legal doctrines rely on concepts such as mens rea (criminal intent) and actus reus (guilty act), both of which are premised on human agency. However, AI systems, being non-human actors, lack consciousness and intent. This disrupts conventional frameworks for attributing fault and compensation.

This paper delves into the doctrinal, ethical, and policy-oriented dimensions of AI liability, investigating both theoretical challenges and practical implications. It also analyzes ongoing legal reforms and debates, offering a holistic understanding of how legal systems might evolve to address AI’s distinct characteristics.

2. Legal Personhood and Agency

A core challenge in attributing legal liability to AI systems concerns the issue of legal personhood. Traditionally, legal personhood confers the capacity to hold rights and bear duties, enabling an entity to be sued or to sue in its own name. Natural persons (humans) inherently possess this status, while certain artificial persons, like corporations, enjoy it by legal fiat.

In the seminal case of Salomon v A Salomon & Co Ltd, the House of Lords affirmed that a corporation is a separate legal entity distinct from its shareholders.^1 This recognition facilitates the imposition of liabilities and rights on corporations as if they were individuals, enabling smoother legal and commercial transactions.

Drawing a parallel, some legal scholars argue for the recognition of AI systems as a new category of legal entities, often referred to as “electronic persons.”^2 Such recognition could theoretically allow AI to bear limited legal duties and liabilities. However, this proposition faces formidable hurdles, not least because AI lacks consciousness, intentionality, and moral agency—qualities central to traditional personhood.

The European Parliament considered this notion in a 2017 resolution but ultimately rejected it, citing ethical, practical, and policy concerns.^3 Assigning personhood to AI risks undermining human accountability and complicates liability frameworks. For now, legal systems generally refrain from granting AI personhood and instead focus on human actors involved in AI development, deployment, and use.

3. Doctrinal Bases of AI Liability

Existing doctrines of liability provide a starting point for addressing AI-related harms. Three primary legal doctrines are relevant: product liability, vicarious liability, and negligence.

3.1 Product Liability

AI systems embedded in physical devices, such as autonomous vehicles, medical robots, or smart home devices, are often treated as products under consumer protection laws. Product liability doctrines impose strict liability on manufacturers for defects causing harm, irrespective of fault.

The foundational case of Donoghue v Stevenson established the “neighbour principle,” imposing a duty of care on manufacturers toward end-users.^4 This doctrine can be adapted to AI products: if an autonomous vehicle’s AI software contains design flaws leading to accidents, the manufacturer may be strictly liable under consumer protection statutes and tort law.^5

Challenges arise in defining what constitutes a defect in software that evolves autonomously via machine learning. Determining whether harm results from defective design, negligent updating, or unforeseeable AI behavior is complex and may necessitate doctrinal adaptation.

3.2 Vicarious Liability

Vicarious liability holds principals responsible for the acts of their agents. When AI systems function as agents within organizations, deployers may bear liability for harms caused within the scope of AI’s operations.

For example, a hospital deploying an AI diagnostic system could be liable if the AI misdiagnoses a patient, analogous to an employer’s liability for employee negligence. However, AI’s capacity to learn and deviate from initial programming complicates attribution. Unlike human agents, AI systems lack intent, making it difficult to classify actions as within or outside the scope of employment.^6

3.3 Negligence

Negligence involves a breach of a duty of care resulting in foreseeable harm. Developers and deployers of AI owe duties to ensure the safety, reliability, and fairness of their systems.

The test articulated in Caparo Industries plc v Dickman—foreseeability, proximity, and fairness—guides the establishment of duty of care.^7 Courts may find negligence where AI developers fail to adequately train, test, or monitor their systems, or where deployers neglect oversight.

However, establishing causation is difficult in cases where AI decisions are complex, probabilistic, or involve multiple interacting systems.

4. The Black Box Problem: Challenges in Explainability

Modern AI systems, particularly those employing deep learning algorithms, are often described as “black boxes” because their internal decision-making processes are opaque—even to their creators. This lack of explainability creates a significant barrier to legal accountability.

For liability claims to succeed, plaintiffs must typically demonstrate causation and fault. When AI systems generate outputs without interpretable reasoning, courts face difficulties in scrutinizing decisions or identifying responsible parties.

This issue also implicates due process rights, particularly in administrative or criminal contexts where AI-driven decisions can significantly affect individuals’ rights. Article 22 of the General Data Protection Regulation (GDPR) grants individuals the right not to be subject to decisions based solely on automated processing where those decisions produce legal or similarly significant effects.^8 The GDPR also requires controllers to provide “meaningful information about the logic involved” in such decisions.

Legal scholarship debates the feasibility of a “right to explanation” in AI decisions, with some arguing it is illusory given current AI architectures.^9 Nevertheless, regulatory frameworks increasingly emphasize transparency and human oversight as safeguards.
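To make the transparency obligations discussed above more concrete, the sketch below shows one way a deployer might generate post-hoc “meaningful information” about an opaque model’s logic using permutation feature importance. It is illustrative only: the loan-approval scenario, feature names, synthetic data, and model choice are all assumptions, not drawn from any system or case discussed in this paper.

```python
# Illustrative sketch only: a post-hoc feature-attribution report for an
# opaque classifier, using a hypothetical loan-approval model trained on
# synthetic data. Feature names and figures are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
feature_names = ["income", "credit_history_length", "existing_debt", "age"]
X = rng.normal(size=(500, 4))                      # synthetic applicant data
y = (X[:, 0] - X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Permutation importance estimates how strongly each input drives the output,
# one candidate form of "meaningful information about the logic involved".
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```

Scholars sceptical of a “right to explanation” would note that rankings of this kind describe correlations in the model’s observed behaviour rather than its actual internal reasoning, which is precisely why their legal sufficiency remains contested.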

5. Comparative Legal Approaches

5.1 European Union

The European Union has taken a proactive approach toward AI regulation. The Artificial Intelligence Act, proposed by the European Commission in 2021 and finalized in 2024, categorizes AI systems by risk level and imposes strict requirements on high-risk applications, such as biometric identification and critical infrastructure.^10

The Act does not grant legal personhood to AI but enforces accountability through obligations on providers and users, including transparency, human oversight, and post-market monitoring. Impact assessments and conformity evaluations are mandatory before deployment.

The EU also maintains robust data protection standards under the GDPR, which intersect with AI liability by protecting individuals from harmful automated decisions.

5.2 United States

The U.S. adopts a more fragmented, sector-specific regulatory approach. The Food and Drug Administration (FDA) oversees AI used in medical devices under its Software as a Medical Device (SaMD) framework, requiring rigorous testing and post-market surveillance.^11

Litigation such as Doe v. Uber Technologies illustrates corporate liability for harms arising from AI deployment, emphasizing the role of human oversight and corporate responsibility over AI autonomy.^12

The U.S. also permits contractual disclaimers and arbitration agreements, which can limit litigation exposure for AI deployers, raising concerns about victim access to remedies.

5.3 India

India currently lacks a dedicated AI regulatory framework. Liability issues are addressed through general tort principles, the Information Technology Act, 2000, and consumer protection laws.

NITI Aayog’s 2020 strategy document advocates the establishment of regulatory sandboxes for AI, enabling controlled experimentation with reduced compliance burdens.^13 The landmark Supreme Court decision in K.S. Puttaswamy v Union of India enshrined the right to privacy, shaping future AI data protection and accountability regimes.^14

6. Models of Legal Responsibility

In response to the gaps in traditional liability doctrines, scholars and policymakers have proposed innovative models tailored to AI’s unique nature.

6.1 Strict Liability for High-Risk AI

Under strict liability, deployers are held liable for harm regardless of fault or negligence. This model encourages rigorous safety standards and aligns with the precautionary principle, especially for AI systems with significant potential for harm.

6.2 Shared Liability Model

This approach distributes responsibility among developers, users, data providers, and regulators. It acknowledges the complexity of AI ecosystems where multiple actors contribute to outcomes. Shared liability models resemble frameworks in environmental and product safety law.

6.3 Insurance-Based Model

Mandatory insurance schemes could provide victims with swift compensation while spreading risk. Analogous to motor vehicle insurance, such schemes are being tested in EU autonomous vehicle regulations, ensuring financial protection despite attribution difficulties.^15
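By way of a rough, purely illustrative calculation (the incident probability, average damages, and loading factor below are assumed figures, not taken from any actual scheme), the per-system premium in such a compulsory pool might be estimated as an actuarially fair price plus an administrative loading:

```latex
% Illustrative figures only: p, L and the 20% loading are assumptions.
\text{annual premium per system} \;=\; p \cdot L \cdot (1 + \lambda)
  \;=\; 0.001 \times 500{,}000\,\text{EUR} \times 1.2
  \;=\; 600\,\text{EUR}
```

Here p is the assumed annual probability that a deployed system causes a compensable harm, L the expected loss per incident, and λ a loading for administration and uncertainty. Spreading this cost across all deployers gives victims a solvent source of compensation even when fault cannot be attributed to any single actor.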

6.4 AI Accountability Registries

Blockchain and distributed ledger technologies offer mechanisms for maintaining immutable, auditable logs of AI decision processes. Such registries can facilitate post-incident investigations and compliance verification, enhancing transparency and trust.^16
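A minimal sketch of the underlying idea appears below, assuming a hypothetical DecisionRegistry in which each logged decision is hashed together with the hash of the previous entry, so that later tampering breaks the chain. A production registry built on distributed-ledger technology would add replication, consensus, and access control; the class, field names, and example record here are invented for illustration.

```python
# Minimal sketch of a hash-chained (blockchain-style) decision log.
# Hypothetical design: not a real distributed ledger, only the audit-trail idea.
import hashlib
import json
import time

class DecisionRegistry:
    def __init__(self):
        self.entries = []  # each entry stores its own hash and the previous hash

    def record(self, system_id: str, inputs: dict, output: str) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {
            "system_id": system_id,
            "inputs": inputs,
            "output": output,
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        # Hash the entry contents (including the link to the previous entry).
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute every hash; any altered or reordered entry breaks the chain."""
        prev_hash = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if entry["prev_hash"] != prev_hash:
                return False
            if hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest() != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True

registry = DecisionRegistry()
registry.record("loan-model-v2", {"income": 42000, "score": 0.71}, "approved")
print(registry.verify())  # True while the log is untampered
```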

7. Ethical and Constitutional Concerns

AI liability is inextricably linked with broader ethical and constitutional issues. Biases embedded in training data have produced discriminatory AI systems, including facial recognition software that misidentifies people with darker skin at markedly higher rates and hiring algorithms that systematically disadvantage women candidates.

The deployment of AI in judicial sentencing (e.g., the COMPAS risk assessment tool in the U.S.) raises grave concerns about fairness, due process, and equal protection under the law.

In India, Article 14 of the Constitution guarantees equality before the law and prohibits arbitrary state action. AI systems employed by the government must satisfy the reasonableness and non-arbitrariness tests articulated in E.P. Royappa v State of Tamil Nadu.^17

Safeguarding fundamental rights necessitates not only legal accountability but also ethical AI design, inclusive data sets, and ongoing oversight.

8. Conclusion

Artificial Intelligence offers tremendous promise for innovation and societal benefit but simultaneously poses unprecedented legal challenges. Existing liability doctrines—product liability, negligence, and vicarious liability—provide partial solutions but are insufficient to address AI’s autonomy, opacity, and complexity fully.

Granting legal personhood to AI remains a contentious and currently impractical approach. Instead, legal frameworks should focus on holding human actors accountable, enforcing transparency, and incentivizing robust safety measures.

Innovative models such as strict liability for high-risk AI, shared responsibility, mandatory insurance, and AI accountability registries can help fill the regulatory void. Moreover, harmonizing AI laws across jurisdictions and embedding ethical principles into AI development will be vital.

Ultimately, the law must strike a delicate balance: protecting rights and safety without stifling the innovation that AI promises. Continued interdisciplinary dialogue among technologists, lawyers, policymakers, and civil society is crucial to shaping a just and effective legal framework for AI accountability.

Footnotes

1. Salomon v A Salomon & Co Ltd, [1897] AC 22 (HL).
2. Mireille Hildebrandt, “Legal Personality for Artificial Agents? The European Parliament’s Resolution on Civil Law Rules on Robotics” (2017) 4(2) European Journal of Law and Technology.
3. European Parliament, Report with recommendations to the Commission on Civil Law Rules on Robotics, 2017/2103(INL).
4. Donoghue v Stevenson, [1932] AC 562 (HL).
5. Restatement (Third) of Torts: Products Liability § 1 (1998).
6. Carol Choksy, “Artificial Intelligence and Liability: The Agent Problem” (2020) 66 DePaul Law Review 41.
7. Caparo Industries plc v Dickman, [1990] 2 AC 605 (HL).
8. GDPR, Regulation (EU) 2016/679, art. 22.
9. Sandra Wachter, Brent Mittelstadt & Luciano Floridi, “Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation” (2017) 7 International Data Privacy Law 76.
10. European Commission, Proposal for a Regulation laying down harmonized rules on Artificial Intelligence (Artificial Intelligence Act) COM/2021/206 final.
11. U.S. Food & Drug Administration, Artificial Intelligence and Machine Learning in Software as a Medical Device (SaMD), Draft Guidance (2021).
12. Doe v. Uber Technologies, Inc., No. 3:20-cv-05427 (N.D. Cal. 2020).
13. NITI Aayog, National Strategy for Artificial Intelligence (2020).
14. K.S. Puttaswamy v Union of India, (2017) 10 SCC 1 (India).
15. European Parliament and Council, Regulation on type-approval of motor vehicles with respect to automated driving systems (2023).
16. Philipp Hacker et al., “Accountability in the Age of Artificial Intelligence: Law and Ethics” (2018) 11 International Data Privacy Law 193.
17. E.P. Royappa v State of Tamil Nadu, AIR 1974 SC 555 (India).
