
Who is Liable When AI Makes a Mistake? – Tort Law Perspectives

Author: Abhinav Mishra, a student at the Faculty of Law, Banaras Hindu University

Abstract

As artificial intelligence systems increasingly operate autonomously in sectors such as healthcare, transportation, and finance, the question of liability becomes critical. This article examines how existing tort law principles—such as negligence, product liability, and vicarious liability—apply to harm caused by AI. Through legal reasoning, statutory frameworks, and judicial precedents, the article explores doctrinal gaps and proposes approaches to attribute responsibility.

To the Point

Artificial Intelligence (AI) systems are now capable of making decisions without direct human intervention. When these systems malfunction or make erroneous decisions leading to harm, traditional tort law struggles to identify a liable party. The law generally requires a duty of care, breach, causation, and damage for liability to arise. In the AI context, the ‘actor’ is neither a human being nor a legal person capable of bearing responsibility, so assigning liability becomes complex. This necessitates a doctrinal reevaluation of existing liability norms to encompass non-human agents.

Use of Legal Jargon

The application of traditional tort doctrines—negligence, vicarious liability, and strict liability—to AI errors demands revisiting core legal concepts. Negligence requires the establishment of a duty of care, a breach of that duty, causation, and resultant damage. In AI systems, identifying who owed a duty—be it the developer, deployer, or user—is not always straightforward. For example, if a surgical robot malfunctions during an operation, can the hospital be held liable, or should the software developer bear responsibility? 

The “reasonable person” standard, central to negligence, becomes difficult to apply when decisions are made by non-human agents. Furthermore, foreseeability—a prerequisite to liability—becomes debatable when AI outcomes are opaque owing to the system’s black-box nature. If a decision pathway cannot be explained or understood even by its creators, establishing foreseeability becomes legally tenuous. Vicarious liability—where an employer is held liable for the acts of an employee—does not comfortably accommodate AI, which lacks legal personhood. However, some suggest treating AI as an “instrument” of its deployer, akin to a tool or machine, thus anchoring liability in the entity that controls or benefits from it.

Strict liability, especially in high-risk environments, may be a more appropriate solution. Borrowed from Rylands v. Fletcher, the principle posits that a person who brings and keeps a dangerous thing on their land is liable, irrespective of fault, if it escapes and causes damage. This could apply analogously to autonomous vehicles or industrial robots operating in sensitive environments.

The Proof

  1. Uber Self-Driving Car Case (2018)
    Elaine Herzberg was struck and killed by an autonomous Uber test vehicle in Tempe, Arizona. Though a backup safety driver was present, the AI was in full control. Investigations revealed that the system failed to classify the pedestrian correctly. Uber itself avoided criminal charges (the backup driver was later charged with negligent homicide), but the case highlighted gaps in assigning civil liability when AI is the operating entity.
  2. IBM Watson for Oncology
    Once touted as a breakthrough in cancer treatment recommendations, IBM’s Watson reportedly gave unsafe suggestions due to training on hypothetical, non-real-world data. While no deaths occurred, the incident underscores the potential consequences of AI errors in critical sectors like healthcare.
  3. Amazon’s Discriminatory Hiring Algorithm
    Amazon had to discontinue its AI hiring tool after it was found to be biased against women. The tool penalized resumes with the word “women” and prioritized male-dominated experiences, reflecting training data biases. This is a clear example of algorithmic discrimination with real-world legal consequences.
  4. Knight Capital Trading Glitch (2012)
    Although not a strict AI failure, Knight Capital’s automated system caused a $440 million loss in 45 minutes due to erroneous code deployment. This case demonstrates how autonomous decisions, when unchecked, can result in substantial financial liability.

Each of these cases underscores that legal frameworks struggle to pinpoint responsibility, especially when the malfunction is not due to negligence in the traditional sense but rather due to algorithmic unpredictability or design bias.

Case Laws

Domestic (India)

  1. M.C. Mehta v. Union of India (1987 AIR 1086)
    Introduced the principle of absolute liability in India, under which enterprises engaged in hazardous activities are liable for the harm they cause even without negligence and without the exceptions available under strict liability. This precedent could be extended to high-risk AI systems.
  2. Vidyut Barve v. Union of India (2020)
    Though not directly AI-related, the court discussed the importance of environmental impact assessment for technological risks, paving the way for future judicial scrutiny of AI risk.

International

  1. Donoghue v. Stevenson [1932] AC 562 (UK)
    Established the concept of duty of care, which forms the bedrock of negligence. In the context of AI, it raises questions—who is the “neighbour” when harm is caused by a machine?
  2. Rylands v. Fletcher (1868) LR 3 HL 330 (UK)
    Classic case on strict liability. Analogous when autonomous systems like drones or self-driving cars are involved in public incidents.
  3. European Parliament’s AI Civil Liability Report (2020)
    Proposes a strict liability regime for high-risk AI and a fault-based system for lower-risk applications. The EU’s approach is currently the most evolved regulatory model.
  4. Elaine Herzberg v. Uber (2018, U.S.)
    The civil claims were settled out of court, but the incident set off a global debate over assigning criminal and civil responsibility for AI-operated vehicles.

Conclusion

AI is neither inherently malicious nor benevolent—it is a tool shaped by human choices, data, and intentions. But its increasing autonomy and opacity challenge foundational legal doctrines, especially in tort law. The absence of intent, the unpredictability of AI decisions, and the multiplicity of actors involved in its lifecycle complicate traditional liability attribution. India, like many jurisdictions, currently lacks specific legislation governing AI liability. 

Until such laws are enacted, courts may have to interpret and adapt existing tort principles, potentially expanding doctrines such as strict liability or creating new judicial interpretations around “culpable programming.” Moving forward, legislative reform is critical. A hybrid liability model that combines strict liability for high-risk AI deployments, mandated explainability, and developer due diligence obligations may provide a balanced approach. 

Furthermore, establishing an AI Compensation Fund or mandatory liability insurance could support victims of AI-inflicted harm without stifling innovation. Ultimately, the goal must be to create an environment of trustworthy AI, where safety, fairness, and accountability are legally and ethically embedded in design and deployment.

FAQs

  1. Can AI be held legally responsible in India?

No. Under current law, AI lacks legal personhood and cannot be sued or held criminally liable. Liability must attach to a human or a legal entity like a company.

  2. Who is generally liable for AI-caused harm?

Liability may fall on developers, manufacturers, users, or deployers depending on the circumstances. Courts analyze control, foreseeability, and negligence to assign responsibility.

  3. Is there a law in India specifically addressing AI-related harm?

No, but general tort law, product liability principles, and IT Act provisions may be invoked. Future AI-specific legislation is being considered under India’s National Strategy for AI (NITI Aayog).

  4. What legal framework is developing globally?

The European Union is leading with its proposed AI Act and civil liability regime, emphasizing risk-based regulation. The US and UK continue to follow a sectoral, case-by-case approach.

  5. What is the role of insurance in AI liability?

Several experts propose mandatory insurance for high-risk AI systems, similar to motor vehicle insurance, to ensure victims are compensated even when fault is hard to prove.
