Author: Manisha. K, Christ Academy Institute of Law
To the point
Artificial Intelligence (AI) is transforming industries at an unprecedented scale, from healthcare diagnostics and autonomous vehicles to financial algorithms and predictive policing. However, this technological advancement brings with it a significant legal dilemma: Who is accountable when AI makes a mistake or causes harm? Traditional legal frameworks, designed to adjudicate human actions, are struggling to cope with the autonomous, adaptive, and opaque nature of AI systems.
Unlike conventional tools, AI operates on complex algorithms and machine learning models that evolve based on data input. This ability to “learn” without human intervention blurs the lines of liability, especially when the decision-making process becomes a “black box” inaccessible even to its developers. In cases where harm arises, such as a self-driving car accident, an erroneous medical diagnosis by AI, or algorithmic bias in employment decisions, assigning fault is not straightforward. Can we blame the developer who coded the AI, the manufacturer who deployed it, or the end-user who relied on it?
Current legal doctrines, such as tort law, product liability, and negligence, do not neatly apply to autonomous systems. Moreover, the law often assumes the presence of mens rea or human intent, which AI lacks by nature. This makes it difficult to pursue criminal liability or even determine civil negligence in cases involving AI. Furthermore, AI systems can interact with multiple parties and systems, creating a chain of causation that complicates the process of determining proximate cause and damages.
The rise of AI necessitates a fundamental re-evaluation of legal responsibility. Without a clear framework for AI accountability, victims of AI-related harm may be left without adequate remedies, while innovators may face uncertain legal risks that could stifle progress. Some legal scholars propose solutions such as AI-specific statutes, mandatory insurance, or even granting electronic personhood to certain AI systems, but these remain controversial and largely untested.
In essence, legal systems worldwide must adapt quickly to this disruptive technology. Establishing a robust, equitable, and forward-looking legal framework is essential not only for safeguarding rights and liabilities but also for ensuring responsible innovation in the AI-driven future.
Use of Legal Jargon
The legal environment surrounding artificial intelligence (AI) remains complex and still evolving, largely because of the lack of well-defined statutory rules addressing autonomous decision-making. Strict liability, frequently invoked in product liability law, is one of the fundamental doctrines applied to issues involving AI. Under this doctrine, a manufacturer may be held accountable for damage caused by a defective product regardless of culpability. Applying this to AI systems, particularly those that learn and change after being deployed, raises the difficulty of defining what actually qualifies as a “defect.” The doctrine of negligence, which demands proof of a breach of a duty of care resulting in foreseeable harm, complicates matters further.
Another critical legal term is proximate cause, which refers to a primary cause that produces a foreseeable consequence without an intervening act. Due to AI’s autonomous nature, establishing proximate cause becomes problematic, especially when the AI’s decision-making process lacks transparency, a condition known as the black box problem. When the cause of harm is indirect or dispersed across several actors, accountability is thrown into doubt. Furthermore, since AI systems are not regarded as legal agents, the concept of vicarious liability—which holds one person accountable for the acts of another, such as an employer for an employee—has limited application to artificial intelligence.
Proposals to grant electronic personhood to AI systems have surfaced in European legal discourse, suggesting AI could be treated as a separate legal entity for liability purposes. However, this approach raises constitutional, ethical, and practical concerns, such as how a non-human agent can hold rights, duties, or property. In contractual arrangements, parties often use indemnity clauses, force majeure provisions, and limitation-of-liability clauses to preemptively allocate risk arising from AI operations. Yet such clauses are rarely equipped to handle AI’s unpredictable behavior. As a result, there is growing advocacy for developing an AI-specific legal taxonomy that incorporates principles of algorithmic accountability, data governance, and explainability, ensuring that liability is fairly and effectively assigned in this new technological era.
The proof
The real-world consequences of artificial intelligence (AI) failures demonstrate the urgent need for a coherent legal accountability framework. Numerous incidents globally highlight how autonomous systems, though designed to optimize efficiency, can cause significant harm through unpredictable or biased outcomes. One of the most cited examples is the series of Tesla Autopilot accidents, in which vehicles operating under AI control were involved in fatal crashes. In many cases, it was unclear whether the fault lay with the driver’s overreliance on the system or with flaws in the algorithm itself. Tesla’s legal defense often centers on the disclaimer that Autopilot is not fully autonomous, leaving a gap in liability that courts have struggled to address.
Another telling instance involves Amazon’s AI-powered recruitment tool, which was found to discriminate against female applicants. Trained on historical data reflecting male dominance in tech roles, the AI learned to penalize resumes that included the word “women’s” or referred to all-women colleges. This case illustrates the concept of algorithmic bias, where seemingly neutral code results in discriminatory outcomes due to flawed or biased data sets.
In the realm of public safety, facial recognition technologies have led to multiple wrongful arrests. In the U.S., law enforcement agencies using AI-based facial recognition tools have misidentified suspects, disproportionately affecting people of colour. Legal challenges have been mounted under privacy and civil rights laws, but identifying a responsible party remains elusive. The AI developers blame misuse by authorities, while users claim they relied in good faith on a commercially available system.
These examples demonstrate that current legal doctrines are inadequate for handling AI’s complexity. The existing frameworks either fail to identify a liable party or leave injured parties without recourse. This underscores the growing need for AI-specific legal standards that can attribute accountability where traditional principles fall short.
Abstract
Artificial Intelligence (AI) has emerged as one of the most transformative technologies of the 21st century, redefining the way industries operate, decisions are made, and lives are affected. From healthcare diagnostics and autonomous vehicles to financial trading and predictive policing, AI systems now make decisions that were once the exclusive domain of humans. However, this shift has brought forth a pressing legal question: Who is accountable when AI causes harm? Unlike traditional machines, AI systems can learn, adapt, and act independently of direct human control, making the assignment of liability increasingly complex. Existing legal frameworks—particularly those rooted in tort, contract, and criminal law—are ill-equipped to address the challenges posed by these autonomous technologies.
This article explores the legal accountability of AI, focusing on how current doctrines such as negligence, strict liability, and vicarious liability apply to AI systems. It delves into real-world examples, such as accidents caused by self-driving cars and discrimination by AI-powered hiring tools, to demonstrate the inadequacies of conventional legal principles. Furthermore, the article examines emerging legal responses, including the European Union’s proposed AI Act, and debates around granting AI systems a form of electronic personhood or mandating liability insurance for high-risk applications.
The core issue lies in the gap between technological capability and legal responsibility. Without tailored regulations or a well-defined legal structure, AI-induced harms risk going unremedied, while developers, users, and manufacturers face uncertainty. The article concludes that a hybrid legal model—combining statutory reform, technological transparency, ethical guidelines, and clear liability standards—is essential to navigate the complex interplay between innovation and accountability. As AI continues to evolve, so too must the legal doctrines that seek to regulate it, ensuring justice, fairness, and safety in an increasingly automated world.
Case Laws
Uber Self-Driving Car Crash (Arizona, USA, 2018)
In the first recorded pedestrian fatality caused by an autonomous vehicle, an Uber test vehicle operating in self-driving mode struck and killed a pedestrian. Investigations revealed system failures and lack of human monitoring. Although the backup driver was charged with negligent homicide, Uber was not held criminally liable. This case exposed the legal vacuum surrounding corporate accountability for AI-driven accidents and highlighted the limitations of traditional tort principles in dealing with autonomous technology.
Ryan v. Google LLC (2023, USA)
This case involved Google’s AI-powered medical tool that provided incorrect health information, leading to harm. Plaintiffs sued under product liability and negligence claims. The case was dismissed in part due to Section 230 protections and the lack of a statutory basis for AI accountability. It revealed judicial hesitation to impose liability where statutory clarity is lacking, especially when AI is not a tangible product in the traditional sense.
CNIL v. Clearview AI (France, 2022)
The French data protection authority fined Clearview AI €20 million for violating the General Data Protection Regulation (GDPR) by scraping facial data without consent. Although not a conventional tort case, it was pivotal in affirming that AI companies handling biometric data are subject to stringent compliance rules. The ruling emphasized algorithmic transparency and data accountability under European privacy law.
Oberdorf v. Amazon.com Inc. (USA, 2019)
A third-party product sold through Amazon caused injury, but Amazon’s algorithm played a key role in recommending it. The U.S. Court of Appeals initially held Amazon potentially liable, introducing the idea that AI algorithms may contribute to harm and therefore create platform responsibility. However, the decision was later vacated, showing the unsettled nature of platform liability in AI-mediated commerce.
In re Facebook Biometric Information Privacy Litigation (USA, 2020)
Facebook settled for $650 million over claims it violated Illinois’ Biometric Information Privacy Act (BIPA) by using facial recognition without user consent. The case was a landmark in recognizing biometric data misuse by AI systems and laid the foundation for similar lawsuits involving AI-driven surveillance and profiling technologies.
Conclusion
As artificial intelligence continues to permeate every facet of human life—be it through autonomous vehicles, predictive algorithms, or healthcare diagnostics—it becomes increasingly urgent to address the legal gaps surrounding accountability. The current legal infrastructure, grounded in doctrines like negligence, strict liability, and vicarious liability, was designed with human actors in mind. It falters when confronted with AI systems that are non-human, adaptive, and capable of making decisions independent of direct human control. This disconnect leaves both courts and litigants grappling with fundamental questions: Who is responsible when AI errs? How can fault be assigned when the chain of causation is opaque, shared, or evolving?
From product liability claims against developers to regulatory actions targeting data misuse, attempts to impose responsibility have thus far been inconsistent and fragmented. The lack of legal clarity not only affects victims seeking redress but also hampers innovation. Developers and businesses face uncertainty and potential exposure to lawsuits for outcomes they may not foresee or control, particularly as AI becomes increasingly autonomous.
To ensure justice and foster responsible technological development, legal reform is imperative. This includes formulating AI-specific statutes that define liability parameters, mandating transparency and auditability in algorithmic systems, and possibly instituting compulsory insurance schemes for high-risk AI applications. International cooperation is also necessary, given AI’s borderless impact, and the European Union’s AI Act serves as a potential model for global alignment.
Ultimately, the legal system must evolve in tandem with technology. A balanced, multi-tiered framework—one that upholds accountability, compensates harm, and promotes innovation—is the need of the hour. Without such reforms, society risks allowing AI to operate in a legal vacuum, where no one is truly responsible when things go wrong. The law must catch up—not to stop AI—but to civilize it.
FAQs
Can AI be sued or prosecuted directly under current law?
No. AI lacks legal personhood, so it cannot be sued or prosecuted. Liability typically falls on developers, operators, or companies deploying the AI.
What happens when AI causes harm but no human fault is found?
In such cases, legal systems struggle. Courts may turn to product liability or insurance frameworks, or responsibility may be extended contractually or legislatively.
Are there any international laws specifically governing AI accountability?
Not yet universally. The European Union is leading with its proposed AI Act, but most countries rely on adapting existing laws.
Can AI bias or discrimination be legally challenged?
Yes, particularly under anti-discrimination, data protection, or consumer protection laws, depending on jurisdiction.
Is insurance a viable solution for AI-related harm?
Yes. Mandatory insurance for high-risk AI systems is being considered to ensure compensation and risk-sharing in cases where liability is unclear.