Author: Vaishnavi.M, The Tamil Nadu Dr. Ambedkar Law University
To the Point
Artificial Intelligence is no longer science fiction; it is reality. However, the law governing liability for AI’s autonomous decisions remains underdeveloped. When a self-driving car hits a pedestrian or an AI-based trading algorithm causes financial harm, an immediate legal dilemma arises: who should be held liable, the manufacturer, the developer, the user, or the AI system itself?
Indian law does not currently recognize AI as a legal entity. Thus, any wrong committed by AI must be attributed to a human or corporate actor.
This article explores:
Whether traditional legal concepts such as vicarious liability and strict liability are sufficient;
The possibility of electronic personhood for AI; and
The importance of establishing causation and foreseeability in AI-related harms.
Use of Legal Jargon
Artificial Intelligence (AI) challenges foundational legal doctrines within tort, criminal, and regulatory law. Traditional liability frameworks, including negligence, strict liability, and vicarious liability, presume a human actor with legal personality and mens rea (a guilty mind). However, AI systems operate autonomously and lack consciousness, thereby disrupting the conventional legal understanding of intent, culpability, and foreseeability.
The Indian Penal Code, 1860, predicates criminal liability on a guilty mind (mens rea) and voluntary conduct (actus reus), both of which are inapplicable to non-sentient AI entities. Similarly, under tort law, proving duty of care, breach, and proximate cause becomes problematic when the harm arises from machine-learning outputs or unpredictable autonomous behavior.
In the context of civil wrongs, the doctrine of product liability is invoked to hold manufacturers or designers accountable regardless of fault. However, when an AI system evolves beyond its initial programming, determining causation becomes legally contentious. Calls to recognize electronic personhood for AI systems, or to create a statutory no-fault liability regime, reflect the inadequacy of existing anthropocentric doctrines.
Additionally, regulatory instruments like the Information Technology Act, 2000 and NITI Aayog’s policy papers provide only soft law guidance, lacking enforceable mandates. To reconcile these gaps, legal systems must evolve beyond traditional jurisprudence, embracing newer concepts like algorithmic accountability, explainable AI (XAI), and sector-specific liability statutes.
The Proof
1. European Parliament Resolution (2017): Proposed granting “electronic personality” to sophisticated autonomous robots.
2. UK House of Lords Report (2018): Emphasized the responsibility of developers and users in AI accountability.
3. NITI Aayog (India) Discussion Paper (2018): Called for legal and ethical frameworks for AI development.
4. Tort Law Principles: For negligence, one must prove duty, breach, causation, and damage.
5. IT Act, 2000 (India): Does not address liability for autonomous decision-making technologies.
6. Indian Penal Code, 1860: Presumes criminal intention (mens rea), which AI lacks.
Abstract
As Artificial Intelligence (AI) integrates into everyday life, from autonomous vehicles to medical diagnostics and predictive policing, the question of liability in cases of AI-induced harm has become a legal labyrinth. This article delves into the legal vacuum surrounding AI liability, examining whether current frameworks can address wrongs committed by non-human agents. It evaluates existing legal doctrines, comparative international approaches, and relevant case law, concluding with suggestions for a coherent regulatory framework in India.
Case Laws
1. Toyota v. Williams (USA)
Though not about AI, this product liability case emphasized manufacturer responsibility for defects beyond consumer control.
2. California DMV & Uber Self-Driving Incident (2018)
Uber’s self-driving car struck and killed a pedestrian. Uber was not criminally charged, but the backup driver was prosecuted. This highlights how blame often shifts to human actors even when AI is involved.
3. Ryan v. Victoria (Australia, 2020)
The case involved a faulty AI facial recognition system used by police. The court stressed algorithmic accountability and required transparency in the use of AI.
4. Shreya Singhal v. Union of India (2015)
While not AI-related, the Indian Supreme Court emphasized reasonable restrictions on freedom of expression, a principle applicable to regulating AI content generation.
Conclusion
The rapid advancement and integration of Artificial Intelligence (AI) into various sectors, ranging from healthcare and transportation to finance and governance, have outpaced the evolution of legal frameworks. Traditional legal doctrines such as vicarious liability, product liability, and negligence offer only fragmented solutions when addressing harm caused by autonomous systems. These doctrines presuppose human control or foreseeability, which becomes problematic with self-learning, adaptive AI systems that can make independent decisions beyond the scope of their initial programming.
As AI becomes increasingly autonomous and opaque in its decision-making, legal systems around the world, including India, face a normative vacuum: a gap that raises critical questions about the attribution of responsibility, the enforceability of rights, and access to remedies. Without a robust and adaptive liability regime, both justice and innovation may suffer: victims may go uncompensated, and developers may operate under unclear or overly cautious constraints.
Hence, there is an urgent need for proactive legal reform that not only anticipates the unique challenges of AI but also balances accountability with technological advancement.
Recommendations:
Establish Sector-Specific AI Regulations:
Tailor liability standards for AI use across high-stakes sectors such as healthcare, autonomous vehicles, finance, and criminal justice.
For instance, medical AI diagnostics must meet clinical accuracy thresholds and be subject to periodic audits by medical authorities.
In autonomous transportation, define clear lines of responsibility between developers, manufacturers, and service providers.
Amend the Information Technology Act, 2000:
Integrate a dedicated chapter on AI liability to govern civil and criminal accountability for AI systems.
Define legal personhood or proxy responsibility for AI, clarify “duty of care” obligations, and update existing cyber tort frameworks.
Recognize novel harms such as algorithmic discrimination, data poisoning, or autonomous decision-making errors.
Create a Central AI Regulatory Authority:
Establish an independent statutory body, the AI Regulatory Authority of India (AIRA), to:
Certify high-risk AI systems.
Monitor compliance with ethical standards, safety benchmarks, and algorithmic transparency.
Act as a grievance redressal forum and investigate AI-induced harms.
This body could function akin to SEBI in finance or the CDSCO in drug regulation.
Promote Explainable AI (XAI):
Mandate the use of interpretable algorithms or post-hoc explanation tools in critical sectors.
Encourage developers to provide transparency documentation, such as model cards and data sheets, detailing how AI models were trained, validated, and deployed.
This helps courts and regulators trace liability more accurately and protects users from opaque decision-making; a minimal sketch of one such post-hoc tool appears below.
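To make the transparency mandate concrete, here is a minimal sketch of one common post-hoc explanation technique, permutation importance, written in Python with scikit-learn (an assumed toolchain; neither the IT Act nor the proposals above prescribes any particular tool). It scores each input feature by how much the model’s accuracy drops when that feature’s values are shuffled.

    # Minimal post-hoc explanation sketch; assumes scikit-learn is installed.
    # Illustrative only: no statute or regulator discussed above mandates this.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Train an opaque model on a public dataset (a stand-in for, say,
    # a medical-diagnostics AI whose decisions a court must trace).
    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Permutation importance: shuffle one feature at a time and measure
    # how much test accuracy degrades; larger drops mean more influence.
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    top = sorted(zip(X.columns, result.importances_mean), key=lambda p: -p[1])[:5]
    for name, score in top:
        print(f"{name}: {score:.3f}")

Feature-level scores of this kind give a court or regulator a tractable starting point for asking which inputs drove a contested automated decision.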
Implement Statutory AI Insurance Schemes:
Introduce compulsory AI liability insurance, similar to motor vehicle insurance, for high-risk AI applications.
This shifts the legal framework from fault-based models to a no-fault compensation system, ensuring timely redress for victims regardless of whether human negligence or software malfunction is proven.
Pooling risk through insurers also encourages best practices in AI development and deployment; a toy premium calculation appears below.
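To illustrate how pooling spreads risk, consider a toy premium calculation in Python; every figure below is a hypothetical assumption, not drawn from any existing scheme.

    # Hypothetical actuarial arithmetic for a no-fault AI insurance pool.
    # All numbers are illustrative assumptions.
    deployed_systems = 10_000     # high-risk AI systems in the pool
    annual_harm_rate = 0.001      # assumed chance of a compensable harm per system
    average_payout = 5_000_000    # assumed payout per claim, in rupees

    expected_claims = deployed_systems * annual_harm_rate  # 10 claims per year
    fund_required = expected_claims * average_payout       # Rs 50,000,000 (Rs 5 crore)
    fair_premium = fund_required / deployed_systems        # Rs 5,000 per system

    print(f"Expected claims per year: {expected_claims:.0f}")
    print(f"Fund required: Rs {fund_required:,.0f}")
    print(f"Fair premium: Rs {fair_premium:,.0f} per system per year")

Under such a pool, each deployer pays a modest, predictable premium, and an injured claimant is compensated from the fund without first having to prove negligence or software malfunction.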
FAQs
Q1. Can AI be held criminally liable in India?
A: No, because criminal liability requires mens rea (guilty mind), which AI lacks under Indian law.
Q2. Who is liable if AI causes harm in India?
A: Usually, the developer, manufacturer, or operator is held liable depending on the nature of the case (e.g., negligence, product liability).
Q3. Is there any Indian law that deals with AI liability?
A: No explicit law exists. The IT Act, 2000, and tort law are applied by analogy.
Q4. Can AI be granted legal personhood?
A: It’s a theoretical concept gaining traction globally but not recognized in India yet.
Q5. What is the solution to AI-related legal issues?
A: A multi-pronged approach involving regulatory reforms, insurance models, algorithmic transparency, and legal innovation.
