The Legal Personality of AI: Should AI Systems Be Held Liable for Harm?

Author: Khadijah Khan, a student at the School of Law, UPES Dehradun

To the Point

Artificial Intelligence (AI) is no longer passive. From driverless cars to virtual assistants and predictive policing, AI systems today are capable of making decisions independently, sometimes with minimal or no human intervention. But if such a system causes harm, such as an autonomous vehicle killing a pedestrian or a health bot misdiagnosing a patient, who should be held accountable?

Today, AI cannot itself be punished or held responsible. The law instead places responsibility on developers, users, or businesses. However, this model becomes blurred when AI acts unexpectedly or autonomously. Legal systems are now grappling with whether AI should be treated as a juridical person, like a corporation, or remain classified as a product.

This paper discusses whether legal personality can be granted to AI systems so that they can be held directly liable for the harm they cause. It considers existing laws, loopholes in the current legal system, relevant case law, and suggested reforms, with a particular focus on India.

Use of Legal Jargon

Legal Personality: The capacity of an entity (such as a company) to hold rights and duties, including the right to sue and be sued.

Strict Liability: Liability imposed without the need to prove intent or negligence. Commonly used in product liability cases.

Vicarious Liability: Where one person (like an employer) is held liable for the actions of another (like an employee).

Mens Rea: Literally “guilty mind,” the mental element at the heart of criminal liability. AI does not possess it.

Product Liability: When a seller or manufacturer is legally responsible for a faulty product.

Black Box AI: AI systems that make decisions in a way that can’t be understood or explained.

Algorithmic Bias: When AI systems produce discriminatory or unequal outcomes based on biased data or biased design.

The Proof

AI’s Autonomous Capabilities Are Growing Rapidly

AI systems are no longer confined to straightforward automation. Generative AI (such as ChatGPT and DALL·E) can now write, create, and make recommendations, while machine learning algorithms underpin critical infrastructure in banking, healthcare, surveillance, and even warfare. Self-driving cars, AI-powered surgical robots, and autonomous weapon systems are all instances of systems making real-time decisions without human intervention. These systems learn and adapt, meaning they may act in ways not explicitly programmed by their creators.

Accountability Is Frequently Diffused

In cases of injury, it is difficult to identify the “culpable” party. Take a predictive policing AI that results in wrongful arrest. Should fault be assigned to the data scientists who trained it, the police department for using it, the tech company for creating it, or the AI itself? Classic liability models presume a direct causal connection between act and harm. With AI, that connection is frequently obscured by multiple contributory agents and self-improving algorithms.

The Problem of the ‘Black Box’

Advanced AI systems—especially those using deep learning—are often referred to as black boxes because their internal decision-making process is opaque, even to their creators. This makes it difficult to audit or explain how a particular decision (such as denying a loan or misidentifying a suspect) was reached. This lack of explainability poses serious problems for legal accountability, especially under due process and natural justice principles.

AI’s Influence Over Human Rights

AI has implications for basic rights such as privacy, freedom of expression, equality, and due process. For example, facial recognition technologies employed by governments and companies raise concerns under Article 21 (Right to Life and Personal Liberty) of the Indian Constitution. Access Now, in its 2020 report, found that at least 64 states were employing AI-based surveillance, frequently without adequate legal safeguards.

The European Union’s Risk-Based Approach

The EU Artificial Intelligence Act (AIA) classifies AI systems according to the level of risk they pose: unacceptable, high-risk, limited-risk, and minimal-risk. High-risk applications (such as those used in critical infrastructure, law enforcement, and recruitment) are subject to stringent legal obligations, including transparency requirements and human oversight. While the AIA does not grant legal personhood to AI, it explicitly acknowledges the autonomy of these systems and the potential harm they can cause, and seeks to regulate them accordingly.

Insurance-Based Proposals for AI Liability

Some scholars have suggested that, rather than granting full legal personhood to AI, developers and deployers could be required to carry mandatory insurance, much as motor vehicle owners must. This way, victims would be compensated even where fault or negligence cannot easily be determined. Such a no-fault liability regime is particularly well suited to AI incidents where the damage is genuine but legal causation is uncertain.

Economic and Social Stakes Are Increasing

A 2023 estimate from McKinsey indicates that AI might add $13 trillion to the global economy by 2030. This potential expansion carries significant risks, such as job loss, biased profiling, cybersecurity threats, and erroneous decision-making. India’s NITI Aayog predicts that AI could add $500 billion to the nation’s GDP by 2025. Nevertheless, India currently lacks specific laws governing AI use, accountability, and consumer protection against AI-related risks.

Comparative Models of Legal Personhood

Legal personality has long been attributed to non-human actors such as corporations, governmental institutions, and rivers. For instance, in Mohd. Salim v. State of Uttarakhand (2017), the rivers Ganga and Yamuna were accorded legal personhood (although the Supreme Court later stayed the order). If rivers and corporations can be granted rights and liabilities under law, why not highly autonomous AI systems capable of producing real-world effects?

The Need for Algorithmic Accountability in Public Functions

Public authorities are increasingly employing AI for administrative decisions, ranging from determining eligibility for benefits to detecting tax fraud. In such instances, the public has a right to equitable treatment, explanation, and appeal, which present laws fail to secure clearly when the decision-maker is an AI. The absence of a formal mechanism to appeal a decision made by an AI system demonstrates a regulatory gap.

Absence of Statutory Remedies in India

In contrast to the EU’s General Data Protection Regulation (GDPR) and the AI Act, India has no specific legislation on AI liability. The Digital Personal Data Protection Act, 2023, offers some safeguards for the use of personal data but does not address harms resulting from AI-driven decisions or actions. Victims of AI malfunctions in India must, as of now, fall back on general legal principles such as negligence or defective-product claims, which are not always sufficient.

Abstract

As artificial intelligence becomes more capable and autonomous, it poses new challenges for the law. The essential question is whether AI can be treated as a legal person, one that can be held accountable for the harm it causes. This article examines current legal frameworks, identifies their shortcomings in addressing AI personhood, and argues that responding to AI requires reasoned and proportionate intervention. The article concludes by suggesting a cautious but methodical approach to AI liability, particularly within the Indian legal framework.

Case Laws

1. Uber Autonomous Car Crash – Arizona, USA (2018)

A self-driving Uber car hit and killed a pedestrian. Investigations revealed that the AI system failed to recognize the pedestrian promptly. While legal action was taken against the human safety driver, the AI system itself could not be held accountable in a legal sense, as it lacks a defined legal status.

2. State v. Loomis (Wisconsin Supreme Court, 2016; certiorari denied by the U.S. Supreme Court, 2017)

The sentencing court relied on a proprietary risk-assessment algorithm (COMPAS). The defendant argued that the tool was biased and opaque. Although the court upheld its use, the case raised significant questions about algorithmic due process and fairness.

3. Tesla Autopilot Accidents – Ongoing (USA)

Several incidents involving Tesla’s Autopilot feature have resulted in fatalities or injuries. In most cases, Tesla has contended that the drivers misused the technology. However, the line between driver fault and AI malfunction remains blurred.

4. Facebook–Cambridge Analytica Scandal

The use of algorithms designed to influence voter behavior became a matter of global concern. The case did not involve physical harm, but it illustrated how powerfully AI systems can affect democratic processes and how few legal protections were in place.

5. Hypothetical Indian Scenario – Misuse of Facial Recognition

Suppose an AI facial recognition system misidentifies a suspect and the person is arrested. In the absence of an AI liability law, the individual could sue only the police department, not the system or its developer, exposing the accountability gap in Indian law.

Conclusion 

AI systems are no longer merely tools; they are decision-making entities that can influence human lives, finances, freedoms, and even democracy. As they become increasingly autonomous, the legal system is confronted with a fundamental dilemma: how should they be held accountable?

While full legal personhood for AI may not yet be appropriate, treating such systems as mere tools is also insufficient. Perhaps the solution lies in a balanced approach: recognising some autonomous AI systems as legally responsible “agents” while maintaining human direction and control.

India needs to lead the way by:

  • Enacting legislation specific to AI
  • Defining categories of “high-risk” AI systems
  • Enforcing compulsory insurance and safety standards
  • Establishing a regime of civil liability for AI-caused harm
  • Enabling courts and regulators to carry out algorithmic audits

The law needs to change, not to dampen innovation, but to provide accountability in an AI age.

FAQs 

Q1: What does it mean to grant legal personality to AI?

It means treating AI as a legal entity, similar to a corporation, that can own property, be sued, or be held liable for harm.

Q2: Is AI already a legal person anywhere in the world?

No. There have been proposals (e.g., by the EU), but no state has conferred legal personhood on AI yet.

Q3: How is the liability of AI dealt with in India currently?

Liability is currently addressed through general legislation such as the Information Technology Act, 2000, and the Consumer Protection Act, 2019; however, these laws were not framed with autonomous AI in mind.

Q4: What are the risks of attributing legal personality to AI?

It could allow the actual human actors (developers, firms) to escape liability, and it raises ethical concerns about equating machines with people or corporations.

Q5: Is AI punishable under criminal law?

No. Criminal law demands intent (mens rea), which AI does not possess. AI cannot understand punishment or morality.

Q6: What’s a better alternative to full personhood?

Developing mandatory insurance programs, regulating high-risk AI, and establishing an explicit liability regime for developers and users.

Q7: How can we ensure fairness and transparency in AI decisions?

By demanding algorithmic audits, explainable AI models, and human review, particularly in sensitive domains such as criminal justice, finance, and hiring.
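To give a concrete sense of what one small part of an algorithmic audit might look like in practice, the following Python sketch computes a simple “disparate impact” ratio between two demographic groups, a metric sometimes used in fairness reviews. The loan-approval framing, the sample data, and the 0.8 threshold (the informal “four-fifths rule” borrowed from employment-discrimination practice) are illustrative assumptions, not a prescribed legal standard.

# Minimal illustrative sketch of one kind of algorithmic audit: a demographic
# parity ("disparate impact") check on an AI system's decisions.
# All data and thresholds below are hypothetical and for illustration only.

def selection_rate(decisions):
    """Fraction of favourable outcomes (1 = favourable, 0 = unfavourable)."""
    return sum(decisions) / len(decisions) if decisions else 0.0

def disparate_impact_ratio(decisions_group_a, decisions_group_b):
    """Ratio of selection rates between two demographic groups.

    A ratio below 0.8 is often treated (under the informal "four-fifths
    rule") as a signal that the outcomes warrant closer human review.
    """
    rate_a = selection_rate(decisions_group_a)
    rate_b = selection_rate(decisions_group_b)
    if rate_b == 0:
        return float("inf")
    return rate_a / rate_b

if __name__ == "__main__":
    # Hypothetical audit data: 1 = loan approved, 0 = loan denied.
    group_a = [1, 0, 1, 0, 0, 1, 0, 0, 0, 0]   # 30% approval rate
    group_b = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]   # 70% approval rate

    ratio = disparate_impact_ratio(group_a, group_b)
    print(f"Disparate impact ratio: {ratio:.2f}")
    if ratio < 0.8:
        print("Potential bias flagged: refer decisions for human review.")

A real audit would of course be far broader, covering data provenance, model documentation, and appeal mechanisms, but even a simple statistical check of this kind shows that fairness review is technically feasible and can be mandated by regulators.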

Q8: Is there a timeline for India’s AI regulation?

There is no set timeline at present. However, policy discussions are ongoing, and legislation is expected in the near term.
