Legal Personality of AI and Liability for Autonomous Decisions


Author: Mansi R. Jadhav, Shahaji Law College, Kolhapur

To the Point


Artificial Intelligence is no longer confined to labs or software experiments; it is now part of our daily lives, influencing governance, business, healthcare, and transportation. From AI tools that help screen job applicants to self-driving cars making split-second decisions on the road, machines are now performing tasks that once only humans could do. But while AI is advancing fast, our legal system is struggling to keep up.

When an AI system causes harm, should we keep holding the humans behind it responsible, or should we start treating the AI as legally responsible in its own right?
This article explores a critical and still unresolved question: Should AI be given a legal identity like a person? And if not, how can we fairly assign responsibility for the decisions it makes on its own? India’s current legal system is still focused on human responsibility, but the growing independence of machines is forcing us to rethink how the law should respond.

Use of Legal Jargon


Legal Personhood: The capacity of an entity, whether human, corporate, or artificial, to hold rights and duties.


Strict Liability: A liability model where fault is irrelevant if harm results from inherently risky activity.


Mens Rea: The ‘guilty mind’ required to establish criminal intent, a concept virtually inapplicable to AI systems.


Vicarious Liability: Responsibility imposed on one party for the actions of another, typically in hierarchical relationships.


Electronic Personhood: A proposed category of legal recognition for highly autonomous AI agents, with limited responsibilities.

The Proof


What makes AI different is not just that it can automate tasks, but that it can make decisions on its own. This independence creates a challenge for the legal system because key legal concepts like intent, foreseeability, and control become unclear. For instance, imagine a predictive policing tool powered by AI that wrongly flags an innocent person. Should we blame the programmer for bias? The party that curated the training data? Or the AI system itself, which learned and made the decision on its own?
So far, Indian courts have not directly dealt with this issue. At present, they apply existing doctrines like product liability or negligence, treating AI as a tool. These models assume that a human is always supervising the AI. But that assumption no longer holds true, especially for advanced AI systems that learn and update themselves through deep learning.


Consider AI tools that learn continuously from real-world data such as traffic patterns or customer behaviour. These systems begin making decisions independently. Yet our current tort law rests on fixed notions of causation and blame, which fit poorly with AI’s constantly evolving nature. A 2023 paper by the Meta Legal Centre captures this gap, stating that “liability without legal identity is fast becoming legally untenable.” This is a pressing concern for India, where technology regulation is still developing and not fully prepared for such complex situations.


Internationally, the European Parliament suggested in 2017 that highly autonomous AI should be given “electronic personhood.” Although this proposal was not passed into law, it opened up a key discussion: if we already treat non-human entities like companies and ships as legal persons, why not extend similar treatment to AI systems that act just as independently?
That said, granting AI full legal status in India would face serious challenges. Indian law is built around the idea that only humans can bear responsibility; even when we hold companies accountable, liability ultimately traces back to the people in charge. So the real task is to create a legal framework that recognizes AI’s growing independence without removing the accountability of the humans behind it.

Abstract


As Artificial Intelligence systems grow in autonomy and decision-making authority, they expose structural gaps in legal liability doctrines. This article examines the feasibility of recognizing AI as a legal person under Indian law and critiques the limitations of current doctrines such as negligence, product liability, and vicarious liability. By comparing international approaches and identifying legislative silences in India, it proposes a hybrid model of limited AI legal personhood backed by mandatory insurance, algorithmic audits, and regulatory oversight. The aim is not to anthropomorphize AI, but to contain its legal disruption through smart lawmaking.

Case Laws


1. Tesla Autopilot Crash (California, 2021)
In 2021, a Tesla operating in Autopilot mode was involved in a fatal crash in California. At the time of the accident, the car was being controlled by Tesla’s AI system rather than the human driver.
Even though the AI made the driving decisions, the legal case was filed against Tesla, the manufacturer, not against the AI system. This shows that under current law, AI is not treated as a legal person even when it acts independently. The case highlights a major gap in existing legal systems: who should be held responsible when an AI system causes harm?


2. Uber Spain – ECJ Case C‑434/15
In this case, formally Asociación Profesional Elite Taxi v Uber Systems Spain (2017), the European Court of Justice (ECJ) examined how Uber operates through its mobile app and algorithms. Uber used algorithm-driven systems to assign drivers, set prices, and manage its service without direct human involvement.
The Court ruled that Uber was not merely a digital platform but a transportation service provider, and that it could be held responsible for decisions made by its algorithms. The judgment was significant because it showed that companies cannot hide behind AI systems or disclaim liability simply because a machine made the decision. It set a precedent for holding companies accountable for AI-driven outcomes.


Conclusion


The rise of autonomous AI challenges the foundations of our legal system. As AI begins to make decisions that affect real-world outcomes, often without human involvement, our current liability models, rooted in human intent and foreseeability, become increasingly outdated. The question is not simply whether AI should be treated as a legal person, but whether our legal tools are prepared to handle a non-human decision-maker with real-world consequences.
Full legal personhood for AI may still be premature, especially in India where jurisprudence remains deeply anthropocentric. But continuing to treat advanced AI as mere property also fails to address accountability gaps. The solution lies in recognizing the functional autonomy of AI systems and adapting liability frameworks accordingly.
India must begin by strengthening accountability through existing human channels (developers, deployers, and manufacturers) while simultaneously exploring whether certain high-autonomy systems should be assigned a limited legal identity, not for moral recognition, but for clear legal responsibility. This can be supported through measures like mandatory insurance, algorithmic audits, and sector-specific regulation.
Ultimately, the law must not fall behind the machine. As AI grows in autonomy, the legal system must evolve, not by copying foreign models, but by crafting Indian solutions rooted in constitutional values of justice, fairness, and accountability. Recognizing liability in the age of autonomous decisions is not just a legal necessity; it is a democratic one.

FAQs


Q1. Is legal personality for AI recognized in India?
As of now, Artificial Intelligence has not been granted the status of a legal person under Indian law. All liability is traced back to human or corporate actors.


Q2. Why is current liability law inadequate for AI?
Traditional doctrines like negligence and product liability presuppose human intent or foreseeability, assumptions that break down when AI makes decisions autonomously.


Q3. Can AI be criminally prosecuted?
No. Criminal law hinges on mens rea (guilty intent), which AI lacks. Only civil liability can be creatively applied.


Q4. What is the global stance on AI legal personhood?
While the EU proposed “electronic personhood” in 2017, no country has enacted such a status yet. However, insurance-backed AI liability regimes are gaining ground.


Q5. What should India’s legal response be?
India should legislate a graded liability framework with scope for limited AI personhood, ensuring compensation without diluting accountability.
