The Legal Personality of AI: Can Machines Have Rights and Duties?

AUTHOR: ISHITA SETHA


TO THE POINT
Legal Personality means the capacity to have rights and duties under the law. Traditionally, this is granted to natural persons (humans) and legal persons (e.g., corporations).

• AI lacks consciousness, intentionality, and moral judgment, which are pivotal to bearing legal responsibility.

• Currently, no legal system directly recognizes AI as a legal person. AI is treated as a device or product, and liability falls on developers, users, or owners.

• Debate: Some scholars propose a form of “electronic personhood” for advanced AI, especially for autonomous systems making decisions without human input.

• Key Questions:

– Who is responsible if AI causes damage?
– Can AI own property or enter contracts?
– Can AI be punished or held liable?

• EU Perspective: The European Parliament considered legal personhood for AI in its 2017 robotics resolution but did not adopt it, preferring clear human responsibility.

• Bottom Line: As of now, AI does not have legal personality; legal systems prioritize human accountability over granting machines rights and duties.

THE LEGAL JARGON
1. Legal Personhood – The status of being a holder of rights and duties under the law.
2. Natural Person – A human being with legal capacity from birth to death.
3. Artificial Legal Person – A non-human entity (such as a corporation) recognized by law as capable of holding rights and liabilities.
4. Fiction Theory – The theory that the personality of non-human entities is a legal fiction conferred by the state, rather than an inherent attribute of the entity itself.
5. Autonomous Systems – AI systems capable of independent decision-making with minimal human intervention.
6. Strict Liability – Liability imposed without proof of negligence or intent to harm (often invoked in discussions of AI-caused harm).
7. Mens Rea – “Guilty mind”; the mental element of intent required in criminal law. AI lacks mens rea, which complicates criminal accountability.

THE PROOF
As of now, no legal system in the world grants artificial intelligence (AI) the status of a legal person. The European Parliament, in its 2017 resolution on Civil Law Rules on Robotics (2015/2103(INL)), considered assigning “electronic personhood” to highly autonomous AI systems, but the proposal was ultimately rejected, with lawmakers emphasizing the need to preserve clear human responsibility and to avoid ethical and legal ambiguity.

International and national policy instruments take the same position. The UNESCO Recommendation on the Ethics of Artificial Intelligence (2021) provides that AI systems should not themselves be given legal personality and that ethical and legal responsibility must always rest with natural or legal persons. In India, the NITI Aayog’s National Strategy for Artificial Intelligence (2018) likewise treats AI as a tool to support human decision-making, not as an entity capable of holding rights or liabilities.

Prominent legal scholars such as Ugo Pagallo and Joanna Bryson argue against AI personhood, noting that AI lacks the essential attributes of legal subjectivity: consciousness, moral judgment, and the capacity to be held responsible.

ABSTRACT
The rapid development of artificial intelligence (AI) has raised critical questions about its place in legal systems, particularly whether AI can or should be granted legal personality, that is, the capacity to hold rights and duties under the law. This article surveys the current legal, ethical, and theoretical landscape surrounding the recognition of AI as a legal person. While AI systems have demonstrated increasing autonomy and decision-making capability, they lack consciousness, intent, and moral judgment, the elements traditionally required for legal personhood. Existing legal frameworks, including those of the European Union and India as well as UNESCO’s 2021 Recommendation, maintain that AI should be treated as a tool or product, with responsibility resting solely on human actors such as developers, users, or institutions. The proposal of “electronic personhood” for AI, though discussed in legislative and academic circles, has been widely criticized and ultimately rejected due to concerns about diluting human accountability and creating legal uncertainty.

CASE LAWS
While there are no cases in which AI has been treated as a legal person, the following decisions illustrate principles of personhood, liability, and responsibility:

1. United States v. Athlone Industries, Inc., 746 F.2d 977 (3d Cir. 1984)
   • Principle: Only entities recognized as legal persons can be parties to litigation; the Third Circuit, in a case involving defective robotic pitching machines, observed that “robots cannot be sued.”
   • Relevance: AI, not being a legal person, cannot sue or be sued; liability attaches to the humans or companies behind it.

2. Salomon v. Salomon & Co. Ltd. [1897] AC 22 (House of Lords, UK)
   • Principle: Established the concept of corporate legal personality.
   • Relevance: Highlights that legal personality is a legal construct that could theoretically be extended to AI, though it has not been.

CONCLUSION
AI cannot currently be considered a legal person under any existing legal framework. Although theoretical discussions around “electronic personhood” exist, no jurisdiction has implemented such a status. Both international and national bodies, including the European Union, UNESCO, and India’s NITI Aayog, emphasize the necessity of human accountability in AI-related decisions. Legal personhood requires attributes such as intentionality, consciousness, and the capacity to bear moral and legal responsibility, qualities AI does not possess. Therefore, under present legal principles, AI cannot hold rights or duties, and all liabilities remain with human actors such as developers, operators, and owners.

FAQs
1. Can AI be sued in court?
No. AI is not a legal person and cannot be a party in legal proceedings. Liability rests with the owner, user, or developer.

2. What is electronic personhood?
It is a proposed concept under which advanced AI systems would be granted limited legal status similar to that of corporations. The idea has been widely rejected or avoided in practice.

3. Does any country recognize AI as a person?
No. As of now, no country grants AI legal personality or independent legal rights.

4. Who is liable if AI causes damage or injury?
The human developers, manufacturers, users, or owners are held liable under current laws. Depending on the facts, doctrines such as product liability, vicarious liability, or strict liability apply.

5. Can AI enter contracts or own property?
No. AI lacks the legal capacity and the contractual intent required for such acts (mens rea, by contrast, concerns criminal intent). Where an AI system executes a transaction, the resulting contract binds the human or corporate party that deployed it.
