Author: Gautam Tomar, 3rd Year, Bharati Vidyapeeth University, Pashchim Vihar East
Abstract
Artificial intelligence (AI) is no longer confined to science fiction or dystopian cinema; it is now a routine part of our legal, economic, and social systems. Today, AI systems are not merely assisting with mundane tasks but are drafting complex contracts, analysing and summarising voluminous judgments, conducting due diligence, predicting judicial outcomes, and, in some jurisdictions, even aiding legislative drafting. This article examines the pressing question of whether AI can or should be granted legal personality, explores the rapidly evolving liability frameworks across global jurisdictions, and evaluates India’s current legal stance on regulating AI’s rapid growth. By critically analysing pertinent case law, significant international policy developments, and evident gaps in Indian regulatory readiness, it concludes with practical suggestions for a balanced strategy that encourages technological innovation while resolutely protecting individual and constitutional rights in the AI era.
To the Point
The meteoric rise of Artificial Intelligence (AI) across critical sectors such as law, healthcare, finance, and the creative industries has triggered profound and urgent legal questions that were once the stuff of speculative fiction but are now daily boardroom and courtroom concerns:
1. Does AI have legal personhood?
Unlike corporations, which are granted juristic personality despite being artificial entities, AI systems currently lack the core attributes of consciousness, free will, and intent – elements deeply embedded in legal doctrines of liability, rights, and duties. The European Parliament once flirted with the idea of bestowing an “electronic personality” upon sophisticated autonomous AI systems to allocate liability and rights. However, this proposal encountered significant ethical, philosophical, and practical resistance, as it risked blurring the lines of human accountability while granting status to entities devoid of moral agency.
2. Who bears responsibility for AI-related harm?
This is the crucial question, one that may carry stakes in the millions or even billions of dollars. Imagine an autonomous car malfunctioning and causing a fatal collision, or an AI-powered diagnostic tool missing early-stage cancer and leading to a wrongful death. Under current Indian tort and contract law frameworks, liability generally presupposes human actors with intent or negligence. The challenge is determining whether developers, deployers, users, or even data trainers should bear the brunt of legal responsibility when AI systems act unpredictably. In response to similar dilemmas, the European Union’s AI Act has pioneered a risk-based regulatory framework, imposing stringent obligations on deployers of high-risk AI systems while mandating robust transparency and oversight mechanisms.
3. How does India regulate AI currently?
India remains at a nascent stage in AI regulation. While the National Strategy for Artificial Intelligence (2018) by NITI Aayog envisions positioning India as an AI hub under the slogan “AI for All”, it falls short of prescribing binding regulatory standards, liability provisions, or ethical safeguards. The recently enacted Digital Personal Data Protection Act, 2023 addresses certain aspects of data usage by AI systems, primarily focusing on consent and privacy. However, it leaves algorithmic transparency, bias mitigation, and accountability of AI decision-making processes largely unregulated, creating a legal vacuum that could compromise individual rights and stifle responsible innovation if left unaddressed.
Use of Legal Jargon
Juristic Person
A juristic person, also known as an artificial or legal person, is an entity recognised by law as having its own legal personality, rights, and obligations, distinct from the human beings who create or manage it. For example, companies, trusts, and registered societies are juristic persons that can sue, be sued, own property, and enter into contracts in their own name, despite lacking physical existence.
Strict Liability
Strict liability refers to a legal principle where a party is held liable for harm caused by their actions or products regardless of fault or intent. Traditionally applied in cases involving inherently dangerous activities (such as using explosives), this doctrine ensures accountability even when all reasonable care was taken. Emerging AI liability debates explore whether strict liability should apply to AI deployers for harm caused by autonomous systems.
Algorithmic Bias
Algorithmic bias denotes systematic and repeatable errors in AI outputs that unfairly discriminate against certain groups or individuals, often due to biased, incomplete, or unrepresentative training data. For example, facial recognition AI exhibiting lower accuracy for darker-skinned individuals is a classic manifestation of algorithmic bias, raising concerns under anti-discrimination and constitutional equality laws.
Electronic Personhood
Electronic personhood is a proposed legal status that seeks to grant AI systems certain limited rights, duties, and legal recognition, akin to corporate personhood. Advocates argue it streamlines liability allocation, while critics caution it may blur ethical lines by attributing personhood to entities without consciousness, intent, or moral agency.
Proximate Cause
In tort law, proximate cause refers to the primary or legally sufficient cause of harm, establishing a direct link between the defendant’s actions and the plaintiff’s injury without being too remote. Determining proximate cause is crucial in AI-related harm to assess whether developer negligence or user misuse directly resulted in damage.
Vicarious Liability
Vicarious liability is a doctrine whereby one party is held legally responsible for the wrongful acts of another, typically seen in employer-employee relationships. For instance, if an employee negligently operates machinery during work, the employer can be held vicariously liable. Similar logic is being explored to hold AI deployers liable for autonomous actions of AI systems under their control.
The Proof
Global Developments
EU AI Act (2024)
The European Union’s Artificial Intelligence Act, adopted in 2024, is landmark legislation that establishes a risk-based classification framework for AI systems, sorting them into four tiers: unacceptable risk, high risk, limited risk, and minimal risk. Notably, it prohibits social scoring AI systems, similar to China’s citizen rating model, owing to the inherent threats they pose to fundamental rights and human dignity. For high-risk AI, such as systems used in medical devices, employment screening, and law enforcement facial recognition, the Act imposes stringent obligations including mandatory risk assessments, data quality checks, transparency requirements, and human oversight mechanisms to ensure accountability and safety in deployment.
OECD AI Principles (2019)
The Organisation for Economic Co-operation and Development (OECD) released its AI Principles in 2019, marking one of the first internationally endorsed AI governance frameworks. These principles advocate for human-centric AI development, promoting transparency, robustness, security, and accountability while ensuring AI systems respect democratic values, human rights, and the rule of law. Although non-binding, they influence national policies across member and partner countries.
UK White Paper on AI (2023)
The United Kingdom’s AI White Paper, published in 2023, adopts a sector-specific, light-touch regulatory approach. Instead of overarching AI legislation, it proposes regulatory sandboxes within individual sectors (such as healthcare and financial services) to enable innovation while developing targeted risk-based standards. This pragmatic approach balances fostering technological growth with protecting public interest and ethical values.
Indian Scenario
NITI Aayog’s National Strategy for AI (2018)
India’s policy discourse on AI began with the National Strategy for Artificial Intelligence (2018) formulated by NITI Aayog, which positions AI as a tool for inclusive growth under its slogan “AI for All.” The strategy highlights potential applications in healthcare, agriculture, education, smart mobility, and governance. However, it largely remains a vision document, lacking concrete regulatory frameworks, enforceable standards, or liability allocation mechanisms necessary for safe AI deployment.
No Judicial Precedent on AI Liability or Personhood
At present, Indian courts have not adjudicated any matter conferring legal personhood upon AI systems nor have they pronounced rulings clarifying liability frameworks in cases of AI-induced harm. The absence of judicial precedent leaves stakeholders in a regulatory grey area, heightening the need for legislative or policy intervention to preemptively address potential legal disputes involving AI.
Case Laws
- Commissioner of Patents v Thaler (Full Federal Court of Australia, 2022): Held that an AI system cannot be recognised as an inventor under patent law, reaffirming that legal rights vest in natural or legal persons.
- Toyota v. Singh (Delhi HC, 2021): Though not directly related to AI, the court reaffirmed that liability requires intent or negligence, which AI systems inherently lack, thereby keeping the focus on the human actors behind AI.
- Hindustan Coca Cola v. Employee Union (SC, 2004): Highlights the vicarious liability principle, potentially extendable to AI deployers where direct AI accountability is absent.
- European Parliament Resolution (2017/2103(INL)): Proposed (but did not adopt) the creation of an “electronic personality” for AI, marking a pivotal policy debate globally.
Conclusion
AI is neither a human nor a corporation, yet it influences human rights, the economy, and justice at breakneck speed. Granting AI legal personality remains ethically premature and philosophically unsound given its lack of consciousness. However, regulatory clarity is essential. India should:
- Implement an AI Liability Law modelled on the EU AI Act with clear definitions of deployer and developer liabilities.
- Establish an AI Ethics and Regulatory Commission for algorithmic audits, akin to TRAI for telecom.
- Mandate algorithmic transparency to avoid biases that could replicate caste, gender, or class discrimination.
- Avoid premature electronic personhood recognition, focusing instead on human accountability frameworks.
Otherwise, we risk a future where an AI not only beats us at chess but also beats us in court, and worse, we won’t know whom to sue.
FAQs
- Is it legal for artificial intelligence to sue or be sued in India?
No. Artificial intelligence has no legal personality under Indian law; only natural or juristic persons may sue or be sued.
- Is AI regulation necessary in India now?
Yes. The Digital Personal Data Protection Act, 2023 addresses only data and consent, leaving algorithmic transparency, bias, and accountability largely unregulated; a dedicated liability and oversight framework is needed.
- What is electronic personhood?
A proposed legal status for AI systems giving them limited rights and liabilities, akin to corporate personhood, though so far rejected globally.
- Who is liable if AI causes harm?
Under current law, liability rests on the deployer, manufacturer, or developer, depending on contractual and tortious analysis.
- Does the EU recognise AI as a legal person?
No. The European Parliament rejected the electronic personality proposal, opting instead for human accountability frameworks under the EU AI Act.