Author: Deepak Kumar Gupta, United University, Prayagraj
LinkedIn profile: http://linkedin.com/in/deepak-kumar-gupta-3758a6336
To the Point
Traditional legal systems hinge on human culpability—based on mens rea (guilty mind) and actus reus (guilty act). AI, however, challenges this by acting autonomously, without intentions or emotions. While machines can “cause” harm, they lack the consciousness necessary to be considered culpable under existing laws. Should we amend our laws to hold machines directly liable, or should we always trace responsibility to a human or corporate entity behind the AI?
Abstract
Artificial Intelligence (AI) has moved beyond science fiction; it is now a crucial component of sectors such as transportation, healthcare, finance, and law enforcement. With the rise of autonomous systems, the traditional legal doctrines rooted in human intent and action are increasingly inadequate. This article investigates whether AI can be considered a legal entity capable of liability, especially in criminal and civil jurisprudence. It analyses the doctrinal limitations of mens rea in the context of machines, explores current international and Indian legal positions, and critiques existing regulatory frameworks. Drawing on case laws and comparative legal studies, this article suggests a hybrid liability model where responsibility is shared between developers, users, and regulators. It advocates for statutory innovation to ensure that the law remains responsive to the realities of algorithmic decision-making.
Use of Legal Jargon
Mens Rea: A guilty mind; intention or knowledge of wrongdoing.
Actus Reus: A guilty act; the physical element of wrongful conduct by the defendant.
Vicarious Liability: Legal responsibility imposed on one party for the acts of another.
Strict Liability: Responsibility without the requirement to establish blame or carelessness.
Personhood in Law: The status of being a legal person, capable of bearing rights and duties.
The Proof
Autonomous Actions: Self-driving cars have caused fatal accidents while operating in autonomous mode. Example: an Uber test vehicle fatally struck a pedestrian in Tempe, Arizona (2018).
Generative AI Risks: Deepfakes, automated disinformation campaigns, and financial frauds have been traced to AI-generated content.
Algorithmic Bias: Recidivism risk-assessment tools such as COMPAS have been criticized for racial bias, raising ethical and legal liability concerns (the sketch after this list shows how such bias can be quantified).
EU Developments: The European Parliament, in a 2017 resolution, proposed considering the most sophisticated AI systems as “electronic persons” for civil liability purposes.
India’s Legal Vacuum: Despite AI adoption, India lacks statutory provisions specifically addressing machine liability.
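To make the bias claim concrete, consider how such a disparity is actually measured. The following minimal Python sketch, using invented illustrative figures rather than real COMPAS data, computes a risk tool’s false positive rate for two demographic groups; a marked gap between the rates is the kind of statistical evidence critics of these tools rely on:

# Minimal sketch: measuring disparate false positive rates in a
# risk-assessment tool's output. All data below is invented for
# illustration; it is not drawn from COMPAS or any real dataset.

records = [
    # (group, predicted_high_risk, actually_reoffended)
    ("A", True,  False), ("A", True,  False), ("A", False, False),
    ("A", True,  True),  ("A", False, True),
    ("B", False, False), ("B", True,  False), ("B", False, False),
    ("B", True,  True),  ("B", False, True),
]

def false_positive_rate(group):
    """FPR = wrongly flagged high-risk / all who did not reoffend."""
    negatives = [r for r in records if r[0] == group and not r[2]]
    false_pos = [r for r in negatives if r[1]]
    return len(false_pos) / len(negatives)

for g in ("A", "B"):
    print(f"Group {g}: FPR = {false_positive_rate(g):.2f}")
# A large gap between the two rates (here 0.67 vs 0.33) is the sort
# of disparity documented in the public COMPAS debate.

It was precisely this metric, the rate at which people who did not reoffend were wrongly flagged as high-risk, that ProPublica’s 2016 analysis found diverged sharply across racial groups.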
Case Laws
1. Ryan v The Queen (1967) HCA 2
Jurisdiction: High Court of Australia
Summary: The Court considered whether the reflexive discharge of a firearm was a voluntary act, affirming that criminal liability presupposes a willed act under the accused’s control, with a clear nexus between the act and the person responsible.
Relevance: This becomes problematic with AI, especially when the decision-making process is autonomous and opaque. Control and foreseeability are hard to establish when machine learning systems evolve beyond their original programming.
2. United States v. Athlone Industries, Inc. (3d Cir. 1984)
Jurisdiction: U.S. Court of Appeals for the Third Circuit
Summary: In penalty proceedings over defective robotic pitching machines, the court famously observed that “robots cannot be sued,” fixing responsibility on the corporation that made and sold them.
Relevance: It opens the door for non-human actors (like AI systems operated by corporations) to be held liable under constructive or vicarious theories. This lays foundational logic for AI liability through corporate personhood.
3. Uber Autonomous Vehicle Case (Arizona, 2018)
Jurisdiction: United States (State of Arizona)
Summary: A self-driving Uber test vehicle struck and killed a pedestrian in Tempe. Prosecutors declined to charge Uber itself; the backup safety driver was charged with negligent homicide and, in 2023, pleaded guilty to an endangerment offence.
Relevance: This case is pivotal in questioning whether liability lies with the software, the human safety driver, or the corporation. The event underscored the lack of laws tailored for AI.
4. Brown v. Kendall, 60 Mass. 292 (1850)
Jurisdiction: Supreme Judicial Court of Massachusetts
Summary: One of the earliest American cases that established fault-based negligence as a cornerstone of tort law.
Relevance: With AI, the “reasonable person” standard grows ambiguous. Can we expect AI to behave like a reasonable human being? The case lays bare this philosophical tension.
5. Bhavesh Jayanti Lakhani v. State of Maharashtra, (2009) 9 SCC 551
Jurisdiction: Supreme Court of India
Summary: The Court held that criminal liability requires the establishment of mens rea—the intention to commit a crime.
Relevance: AI lacks consciousness and cannot form intention. Hence, under the Indian Penal Code (IPC), and equally under its successor, the Bharatiya Nyaya Sanhita, 2023, AI cannot be criminally prosecuted, exposing a legislative gap.
6. Union Carbide Corporation v. Union of India (Bhopal Gas Tragedy), AIR 1989 SC 119
Jurisdiction: Supreme Court of India
Summary: Though this case didn’t involve AI, the Bhopal litigation applied the principle of absolute liability for hazardous enterprises, first articulated in M.C. Mehta v. Union of India (1987).
Relevance: This doctrine may be analogically applied to AI developers who unleash powerful, potentially harmful algorithms into society.
7. Tata Motors Ltd. v. State of West Bengal (2023)
Jurisdiction: Calcutta High Court
Summary: Though not directly related to AI, the court discussed the obligations of corporations in deploying technologies that affect public life.
Relevance: The ruling hints at emerging judicial willingness to scrutinize technological risks even without express legislative provisions.
Conclusion
The integration of AI into everyday decision-making challenges the very foundations of traditional legal principles. While machines can now act autonomously, they do not possess the mental faculties required for criminal intent. Hence, attributing criminal liability to AI remains doctrinally and morally untenable under current legal frameworks.
However, this legal vacuum must not lead to impunity. A multi-tiered liability model is required:
Strict Liability for manufacturers and developers: If the AI system causes harm, they should be held accountable, irrespective of negligence.
Vicarious Liability for users or deploying agencies: Especially when AI acts under human supervision or within institutional frameworks.
Statutory Oversight: India needs comprehensive legislation defining AI responsibilities, transparency standards, and accountability channels.
Regulatory Sandbox: Like the Reserve Bank of India’s model, regulatory testing environments for AI can be institutionalized to observe outcomes before full-scale deployment.
Judicial doctrines such as res ipsa loquitur (the thing speaks for itself) and the precautionary principle may also evolve to address algorithmic harm. Until AI achieves legal personhood or sentience (if ever), responsibility must lie with human actors, whether developers, deployers, or institutions.
In the long run, AI should not be scapegoated as a “rogue agent” nor treated as a disembodied force. It is a product of human design, and thus, human laws must continue to govern it with creativity, responsibility, and fairness.
FAQs
Q1. Can an AI or a robot be prosecuted under criminal law in India?
Answer:
No. Under the current Indian legal framework, AI systems cannot be prosecuted under criminal law. The Indian Penal Code (IPC), now succeeded by the Bharatiya Nyaya Sanhita, 2023, requires two essential elements for criminal liability: actus reus (the wrongful act) and mens rea (the guilty mind). Since AI lacks consciousness, intent, and moral judgment, it cannot form mens rea, a prerequisite for most criminal offences. Therefore, machines, regardless of their level of autonomy, cannot be held criminally liable.
Q2. Who is liable if an AI system causes harm or injury—like a self-driving car accident or a financial algorithm malfunction?
Answer:
Liability in such cases is usually attributed to one or more of the following parties:
Developers or programmers (for flawed algorithm design)
Manufacturers (for defects in hardware or safety mechanisms)
Deployers or Users (for negligent use or lack of supervision)
Corporations or Institutions (under the doctrine of vicarious liability)
Depending on the nature of harm, liability may be civil (e.g., tort or consumer protection) or criminal (e.g., negligent homicide—applicable only to humans or corporations, not AI).
Q3. Is there any international precedent or law that gives legal status to AI systems?
Answer:
No country has yet conferred full legal personhood on AI. However:
The European Parliament (2017) proposed the creation of “electronic personhood” for the most sophisticated autonomous systems, particularly for civil liability.
Saudi Arabia made headlines in 2017 by granting “citizenship” to the humanoid robot Sophia, but the act was purely symbolic and carried no legal force or precedent.
While some scholars advocate limited legal personality for AI (similar to corporations), this remains highly contested.
Q4. What happens in cases where AI systems commit actions that would be considered crimes if done by humans, such as deepfake blackmail, market manipulation, or autonomous drone attacks?
Answer:
In such instances, liability is typically traced back to the:
Developer (if the AI was trained or coded to behave unethically)
Operator or Deployer (if they used AI for malicious purposes)
Corporation or Government (in case of institutional use)
AI is treated as a tool, not an actor. The law seeks a responsible human or legal person who enabled or failed to prevent the outcome. In some jurisdictions, product liability and cybercrime laws are invoked.
Q5. Can AI be held liable under civil law, like torts or contracts?
Answer:
AI cannot be held liable directly, but courts may use civil doctrines to impose indirect liability:
Negligence: For harm caused by flawed programming or inadequate testing.
Strict Liability: Even if there’s no intent or fault, manufacturers can be liable for dangerous AI applications.
Breach of Contract: If AI acts unpredictably in automated transactions, the party deploying the AI may be held responsible.
This field is evolving, and some scholars suggest developing AI-specific tort doctrines.
Q6. Could India develop a separate legal framework for AI accountability?
Answer:
Yes, and such a framework is urgently needed. As of 2025, India does not have a comprehensive legal regime for AI regulation or liability. However, initiatives are underway:
The Ministry of Electronics and Information Technology (MeitY) has released discussion papers on ethical AI.
The Digital Personal Data Protection Act, 2023 addresses automated processing of personal data but lacks AI-specific liability clauses.
A future framework should include:
Definition of AI categories (narrow, general, autonomous)
Civil and criminal liability allocation
Algorithmic transparency and audits
AI risk rating systems
Regulatory sandbox testing before deployment
Q7. What is the “Black Box Problem” in AI and how does it affect legal accountability?
Answer:
The Black Box Problem refers to the opaque nature of many AI systems, especially those based on deep learning. These systems make decisions in ways even their creators cannot fully understand or trace.
Implications for law:
Difficulty in proving causation or foreseeability.
Challenges in establishing negligence.
Erosion of principles like natural justice and the right to explanation.
Legal systems must demand explainable AI (XAI) and transparency by design to ensure that AI decisions are reviewable.
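The difference between a reviewable model and a black box can be illustrated in a few lines. The sketch below is only a minimal illustration, assuming the scikit-learn library and a toy loan-screening dataset invented for this example: a small decision tree is trained and its complete decision logic printed, the kind of audit trail a “right to explanation” presupposes and which a deep neural network cannot natively produce:

# Minimal sketch of "explainable by design": a decision tree whose
# reasoning can be printed and reviewed, unlike a deep neural network.
# The loan-screening data below is invented purely for illustration.
from sklearn.tree import DecisionTreeClassifier, export_text

# Features: [annual_income_lakhs, years_employed]; label: 1 = approve
X = [[3, 1], [4, 2], [6, 4], [8, 5], [10, 7], [2, 0], [7, 3], [9, 6]]
y = [0, 0, 1, 1, 1, 0, 1, 1]

model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(X, y)

# Every decision rule the model can apply is enumerable and auditable:
print(export_text(model, feature_names=["income_lakhs", "years_employed"]))

A deep network trained on the same data would offer only a mass of numeric weights; demanding a printable rationale of this sort is what “transparency by design” means in practice.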
Q8. Are there any Indian court judgments related to AI liability?
Answer:
As of now, Indian courts have not issued landmark rulings specifically on AI liability, but several cases have touched on technology and corporate accountability. For instance:
Bhavesh Jayanti Lakhani v. State of Maharashtra (2009) reiterated the importance of mens rea in criminal law.
Shreya Singhal v. Union of India (2015) struck down Section 66A of the IT Act and clarified intermediary liability in cyberspace, which is relevant for platforms hosting AI-generated content.
Courts have also issued guidelines on facial recognition, biometric data, and automated surveillance, laying the groundwork for future AI regulation.
Q9. Could AI ever have rights or duties under law—like a person or a company?
Answer:
This is a philosophical and legal debate still in progress. Some argue for limited legal personhood (like corporations), allowing AI to:
Own property
Enter contracts
Bear liability
Opponents argue that AI lacks consciousness and moral agency and cannot be deterred or punished, which disqualifies it from legal personality.
As of now, AI is not a legal person anywhere in the world.
Q10. What reforms or safeguards can ensure responsible AI deployment in India?
Answer:
Here are some key reforms India could adopt:
Enact a Comprehensive AI Liability Act
Mandate transparency and risk classification for all AI systems
Require human-in-the-loop oversight for high-risk applications (illustrated in the sketch at the end of this article)
Establish an AI Ethics Commission to regulate misuse
Include AI literacy and awareness programs for judiciary and enforcement agencies
Create compensation mechanisms for victims of AI-related harm
Promote open audits and explainability standards for critical algorithms
These measures can balance innovation with accountability and ensure AI benefits society without compromising justice or safety.
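To illustrate the human-in-the-loop safeguard proposed in the list above, here is a hypothetical Python sketch (all category names, thresholds, and identifiers are invented for illustration) in which a high-risk AI recommendation cannot take effect without a named human reviewer, and every decision is logged to create the accountability trail such reforms contemplate:

# Hypothetical sketch of human-in-the-loop oversight: an AI
# recommendation in a high-risk category cannot take effect until a
# named human reviewer approves it, and every step is logged.
# The risk categories and identifiers here are invented for illustration.
import datetime

HIGH_RISK = {"medical_triage", "loan_denial", "parole_scoring"}
audit_log = []

def execute_decision(category, ai_recommendation, human_reviewer=None):
    # High-risk categories require a named human to sign off.
    if category in HIGH_RISK and human_reviewer is None:
        raise PermissionError(f"'{category}' requires human sign-off")
    # Record who (or what) authorized the action, and when.
    audit_log.append({
        "time": datetime.datetime.now().isoformat(),
        "category": category,
        "ai_recommendation": ai_recommendation,
        "approved_by": human_reviewer or "autonomous",
    })
    return ai_recommendation

# A low-risk action may proceed autonomously; a high-risk one may not.
execute_decision("spam_filtering", "block")
execute_decision("loan_denial", "deny", human_reviewer="officer_id_42")

The design point is that the gate and the log live outside the model itself: accountability is enforced by the deployment architecture, not by trusting the algorithm.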