
Navigating AI Liability: How Have Emerging Laws Tried to Hold Machines Accountable

Author: Piyush Shenoy, St. Aloysius (Deemed to be University), School of Law

To the Point

In recent years, lawmakers and courts around the world have attempted to regulate artificial intelligence (AI) systems by identifying who can be held responsible when things go wrong. Legal reforms, especially between 2023 and 2025, made it clear that AI-related harm can no longer be dismissed as a mere policy gap. Jurisdictions such as the European Union, India, and certain U.S. states began shifting liability from the software itself to those who design or deploy it. Courts increasingly evaluated negligence, intent, and duty of care in claims where AI systems had caused real-world consequences. These efforts to hold AI accountable marked a turning point in the evolution of tort law, product liability, and administrative governance.

Use of Legal Jargon

The Proof

In 2024, the European Union passed the Artificial Intelligence Act, which classified AI systems into different risk categories. High-risk systems, such as facial recognition in public spaces or algorithmic decision-making in finance, were subject to strict regulatory scrutiny. Developers and deployers of these tools were required to maintain records, ensure accuracy, and provide explanations for automated outcomes. Non-compliance attracted both civil and administrative penalties.

India’s Ministry of Electronics and Information Technology (MeitY) released its Draft National Strategy on AI Regulation in late 2024. It proposed mandatory algorithm audits, impact assessments, and traceability requirements for AI systems used in sensitive areas like health care and law enforcement. Liability was distributed among developers, data controllers, and vendors.
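To make these record-keeping and traceability obligations more concrete, the short Python sketch below shows one way a deployer might log a single automated decision for later audit. It is purely illustrative: the DecisionRecord structure, its field names, and the JSON-lines log format are assumptions made for this example, not requirements drawn from the EU AI Act, the MeitY draft, or any other instrument.

from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

# Hypothetical traceability record for one automated decision.
# All names and fields here are illustrative, not statutory.
@dataclass
class DecisionRecord:
    system_id: str       # identifier of the deployed AI system
    model_version: str   # version of the model that produced the output
    input_summary: str   # non-sensitive summary of the input considered
    output: str          # the automated decision or prediction
    explanation: str     # human-readable reason offered for the outcome
    timestamp: str       # when the decision was made (UTC, ISO 8601)

def log_decision(record: DecisionRecord, path: str = "decision_log.jsonl") -> None:
    # Append one record to a simple JSON-lines audit log.
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_decision(DecisionRecord(
    system_id="loan-screening-v2",
    model_version="2.3.1",
    input_summary="income band and length of credit history",
    output="application declined",
    explanation="credit history shorter than the policy threshold",
    timestamp=datetime.now(timezone.utc).isoformat(),
))

A log of this kind is what would let a developer or deployer later demonstrate, decision by decision, what the system saw, what it decided, and what explanation was offered at the time.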

In the United States, a combination of federal agency guidelines and state laws filled the regulatory space. The Federal Trade Commission (FTC) released new rules mandating disclosure obligations for firms deploying AI in consumer-facing services. Meanwhile, California enacted a state-level AI Liability Act that introduced strict liability for companies deploying autonomous systems that led to physical or economic harm.

In several lawsuits, courts began recognizing the need to assess fault even in the absence of direct human action. Where victims showed that an AI tool led to misdiagnosis, wrongful arrest, or denial of service, the burden shifted to developers to prove that reasonable precautions had been taken. This principle was applied in multiple civil claims in India, the EU, and California.

Abstract

By 2025, several jurisdictions had taken active steps to bring AI under the scope of legal liability. These regulations primarily targeted high-risk AI applications and aimed to assign responsibility when harm was caused. Legal tools such as strict liability, algorithmic audits, and traceability records were introduced to ensure accountability. Courts, too, shifted towards holding developers and deployers answerable when AI decisions produced serious negative outcomes. While the laws were still evolving, these interventions were a critical first step toward integrating AI governance into legal frameworks.

Case Laws

Conclusion

The legal landscape around AI accountability had significantly matured between 2023 and 2025. Multiple jurisdictions began to adopt a proactive stance by imposing compliance obligations on developers, operators, and even data providers. These steps ensured that victims of AI errors were not left remediless. Going forward, it is expected that future cases will refine liability doctrines further. The role of courts in interpreting intent, causation, and risk in AI matters will remain central to the evolution of global AI law.

FAQs

  1. Can AI be sued directly?
    No. AI systems are not legal persons and cannot be sued. Liability is assigned to their creators or users.
  2. What is the role of the 'black box' problem in liability cases?
    Courts often treat a lack of explainability as evidence of negligence, especially where harm occurs.
  3. Are there any AI-specific laws in India?
    As of 2025, India was in the process of formalizing a national AI regulatory framework through MeitY.
  4. Can AI developers be criminally liable?
    Criminal liability is rare and usually applies only if intent, gross negligence, or recklessness can be proven.
  5. What are algorithmic audits?
    They are structured evaluations of AI systems that check for bias, accuracy, and compliance with legal standards; a simplified illustration follows below.
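As a rough illustration of the kind of narrow check an algorithmic audit might run, the Python sketch below computes a simple demographic-parity gap between two groups of decisions. Real audits are far broader, and the sample data, group labels, and 0.1 threshold used here are hypothetical.

# Illustrative only: one narrow fairness check an audit might include.
def positive_rate(decisions: list[dict], group: str) -> float:
    # Share of favourable outcomes for members of the given group.
    members = [d for d in decisions if d["group"] == group]
    return sum(d["approved"] for d in members) / len(members)

# Hypothetical sample of audited decisions (1 = approved, 0 = declined).
decisions = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 0}, {"group": "B", "approved": 1},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
]

gap = abs(positive_rate(decisions, "A") - positive_rate(decisions, "B"))
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.1:  # tolerance chosen here only for illustration
    print("Gap exceeds the illustrative threshold; flag for review.")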