
ARTIFICIAL INTELLIGENCE AND LIABILITY LAWS


Introduction

Artificial intelligence (AI) has rapidly evolved, integrating into various sectors such as healthcare, finance, transportation, and more. With its growth, the legal landscape surrounding AI, particularly liability law, has become increasingly complex. This complexity arises from the unique nature of AI systems, which can operate autonomously and make decisions without direct human intervention.

This article examines the intricate relationship between Artificial Intelligence (AI) and liability laws. In the dynamic landscape of AI deployment, questions of responsibility arise when outcomes are unintended. Liability in AI can be categorized into three main areas: manufacturer liability, user liability, and the emerging concept of AI system liability. Manufacturer liability entails holding the creators of AI systems responsible for any harm caused, prompting discussions about the transparency and accountability of AI developers in ensuring their creations adhere to ethical standards. Adapting legal frameworks to accommodate AI's complexities is a global challenge, and interdisciplinary collaboration is necessary to strike a balance between innovation and ethical safeguards.

Types of Liability in AI

  1. Product Liability: AI systems can be considered products, making manufacturers liable for defects. This includes design defects, manufacturing defects, and failure to warn users about potential risks.
  2. Negligence: Developers and operators of AI systems can be held liable if they fail to exercise reasonable care in the design, development, or deployment of AI systems.
  3. Strict Liability: Some jurisdictions may impose strict liability, holding parties responsible for damages caused by AI systems regardless of fault.

Key Legal Challenges

The challenges and complexities surrounding AI bring to light the need for interdisciplinary collaboration in navigating uncharted waters. As AI technologies advance, ethical, legal, and societal considerations become increasingly intricate. Addressing issues such as bias, privacy concerns, and potential job displacement requires expertise from diverse fields, including technology, ethics, law, and sociology. Interdisciplinary collaboration fosters a holistic understanding of the multifaceted challenges posed by AI, allowing for more comprehensive and nuanced solutions. Bringing together experts from different domains encourages the development of ethical frameworks, regulatory policies, and technological safeguards that can effectively balance innovation with responsible deployment. In this evolving landscape, collaboration across disciplines emerges as an essential strategy to chart a course toward the ethical and sustainable integration of AI into society.

  1. Autonomy and Decision-Making: AI systems can make decisions independently, complicating the attribution of liability. Determining who is responsible for an AI’s actions—developers, operators, or users—remains a significant challenge.
  2. Causation: Establishing a direct causal link between an AI system’s actions and the resulting harm can be difficult, especially when multiple factors contribute to the outcome.
  3. Transparency and Explainability: AI systems, particularly those using deep learning, often operate as “black boxes,” making it hard to understand how decisions are made. This lack of transparency can hinder legal proceedings.

Legal Frameworks and Approaches

  1. Regulatory Approaches: Various jurisdictions are developing regulations to address AI liability. For example, the European Union’s AI Act aims to ensure AI systems are safe, transparent, and non-discriminatory.
  2. Insurance and Risk Management: Insurance can play a crucial role in managing AI-related risks. Policies can be tailored to cover specific AI applications and potential liabilities.
  3. Case Law: Courts are beginning to address AI-related cases, setting precedents that will shape future liability frameworks. For instance, cases involving autonomous vehicles have highlighted the need for clear liability rules.

AI system liability

The concept of AI system liability poses a profound question at the intersection of technology and law: should AI be treated as a legal entity or merely as a tool? As AI systems evolve in complexity and autonomy, discussions arise about attributing legal responsibility to these entities. Some argue that AI should be recognized as a legal entity with rights and obligations, while others contend that AI systems should remain tools, with liability resting on their users or creators. This debate challenges traditional legal frameworks, sparking inquiries into the ethical and legal implications of treating AI systems as autonomous entities.

Striking a balance between technological innovation and legal accountability is essential as societies grapple with the implications of granting or denying legal personhood to AI systems.

As legal frameworks grapple with defining the responsibilities of AI entities, there is limited case law directly addressing this intricate issue. One notable example is the 2018 incident in Tempe, Arizona, in which an autonomous Uber test vehicle fatally struck a pedestrian. In that case, the legal focus shifted among the user (the safety driver), the manufacturer, and the AI system itself.

While not establishing AI as a legal entity, the case underscored the need for nuanced liability frameworks that consider the unique characteristics of AI. The absence of comprehensive case law highlights the ongoing challenge of adapting legal systems to the rapidly advancing landscape of artificial intelligence. As AI technology continues to progress, legal precedents will likely emerge, shaping the future discourse on AI system liability.

Corporate criminal liability

Currently, there is no system to hold AI systems responsible for their acts. Consequently, businesses have free rein to take risks and deploy these systems at the expense of society at large. The doctrine of corporate criminal liability offers a resolution to this.

Corporate criminal liability corresponds to the concept of strict liability. Strict corporate liability applies where a corporation performs an inherently dangerous activity: the risk involved is known, and the corporation as a whole is blamed for the consequences if it causes harm to society.

This doctrine gives corporations the status of legal persons, assigning them obligations as well as liabilities. The model uses organisational blame to incentivise businesses to take reasonable care and precaution in their experiments.

In India, corporations are recognized as juristic persons. The Supreme Court in Standard Chartered Bank v. Directorate of Enforcement (2005) held that corporations can be held criminally liable for acts committed by them. While punishments such as imprisonment cannot be meted out to a juristic person, corporations can be made liable for hefty fines.

However, there is one significant drawback to this model: victims of crimes committed by AI systems would bear the costs of suing powerful corporations, often in foreign jurisdictions, which might make justice inaccessible to them.

Civil liability 

Usually, where a party is injured by software and can be compensated for the damage caused, criminal liability is not the recourse chosen; instead, the tort of negligence is the path taken. The three elements that constitute negligence are the defendant's duty of care, breach of that duty, and injury caused to the plaintiff by that breach. The maker of the software has a duty towards customers to maintain the prescribed standards of care, and could face legal proceedings for failures such as:

  1. The developer's failure to detect errors in program features and functions,
  2. An inappropriate or insufficient knowledge base,
  3. Inappropriate or insufficient documentation and notices,
  4. Failure to maintain an up-to-date knowledge base,
  5. Errors due to the user's faulty input,
  6. The user's excessive reliance on the output,
  7. Misuse of the program.

Ethical Considerations

  1. Moral Responsibility: Beyond legal liability, there are ethical questions about who should be held morally responsible for AI’s actions. This includes considerations of fairness, accountability, and the potential for bias in AI systems.
  2. Human Oversight: Ensuring that AI systems operate under appropriate human oversight can mitigate risks and enhance accountability.

Position in India

The existing regulatory frameworks at the national and international levels are inadequate to address the various ethical and legal issues that AI systems raise. Discussed below is the relevant framework in India for ascertaining the liability and rights of AI systems.

The Constitution of India

Under Article 21 of the Constitution, the 'right to life and personal liberty' has been interpreted by the Indian judiciary to include within its ambit several fundamental and indispensable aspects of human life. In the leading case of R. Rajagopal v. State of Tamil Nadu, the right to privacy was held to be implicit under Article 21; this is relevant to addressing privacy issues arising out of AI's processing of personal data. Further, in the landmark case of K.S. Puttaswamy v. Union of India, the Supreme Court emphasised the need for a comprehensive legislative framework for data protection, competent to govern emerging issues such as the use of AI in India. AI may also be unfair and discriminatory, attracting Articles 14 and 15, which deal with the right to equality and the right against discrimination respectively, to protect the fundamental rights of citizens.

The Patents Act, 1970

Patentability of AI, inventorship (the true and first inventor), and ownership of and liability for AI's acts or omissions are some of the main issues under this Act with regard to AI. Section 6 read with Section 2(1)(y) of the Act does not specifically mandate that a 'person' must be a natural person, although that is conventionally understood or assumed to be so. At present, AI has not been granted legal personhood and would not fall within the scope of the Act.

The Personal Data Protection Bill, 2019

This Bill regulates the processing of personal data of Indian citizens by public and private bodies located within and outside India. It emphasizes 'consent' for the processing of such data by data fiduciaries, subject to certain exemptions. When enacted into law, the Bill will affect the wide range of AI software that collects user information from various online sources to track user habits relating to purchases, online content, finance, etc.

The Information Technology Act, 2000

Section 43A of the Information Technology Act, 2000 imposes liability on a body corporate dealing with sensitive personal data to pay compensation when it fails to adhere to reasonable security practices. This has a significant bearing on determining the liability of a body corporate when it employs AI to store and process sensitive personal data.

The Consumer Protection Act, 2019

Section 83 of the Consumer Protection Act, 2019 entitles a complainant to bring an action against a manufacturer, service provider, or seller of a product, as the case may be, for any harm caused on account of a defective product. This establishes liability for the manufacturer or seller of an AI entity for harm caused by it.

Tort Law

The principles of vicarious liability and strict liability are relevant to the determination of liability for wrongful acts or omissions of AI. In the case of Harish Chandra v. Emperor, the court laid down that there is no vicarious liability in criminal law for another's wrongful acts; this becomes significant where the AI entity may be considered an agent.

Conclusion

The intersection of AI and liability law is a dynamic and evolving field. As AI technology continues to advance, legal frameworks must adapt to address the unique challenges it presents. This includes developing clear regulations, enhancing transparency, and ensuring that liability is appropriately assigned, so as to foster innovation while protecting public safety. The ongoing evolution of AI liability laws signifies a pivotal juncture in our technological journey, demanding careful consideration and forward-thinking solutions. As AI continues to weave itself into the fabric of our daily lives, the dynamic nature of liability laws reflects a continuous effort to strike a delicate balance. Balancing innovation with accountability, the path forward involves not only refining existing legal frameworks but also fostering regulatory discipline at the global level.

The journey toward shaping ethical AI practices necessitates collaboration between policymakers, technologists, legal experts, and ethicists. Recognizing the challenges and complexities inherent in AI systems, the pursuit of a path forward entails a commitment to transparency, responsible development, and a proactive approach to address emerging issues. As societies grapple with defining the parameters of accountability, the trajectory of AI liability laws will likely be characterized by adaptability and a commitment to upholding ethical standards. Navigating this path requires a collective effort to ensure that AI not only propels innovation but also aligns with our values and principles, ultimately contributing to a future where technology and responsibility walk hand in hand.


Author: KIRTI RAJ, COLLEGE: HEMVATI NANDAN BAHUGUNA GARHWAL UNIVERSITY, TEHRI, SRT CAMPUS, UTTARAKHAND 249199
