AI on Trial

Author: Pranav Kumar, National Law University, Assam

Abstract
Artificial Intelligence (AI) has evolved from a futuristic idea into a fundamental aspect of modern life, impacting everything from criminal justice and economics to healthcare and politics. However, the rapid development of AI technologies has significantly outpaced the creation of robust legal frameworks to govern them. This article examines the relationship between AI and the law, with a special emphasis on the Indian legal system, in order to address urgent issues including accountability, liability, ethical bias, and data protection. Through legal analysis and comparative insights, it investigates whether existing jurisprudential principles can accommodate the challenges posed by AI or whether a novel legal paradigm is required. It further analyses emerging case law and regulatory trends to suggest a forward-looking legal response.

Exploring Legal Accountability and Ethical Dilemmas in the Age of Artificial Intelligence

Artificial Intelligence today is no longer merely a tool; it has evolved into a quasi-autonomous entity with decision-making capabilities. Its deployment across sensitive domains, such as facial recognition in law enforcement, predictive analytics in judicial sentencing, and algorithmic trading in financial markets, raises fundamental legal questions. Chief among them: Who bears responsibility when AI systems malfunction or produce biased outcomes? Can existing legal doctrines accommodate non-human agents? These questions sit at the centre of an evolving conversation.

The Indian legal system is struggling to keep pace with this new reality. India currently has no comprehensive legislation governing AI; instead, it relies on a patchwork of laws, including the Information Technology Act, 2000, and judicial interpretations of constitutional rights. By contrast, the European Union has proposed the AI Act, a structured framework that classifies AI applications according to risk and assigns commensurate obligations. The Indian approach, being largely reactive and unstructured, leaves considerable room for ambiguity and rights violations.

A key issue lies in the application of traditional legal doctrines to AI. The doctrine of vicarious liability, for instance, which attributes responsibility to an employer for the acts of their employee, is insufficient in cases where AI systems act beyond their initial programming or adapt through machine learning. Similarly, the concept of mens rea or a “guilty mind,” foundational to criminal liability, cannot be imputed to an AI entity that lacks consciousness or intent.

Consider the hypothetical case of an autonomous vehicle involved in a fatal accident. If the AI system made a split-second decision that resulted in a death, is the manufacturer responsible? The programmer? The end user? Without clear legislative guidance, courts will be forced to stretch existing doctrines, potentially leading to inconsistent outcomes.

The jurisprudential basis for AI regulation can be traced to landmark constitutional cases that, while not directly addressing AI, provide foundational principles. In Justice K.S. Puttaswamy (Retd.) v. Union of India (2017), the Supreme Court upheld the right to privacy as a fundamental component of Article 21 of the Constitution. The decision emphasized informational sovereignty and data protection as essential elements of individual liberty. This precedent becomes crucial in the context of artificial intelligence, where data-mining algorithms and surveillance technologies collect and analyse personal data without the express consent of data subjects.

Further, the Puttaswamy judgment laid down the doctrine of proportionality, requiring that any intrusion into privacy must be justified by a legitimate state interest, be necessary, and be the least restrictive means. This principle is critical when evaluating AI systems used in governance—such as predictive policing or social credit scoring—which risk enabling mass surveillance without adequate safeguards.

A subsequent ruling, Justice K.S. Puttaswamy (Retd.) v. Union of India (2018) (commonly known as the Aadhaar judgment), though focused on biometric data, raised concerns about the centralization of personal information and the risks of profiling. While not AI-specific, the judgment’s reasoning applies directly to algorithmic governance tools that rely on large datasets to make inferences about individuals. Without transparency and accountability, such systems could institutionalize bias and erode individual rights.

The issue of algorithmic bias is not theoretical. In the United States, the use of the COMPAS algorithm in judicial sentencing revealed systemic racial bias, sparking a broader debate about fairness in automated decisions. India may soon face similar controversies as public and private institutions increasingly rely on AI systems, often developed without sufficient transparency, auditability, or explainability. The risk is that such systems may replicate societal prejudices under the guise of neutrality.

Indian legal infrastructure, particularly the Information Technology Act, 2000, is ill-equipped to deal with these complexities. Section 79, which governs intermediary liability, does not contemplate autonomous decision-making. Further, there is no statutory obligation for algorithmic audits, impact assessments, or the publication of training datasets. This regulatory vacuum could result in unchecked AI deployment with severe rights implications.

Comparative jurisprudence offers a useful roadmap. The European Union’s General Data Protection Regulation (GDPR) provides for a “right to explanation,” enabling individuals to seek the reasoning behind automated decisions. The proposed AI Act advances this approach by categorizing AI systems according to risk and placing more stringent requirements on high-risk applications. Although India need not adopt these models wholesale, they offer valuable guidance on balancing innovation with the protection of individual rights.

The Indian judiciary’s evolving application of constitutional principles gives cause for optimism. In Modern Dental College v. State of Madhya Pradesh, the Supreme Court emphasized that the freedom to form one’s own ideas is an aspect of the right to privacy. On this reasoning, any state use of AI that interferes with individual autonomy must be examined through a constitutional lens. In the absence of legislative backing, however, judicial interventions will remain confined to post-facto remedies rather than preventive measures.

Additionally, in Google India Pvt. Ltd. v. Visaka Industries, the Supreme Court discussed intermediary liability and alluded to the need to redefine roles in the digital ecosystem. This becomes relevant as AI developers, data providers, and end users form a complex network of actors influencing automated decisions.

The ethical dimension also warrants attention. AI systems, especially those built using deep learning, often function as black boxes. The opacity of decision-making processes raises fundamental concerns about due process and fairness. If an AI system denies a loan, flags an individual for investigation, or influences parole decisions, the affected party must have the right to know how that decision was made. Ensuring procedural fairness in such scenarios demands legislative innovation.

Conclusion

The law must evolve in tandem with technology. India stands at a crossroads where AI promises immense benefits but also poses significant threats to fundamental rights. Comprehensive AI legislation is no longer a luxury but a necessity. It must draw from global best practices while remaining rooted in India’s constitutional ethos. For its part, the judiciary must continue to interpret rights expansively while adapting established doctrines to novel situations. “AI on Trial” is more than a metaphor: it reflects a genuine need for legal interrogation and reform. The responsibility now lies with lawmakers, jurists, and technologists to ensure that the legal system does not merely react to AI’s disruptions but anticipates and guides them.

FAQs

Q1. Can Artificial Intelligence be held legally liable for its actions?
No, AI cannot be held legally liable in the traditional sense because it lacks legal personality. Liability typically falls on developers, deployers, or users depending on the facts of the case and applicable laws.

Q2. What are the key legal challenges posed by AI in India?
Key challenges include data privacy violations, algorithmic bias, lack of transparency, absence of regulation, and difficulty in determining liability.

Q3. Has the Indian judiciary addressed AI-related issues directly?
While no landmark case has directly dealt with AI liability, cases like Puttaswamy v. Union of India and the Aadhaar judgment provide guiding constitutional principles applicable to AI, particularly around privacy and data protection.

Q4. What lessons can India take away from global AI regulations?
India can learn from the EU’s proposed AI Act and the GDPR, which provide structured approaches to managing risks posed by AI through principles like risk categorization, algorithmic audits, and the right to explanation.

Q5. What legal reforms are needed in India for AI governance?
India needs a dedicated AI legal framework addressing liability, ethics, transparency, and safeguards. Provisions for independent audits, data governance, and public accountability must be central to any reform.
