AI IN JUDICIAL DECISION-MAKING: BOON OR THREAT TO NATURAL JUSTICE

Author: Ashwina Venkatramanan, School of Excellence in Law, The Tamil Nadu Dr. Ambedkar Law University, Chennai


TO THE POINT


The legal system is grounded in the principles of fairness, impartiality, and human reasoning. As courts face increasing case backlogs, AI emerges as a technological solution aimed at accelerating judicial processes. Tools like machine learning algorithms can analyze past precedents, predict litigation outcomes, and even suggest sentencing models. However, the involvement of non-human intelligence in determining justice has sparked a vigorous debate: Can AI uphold the ideals of natural justice, or does it risk dehumanizing judicial functions?


ABSTRACT


Artificial Intelligence (AI) is steadily transforming various facets of governance, and the judiciary is no exception. From predictive analytics in sentencing to AI-driven legal research, automation is increasingly embedded into judicial decision-making processes. While AI promises efficiency and consistency, it raises pressing concerns regarding fairness, transparency, and the preservation of natural justice. This article explores whether AI in the judiciary is a boon or a threat to the principle of natural justice by examining legal doctrines, case laws, and current AI practices in courts worldwide.


LEGAL JARGON AND PRINCIPLES INVOLVED


Natural justice, a foundational concept in administrative and constitutional law, rests on two primary principles: “audi alteram partem” (hear the other side) and “nemo judex in causa sua” (no one should be a judge in their own cause). AI’s role in judicial systems potentially challenges these principles. For instance, algorithmic decision-making often lacks transparency (also known as the “black box” problem), making it difficult to challenge or understand its rationale, thus violating procedural fairness.
Terms such as algorithmic bias, automated reasoning, and predictive adjudication are increasingly entering legal discourse. These terms represent a paradigm shift in how justice is conceptualized and delivered.


THE PROOF: GLOBAL PRACTICES AND RISKS


Artificial intelligence has already begun to influence judicial processes around the world. In the United States, a risk assessment tool known as COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) is utilized to aid judges in determining bail and sentencing by evaluating the likelihood of reoffending. However, it has been criticized for racial bias and lack of accountability. In Estonia, a “robot judge” prototype was developed to handle small claims disputes. China employs AI in thousands of courts for evidence analysis and legal documentation.
India, though still in the nascent phase of adopting AI in the judiciary, has launched initiatives such as SUPACE (Supreme Court Portal for Assistance in Court Efficiency), aimed at supporting judges by streamlining legal research and improving case analysis. While these tools enhance productivity, there is currently no legislative or judicial framework regulating AI's use in decision-making, raising questions about its unchecked deployment.


RELEVANT CASE LAWS


Since 2021, the Supreme Court of India has been utilizing an AI-based tool to assist judges by organizing and presenting relevant legal information. Importantly, this system does not play any role in the final decision-making process. Another notable innovation is SUVAS (Supreme Court Vidhik Anuvaad Software), a translation software developed to convert legal documents between English and various regional languages, promoting accessibility across linguistic barriers.


1. JASWINDER SINGH V. STATE OF PUNJAB (2024)
In a case involving a serious assault, the Punjab and Haryana High Court refused to grant bail to the accused. During the hearing, the judge referred to ChatGPT to explore general perspectives on bail in cases involving severe cruelty. It was clarified, however, that this AI consultation was purely academic and had no bearing on the actual merits of the case. The trial court was advised not to consider the AI-generated content, which was used only to enrich the judge’s understanding of broader legal principles related to bail.


2. ANIL KAPOOR V. SIMPLE LIFE INDIA & ORS. (2023)
The Delhi High Court addressed the misuse of the actor's personality rights by sixteen defendants who had used generative AI to create deepfakes, imitate his voice, and falsely endorse products using his name, image, and likeness. The Court recognized the growing threat of AI-enabled impersonation and emphasized the need to protect celebrities from unauthorized exploitation. Justice Prathiba M. Singh underscored that courts cannot ignore such misuse, highlighting the urgency for stronger legal frameworks beyond the right to privacy under Article 21 of the Constitution. The ruling establishes an important benchmark for protecting personal identity in an era increasingly shaped by artificial intelligence.


3. LOOMIS V. WISCONSIN, U.S. (2016)
In this pivotal case, the Wisconsin Supreme Court addressed the use of COMPAS, a proprietary AI-based risk assessment tool, in sentencing. Eric Loomis received a six-year sentence partly because of COMPAS’s high risk score. He appealed, arguing that using an opaque algorithm infringed on his due process rights. Although the U.S. Supreme Court declined to hear the case, the Wisconsin court stressed that courts must warn defendants about the tool’s limitations and allow them to challenge its results. The case highlights critical issues of transparency and accountability arising from the use of AI in judicial decision-making.


4. VICTIM-IMPACT AI VIDEO, ARIZONA (2025)
In a groundbreaking case in Arizona, an AI-generated video featuring the deceased victim expressing forgiveness was presented at the sentencing of Gabriel Horcasitas, who was convicted of manslaughter. The judge relied on this digital testimony, resulting in a longer sentence (10.5 years) than prosecutors recommended. While this practice aims to enhance victim impact, legal experts caution against the potential misuse of deepfake and AI-manipulated evidence.


CONCLUSION


AI in judicial decision-making has undeniable potential: speeding up processes, reducing human error, and handling voluminous data. However, justice is not merely a mechanical outcome; it is a moral and social function requiring empathy, discretion, and contextual judgment. Until AI systems become fully transparent, accountable, and regulated, they should be used as assistive rather than determinative tools. The judiciary must maintain human oversight to preserve the sanctity of natural justice.


FAQS


1. Can AI replace judges completely?
No. AI can assist in research and pattern recognition, but it lacks the moral reasoning, empathy, and interpretive capacity required in judicial decisions.
2. Is AI used in Indian courts today?
Yes. Courts use AI through tools like SUPACE, primarily for research and case management. However, it is not yet used to make binding decisions.
3. What are the risks of using AI in justice?
The primary risks include algorithmic bias, lack of transparency, and potential violation of natural justice principles.
4. Are there any laws regulating AI use in judiciary?
As of now, no dedicated legislation governs AI in judicial functions in India or most other countries.
5. How can AI be made safe for judicial use?
By ensuring transparency, human oversight, data protection, and algorithmic accountability through clear laws and ethical standards.
