The Human Touch in a Courtroom: AI Evidence, Admissibility and Judicial Discretion in India


Author: Manaswini Shetty from NIMS University, Rajasthan


To the Point


The contemporary Indian courtroom is at the threshold of a technology-driven change. Given the increasing prominence of Artificial Intelligence in evidence collection, forensic analysis, surveillance, predictive policing, and digital authentication, the justice system is forced to re-examine elements of its basic legal doctrines. While AI promises speed, accuracy and efficiency, it also presses hard against a time-tested judicial ethos grounded in human norms such as reason, empathy and discretion. At the heart of this new paradigm is one central question: can evidence created by AI be used to prove a fact? The answer lies not in technology alone but in the checks and balances provided by the Constitution, statutory interpretation, evidentiary rules and the human touch that continues to shape justice in India.


Abstract


AI, once a theoretical possibility in legal decision-making, is now an empirical fact. Whether through facial recognition used in criminal investigations or AI-driven investigative reports, Indian courts are increasingly engaging with data-driven narratives that seek to be both legally admissible and judicially interpretable. But the institution’s longer memory, grounded in procedural fairness and basic human justice, pushes back against a world run entirely by machines. This paper examines the tangled relationship between artificial-intelligence outputs and Indian rules of evidence. It explores the tension between cold, mechanical accuracy and the messy intuition of people, focusing on Sections 3, 45, 65B and 73A of the Indian Evidence Act, 1872. It also argues that judges still act as important gatekeepers, deciding case by case whether AI material is relevant, trustworthy, and capable of proving a point. In doing so, Indian courts try to protect constitutional values and basic fairness while slowly stepping into a future where machines assist, but do not replace, justice.


Use of Legal Jargon


When Indian judges consider proof created by artificial intelligence, they wrestle with familiar terms such as ‘relevance’, ‘expert opinion’, ‘authentication’, ‘admissibility’, and ‘judicial notice’, all framed by the Indian Evidence Act, 1872. Section 3 defines ‘evidence’ broadly, accepting oral or documentary material, so courts might admit algorithm-generated records if their origin can be clearly traced. Section 45, in turn, asks whether a statement comes from an expert with special knowledge, skill, or experience. That rule was made for people, so extending it to code that has neither awareness nor intent quickly becomes a legal puzzle. Judges might treat an AI output as an extension of a specialist’s toolbox, insisting, therefore, that a human double-check every conclusion the program produces. Section 65B, which governs electronic records, also comes into play. To be admissible under Section 65B(4), such records need an official certificate, yet compliance grows tricky when the algorithm runs on cloud nodes spread worldwide and operates autonomously or through decentralized servers.


Time-honoured legal maxims such as “audi alteram partem” (hear the other side) and “falsus in uno, falsus in omnibus” (false in one thing, false in everything) pose a doctrinal hurdle because a machine cannot take the stand, face cross-examination, or account for the inner workings of its own decisions. The “black box problem,” which arises when algorithms lack clarity or transparency, likewise limits the applicability of the “res ipsa loquitur” (the thing speaks for itself) doctrine. Moreover, because algorithms function without intention or knowledge, it is difficult to reconcile AI evidence with the fundamental criminal liability principles of “actus reus” (guilty act) and “mens rea” (guilty mind). Finally, the foundation of all evidence appraisal is “judicial discretion”: the court’s power to weigh factors like credibility, motive, context, and the probative-versus-prejudicial balance, all of which call for a level of human perception that artificial intelligence cannot match.


The Proof


Proof relied upon by the judicial system must meet two key requirements: the legitimacy of its source and the integrity of its content. In the context of AI, this leads to a tricky situation. For example, if an AI-based facial recognition system identifies a suspect, the prosecution must demonstrate the accuracy of the algorithm and its training data, and show that the result is free of bias, error, or manipulation. Without proper audits, transparent source code, or accountability for the algorithms, the credibility of such evidence is at risk. The issue gets even more complicated in civil cases involving AI contracts, algorithmic trading, or data-driven consumer profiling, where autonomous code might handle the whole transaction without human input.
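To make the idea of an algorithmic audit concrete, the following is a minimal Python sketch, not a real audit protocol: it compares a facial recognition system’s false-match rates across demographic groups, the kind of disparity a court-ordered review might look for. The match log, group labels, and field layout are all hypothetical.

```python
# Minimal sketch of a bias audit for a facial recognition system.
# All records, group labels, and field names here are hypothetical;
# a real audit would use the deployed system's logged match decisions
# together with verified ground-truth identities.

from collections import defaultdict

# Each record: (demographic_group, system_said_match, truly_same_person)
match_log = [
    ("group_a", True,  True),
    ("group_a", True,  False),   # false match
    ("group_a", False, False),
    ("group_b", True,  False),   # false match
    ("group_b", True,  False),   # false match
    ("group_b", False, False),
]

# False-match rate per group: of the pairs that are NOT the same person,
# how often did the system wrongly declare a match?
false_matches = defaultdict(int)
non_matching_pairs = defaultdict(int)

for group, predicted, actual in match_log:
    if not actual:                 # ground truth: different people
        non_matching_pairs[group] += 1
        if predicted:              # system wrongly said "match"
            false_matches[group] += 1

for group in sorted(non_matching_pairs):
    rate = false_matches[group] / non_matching_pairs[group]
    print(f"{group}: false-match rate = {rate:.2f}")
```

A marked disparity between groups in such an audit would be exactly the kind of fact a judge could weigh when deciding how much probative value to assign an AI-based identification.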


Evidence must also satisfy the “best evidence” rule, meaning it must be original and unaltered. In situations involving AI, identifying the “original” can be tricky. Take an AI-generated predictive report used in sentencing. If it comes from an opaque algorithm trained on large datasets, there are valid worries about its authenticity and bias, especially if the historical data reflects social or racial prejudice. The judiciary must ensure that such evidence does not violate Article 14 of the Indian Constitution, which guarantees equality before the law. Additionally, a machine cannot take an oath or be cross-examined, which greatly weakens the guarantees of a fair trial under Article 21.
For material to be admitted into evidence, it must be authentic, relevant, and permitted by law. Unless backed by human experts or verified through a chain of custody, AI-generated material invites legal doubt. Judicial discretion is therefore vital in deciding whether to accept or reject such material as proof, and courts frequently rely on additional human interpretation or supporting oral testimony to maintain procedural and constitutional fairness.
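How a chain of custody might be verified for an electronic record can be illustrated with a short sketch. The following Python fragment, offered only as an illustration (the file name and the recorded hash are hypothetical), recomputes a record’s SHA-256 digest and compares it with the value noted when the evidence was seized, one common way of showing that an exhibit is unaltered.

```python
# Minimal sketch: checking an electronic exhibit against the hash noted
# in the chain-of-custody record. The file name and the recorded hash
# below are purely illustrative.

import hashlib

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hash recorded in the seizure memo at the time of collection (illustrative).
recorded_hash = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

current_hash = sha256_of_file("exhibit_p1_report.pdf")
if current_hash == recorded_hash:
    print("Integrity verified: the record matches the seizure-time hash.")
else:
    print("Hash mismatch: the record may have been altered since seizure.")
```

In practice, the investigating agency would record the digest at the moment of seizure and the court could direct an independent recomputation, so that any tampering between collection and trial becomes detectable.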


Case Laws


Anvar P.V. v. P.K. Basheer (2014)
The Supreme Court of India clarified the scope of Section 65B of the Evidence Act in Anvar P.V. v. P.K. Basheer (2014) 10 SCC 473, holding that the certificate required by Section 65B(4) is mandatory for the admissibility of secondary electronic evidence. Although the dispute concerned electronic records produced in an election petition, the ruling set a strict admissibility standard that all electronic evidence, including AI-generated reports, must meet.

Justice K.S. Puttaswamy (Retd.) v. Union of India (2017)
In Justice K.S. Puttaswamy (Retd.) v. Union of India (2017) 10 SCC 1, the Supreme Court recognized informational privacy as part of the right to life under Article 21. This has consequences for AI systems that process personal data for evidentiary purposes: any violation of privacy during evidence collection by AI systems, without proper safeguards, risks being declared unconstitutional.

Avnish Bajaj v. State (NCT of Delhi) (2008)
In Avnish Bajaj v. State (NCT of Delhi) (2008) 150 DLT 769, which dealt with liability for unlawful content hosted on an online platform, the Delhi High Court’s reasoning on intermediary liability remains relevant to AI. If an AI platform inadvertently hosts or shares illegal content, determining criminal liability is not straightforward, which underscores the necessity of human supervision and discernment.


Conclusion


The use of Artificial Intelligence in the Indian judicial system holds both significant promise and serious risk. AI can improve the consistency and efficiency of evidence analysis. However, it also risks undermining the safeguards that protect individual freedoms and ensure a fair trial. The Indian Evidence Act, rooted in a colonial legal framework, was not designed to address the intricacies of autonomous technologies. Despite this, Indian courts have attempted to meet these challenges through thoughtful interpretation and careful discretion, all while upholding the basic principles of natural justice.
The primary concern is not whether AI should be used in evidence, but how it should be used, and with what judicial oversight, interpretative filters, and procedural safeguards. The human presence in the courtroom is not just an emotional element; it embodies the conscience of the law, an understanding of context, motive, and circumstance that no algorithm can replicate. Judicial discretion serves as a vital protection, ensuring that justice remains humane, fair, and tailored to individuals. In welcoming AI, the courts must keep their moral bearings and treat AI as a tool, not a ruler, in the pursuit of justice.


FAQs


What is AI evidence, and how is it used in Indian courts?
AI evidence refers to data, reports, analyses, or forecasts produced by AI systems and relied upon in court proceedings. It includes AI-based predictive analytics, voice analysis, automated surveillance interpretations, and facial recognition results. Such evidence must meet conventional evidentiary standards and is treated with caution by Indian courts.

Is evidence produced by AI admissible in India?
Yes, but if it is electronic in nature, it must meet the requirements of Section 65B of the Indian Evidence Act, including the certificate attesting to the record’s authenticity. Courts also apply human discretion and oversight when assessing such evidence.

Does Section 45 of the Evidence Act allow an AI system to be regarded as an expert?
No. Because AI systems lack consciousness and independent judgment, the law does not currently recognize them as “experts”. AI cannot give testimony or be cross-examined, but human experts may rely on AI outputs to support their opinions.

What are the main challenges facing the use of AI in courtrooms?
The main obstacles are the black box problem (algorithms’ lack of transparency), potential bias in training data, lack of accountability, authentication issues, and AI’s incapacity to undergo cross-examination or take legal oaths.
