WHEN AI BECOMES THE WITNESS Can Artificial Intelligence Replace Human Testimony in Criminal Trials?

ABSTRACT

The increasing use of Artificial Intelligence in criminal cases, including facial recognition technology, predictive policing, audio analysis, and risk assessment algorithms, demands adjustments to evidence rules that were designed before the digital age. This paper examines whether AI outputs can substitute for witnesses as testimonial evidence. Drawing on the Bharatiya Sakshya Adhiniyam, 2023, the Information Technology Act, 2000, and the constitutional guarantees of Articles 20 and 21, it concludes that AI outputs cannot satisfy the basic requirements of testimonial evidence, namely competence, the taking of an oath, and cross-examination. Such outputs must therefore be treated as corroborative evidence, subject to strict disclosure requirements. The paper further surveys comparable developments in the U.S., UK, and EU, analyses five cases, and proposes guidelines on mandatory algorithmic transparency, judicial audits of AI, and the recognition of distinct categories of AI evidence.

INTRODUCTION

Human testimony has long been the cornerstone of fact-finding in criminal trials. A witness who testifies in court, swears an oath, and submits to cross-examination does more than convey information; the witness participates actively in the process of justice. Nevertheless, the rapid adoption of AI systems by law enforcement agencies, surveillance programmes, forensic investigators, and other actors in criminal proceedings raises a pertinent question: can AI outputs, such as facial recognition matches, risk scores, deepfake detection results, and behavioural predictions, substitute for witness testimony? The question is more than theoretical. In India and internationally, AI outputs already influence bail applications, convictions, and sentencing. At present, no Indian law defines how AI outputs should be treated in criminal proceedings. This Article analyses the legal, constitutional, and procedural implications of substituting witness testimony with AI-generated outputs, under both Indian and international law.

 

KEY WORDS

Testimonial Evidence: A witness's statement under oath, in oral or written form, made before the court to prove material facts, as provided in Sections 118-134 of the Indian Evidence Act, 1872 (now replaced by the Bharatiya Sakshya Adhiniyam, 2023).

Electronic Record: Any data, record, image, or sound stored, received, or sent in electronic form, as defined in Section 2(1)(t) of the Information Technology Act, 2000.

Expert Witness: A person with specialised knowledge or experience who assists the court in understanding an issue, covered by Section 45 of the Indian Evidence Act, 1872.

Hearsay Rule: The general rule that out-of-court statements offered to prove the truth of the matter asserted are inadmissible. AI outputs generated from third-party data may attract hearsay objections.

Chain of Custody: The documented handling of evidence from its collection until its production in court.

Algorithmic Accountability: The obligation of those who develop or deploy AI systems to make them explainable, auditable, and correctable.

Black Box: The difficulty of understanding the inner workings of complex AI models, which raises concerns about discrimination and violations of fair-trial guarantees.

Confrontation Clause / Right to Cross-Examination: The right of an accused to challenge the evidence against them, guaranteed by Article 21 of the Constitution of India and given effect by Section 138 of the Indian Evidence Act, 1872.

Admissibility vs. Weight: Admissibility determines whether evidence may be received; weight is the significance the court attaches to it. AI evidence may satisfy admissibility conditions yet receive little weight because of reliability concerns.

Mens Rea: The mental element of a crime, the intention to commit it, which AI can neither possess nor perceive. This distinguishes AI evidence from human witnesses, who can speak to motive, state of mind, and emotion.

 

STATUTORY PROVISIONS

A. Legal Structure of Witness Evidence in India

Under India's legal framework, witness evidence must meet certain criteria, namely competency (Section 118 IEA / Section 124 BSA), compellability, the taking of an oath, and openness to cross-examination. These conditions secure the reliability of the information and give the accused the ability to test the evidence against him. AI systems cannot satisfy them: they cannot swear oaths, lack the legal capacity to testify, and cannot be penalised for perjury under Section 191 of the IPC (now Section 227 of the BNS). An AI system cannot be cross-examined about its perception, memory, or any factor that may have shaped its output. Admitting its output as testimony without addressing these problems would introduce an unaccountable actor into the trial process.

B. AI-Generated Information as Documentary / Electronic Evidence

The outputs generated by AI can more readily be admitted as documentary or electronic evidence. This route permits admission of electronic records if certain procedural requirements, including the certificate under Section 65B IEA, are satisfied. Courts have routinely admitted material from surveillance cameras, GPS devices, and digital forensic tools as electronic records. Yet important differences separate AI outputs from ordinary documentary records. Where an electronic record merely represents an event, an AI tool analyses, predicts, and adjudges. When a facial recognition application matches CCTV footage against a suspect's photograph, it forms an opinion on identity, much as a witness does. The open question is whether such output should be governed by the standards for expert opinion evidence rather than those for electronic records.

C. Constitutional Considerations under Articles 20 and 21

AI evidence in criminal trials is subject to constitutional safeguards, notably Article 20(3) (right against self-incrimination) and Article 21 (right to life and personal liberty). In Selvi v. State of Karnataka, the Supreme Court declared compelled lie-detection tests unconstitutional because they violate an individual's right not to be forced by the State to make self-incriminating statements. By extension, AI tools that mine the accused's own information, such as social media posts and communication metadata, may infringe the right against self-incrimination. Further, where AI risk assessment tools used in bail and sentencing hearings are not disclosed to the accused, the accused's right under Article 21 to understand the basis of the evidence against him or her is violated.

D. The Difficulty of Cross-Examination

A central problem with treating AI as a witness is the impossibility of cross-examination. Due process requires that the defence be able to test any evidence capable of founding a criminal conviction. Where the algorithm and its data are proprietary secrets, the right of cross-examination is effectively denied, undermining due process.

CASE LAWS

1. Selvi v. State of Karnataka (2010) 7 SCC 263

Facts: The State sought to subject the accused to narco-analysis, brain-mapping, and polygraph testing to generate evidence, asserting that the findings would be scientifically valid.

Decision: The Supreme Court held that such coercive testing breached Articles 20(3) (self-incrimination) and 21 (life and personal liberty) of the Constitution. The Court stressed that evidence derived by methods that override an individual's free will does not conform to constitutional standards. The precedent extends to AI technologies that process suspects' biometric and behavioural data without their consent.

2. Arjun Panditrao Khotkar v. Kailash Kushanrao Gorantyal (2020) 7 SCC 1

Facts: The dispute concerned the admissibility of electronic evidence (CCTV footage) without a Section 65B certificate under the Indian Evidence Act, 1872, in a civil matter.

Decision: The Supreme Court reiterated that certification of electronic records under Section 65B is mandatory, not merely procedural, and clarified the distinction between primary and secondary electronic evidence. The decision is relevant to AI-generated outputs: the underlying electronic records must be properly documented and certified.

3. State of Maharashtra v. Dr. Praful B. Desai (2003) 4 SCC 601

Facts: Whether a witness who, owing to his physical condition, could not appear in person might give evidence by video conference, and whether this would satisfy the requirement of 'presence' under the Code of Criminal Procedure.

Decision: The Supreme Court held that video conferencing satisfies the requirement of 'presence' and is a valid mode of recording evidence. Though not an AI case, it offers a lesson: if remote human testimony is valid, AI-mediated evidence will likewise raise questions about its 'presence' before the court. The difficulty, however, is that AI, unlike a remote human witness, remains unaccountable.

4. State v. Loomis 881 N.W.2d 749 (Wis. 2016)

Facts: The defendant challenged his sentence, which relied on a risk score produced by COMPAS, a proprietary AI-based recidivism-prediction tool, arguing that the use of such undisclosed proprietary technology violated due process.

Held: The Wisconsin Supreme Court upheld the sentence but cautioned that a COMPAS score cannot be the sole determining factor in sentencing. Judges must not rely on the score alone and must be made aware of the limitations of such AI tools.

5. R v. Reed and Reed; R v. Garmson [2009] EWCA Crim 2698

Facts: The prosecution sought to adduce forensic evidence derived from computer software whose workings the human expert could not fully explain.

Held: The Court of Appeal held that expert evidence must disclose a clear methodology so that the defence can cross-examine effectively. Where a tool operates as a 'black box', the human expert presenting it must have sufficient knowledge of its workings. The judgment shapes the approach to AI-derived forensic evidence: a cross-examinable human expert familiar with the AI's methodology is indispensable.

 

CONCLUSION

Under current Indian law, AI cannot substitute for human testimony in criminal prosecutions. Competence, the oath, liability for falsehood, and above all cross-examination are not archaic rituals but core features of the adversarial system, which exists to separate truth from mere assertion. This does not mean AI has no legitimate place in criminal trials. Treated as electronically assisted input to expert testimony, AI can usefully generate investigative leads, corroborate witness statements, and support forensic analysis.

 

These are the minimum requirements:

Requirement of Algorithmic Transparency: AI software used in criminal cases must be open to inspection and independent audit by the court and the defence.

Human Expert Acting as an Intermediary: Any AI output cannot be relied upon as evidence unless produced through a human expert intermediary knowledgeable about the underlying algorithm.

Non-Determinative Use of AI Outputs: AI outputs, including risk scores and identification matches, cannot be relied upon as the sole or principal reason to convict, detain, or impose unfavourable sentences.

Reforms in Legislation: Chapter IX of the Bharatiya Sakshya Adhiniyam, 2023 should be amended to deal expressly with AI evidence.

India's courts now stand at a decision point. Overreliance on algorithmic authority risks importing bias and opacity into criminal justice. A pragmatic balance, harnessing the evidential benefits of AI while preserving constitutional protections, is both possible and imperative.

FREQUENTLY ASKED QUESTIONS (FAQ)

Q1. Is the use of AI-generated evidence permitted in the Indian criminal court system? A. Yes, partially. AI-generated material recorded electronically can be admitted under Section 63 of the Bharatiya Sakshya Adhiniyam, 2023, provided it is duly certified. By contrast, no statutory provision governs the admissibility of AI-generated analytical evidence such as facial recognition results or risk assessments.

Q2. Is a facial recognition system’s output sufficient for making an arrest or conviction? A. No. Given the high error rates reported in facial recognition software, and its disproportionately poor accuracy for people with darker skin tones, basing an arrest or conviction on such evidence alone would infringe the fundamental rights guaranteed under Articles 20 and 21 of the Constitution. Corroborative evidence and expert testimony are required. In India, the NCRB has deployed facial recognition systems, but no conviction has been based solely on their output.

Q3. To what extent does the right to cross-examination apply to AI evidence? A. The right under Section 138 of the Indian Evidence Act (and its BSA equivalent) cannot be fully exercised against a machine, since it presupposes a person. Following the principle laid down in R v. Reed, courts require a human expert who understands the AI model’s functioning and is available for cross-examination. The accused should be able to examine that expert on his or her understanding of the algorithm, the training data used, and the system’s error rates.

Q4. What is the ‘black-box problem’, and how relevant is it in criminal proceedings? A. The black-box problem is the difficulty of interpreting or auditing the decisions of advanced algorithms such as neural networks. It is acutely serious in criminal law, because the accused must be able to understand and challenge the evidence against him. Presenting evidence from an AI model whose reasoning is closed to the accused violates Article 21.

Q5. Is there any global regulation of AI in criminal justice? A. Yes. The EU’s Artificial Intelligence Act (2024) classifies AI systems used in criminal justice, including risk assessment, recidivism prediction, and individual criminal-risk analysis, as ‘high-risk’, entailing mandatory conformity assessments, transparency, and human oversight. In the US, several state-level rules restrict the use of algorithmic tools in sentencing. India has no comparable law at present, which makes the judiciary’s role all the more important.

 

 

Author: MUHAMMED AJMAL M, BBA LLB(HONS.) SCHOOL OF LEGAL STUDIES, COCHIN UNIVERSITY OF SCIENCE AND TECHNOLOGY

 
