Author: Heemani Amarsingh Rajput,
BVDU New Law College, Pune
Abstract
The increasing use of artificial intelligence (AI) in legal proceedings, from facial recognition and predictive surveillance to algorithmic risk assessments, raises crucial concerns about equity, transparency, and the right to a fair trial. This article explores the implications of admitting AI-generated evidence in court, examining procedural due process, evidentiary admissibility, the right to confrontation, and the opacity of algorithmic decision-making. Drawing on national and international jurisprudence as well as legal doctrine, it highlights the growing tension between technological efficiency and constitutional guarantees of justice.
Introduction
AI technologies are revolutionizing many sectors, and the justice system is no exception. From scanning surveillance footage to predicting criminal behavior, AI is increasingly embedded in law enforcement and prosecution. However, the use of AI-generated evidence in court introduces a new set of legal and ethical challenges. This article investigates whether reliance on AI tools undermines the principles of due process and fair trial guaranteed by constitutional and human rights law.
1. The Rise of AI Evidence
AI-generated evidence refers to information produced, analyzed, or processed through artificial intelligence technologies, including:
• Facial recognition software
• Predictive policing tools
• Natural language processing (to analyze documents and emails)
• Algorithmic risk assessment tools used in sentencing and bail hearings
• Deepfake detection and surveillance analysis tools
2. Legal Framework: Fair Trial and Due Process
The right to a fair trial is enshrined in:
• Article 21 of the Indian Constitution: interpreted to include procedural fairness as part of the right to life and personal liberty.
• Article 14(1) of the International Covenant on Civil and Political Rights (ICCPR): guarantees equality of all persons before the courts and tribunals.
• Sixth Amendment to the United States Constitution: guarantees the right of the accused to confront the witnesses against them.
When AI becomes a “witness” in court, several fundamental legal principles come into play:
• Admissibility of evidence
• Right to confrontation
• Presumption of innocence
• Right to an effective lawyer
3. Key Challenges in Admitting AI Evidence
a. Opacity and the problem of the “black box”
Many AI models, especially deep learning systems, function as black boxes, making it difficult even for their developers to explain how a particular decision was reached. Courts therefore struggle to assess such evidence for accuracy or bias.
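To illustrate the contrast, the minimal sketch below (a hypothetical example using Python and scikit-learn, not drawn from any real forensic tool) trains an interpretable model and an opaque one on the same synthetic data: the linear model exposes per-feature coefficients that an expert could interrogate in court, while the neural network offers no comparably legible account of its decision.

```python
# Minimal sketch: interpretable vs. "black box" models on synthetic data.
# Hypothetical illustration only -- not any real forensic system.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=4, random_state=0)

# Interpretable: each coefficient shows how a feature pushes the decision.
interpretable = LogisticRegression().fit(X, y)
print("Logistic regression coefficients:", interpretable.coef_)

# Opaque: thousands of weights spread across hidden layers; no single
# weight explains why one individual was scored the way they were.
black_box = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                          random_state=0).fit(X, y)
print("MLP weight matrix shapes:", [w.shape for w in black_box.coefs_])
```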
b. Proprietary algorithms
Technology companies often refuse to disclose their AI algorithms, claiming trade secret protection. This prevents defense lawyers from challenging the methodology used to generate the evidence, as seen in State v. Loomis (2016), where the defendant could not examine the proprietary COMPAS risk assessment tool used at sentencing.
c. Bias and discrimination
AI systems can reinforce or amplify biases present in their training data. For example, facial recognition systems have shown markedly higher error rates for minorities, leading to wrongful arrests.
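A minimal sketch of the kind of per-group error analysis at issue (all records and group labels below are invented for illustration; no real benchmark or tool is represented):

```python
# Minimal sketch: per-group false match rate for a face-matching tool.
# All data here is invented for illustration purposes.
from collections import defaultdict

# Each record: (demographic_group, system_said_match, ground_truth_match)
results = [
    ("group_a", True, False), ("group_a", False, False), ("group_a", True, True),
    ("group_b", True, False), ("group_b", True, False), ("group_b", True, True),
]

false_matches = defaultdict(int)
non_matches = defaultdict(int)
for group, predicted, actual in results:
    if not actual:                 # ground truth: two different people
        non_matches[group] += 1
        if predicted:              # system wrongly declared a match
            false_matches[group] += 1

for group in sorted(non_matches):
    rate = false_matches[group] / non_matches[group]
    print(f"{group}: false match rate = {rate:.0%}")
```

If such rates diverge sharply between groups, the tool's output carries a different risk of error depending on who the defendant is, which is precisely the fairness concern raised above.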
d. Chain of custody and authenticity
When AI is used to process or generate evidence (for example, deepfake detection or image enhancement), questions arise about the integrity and authenticity of that data.
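One common safeguard is to cryptographically fingerprint the original file before any AI processing, so the court can later verify that the underlying record was not altered. A minimal sketch (the file name is hypothetical) using Python's standard hashlib module:

```python
# Minimal sketch: fingerprinting an evidence file before AI processing.
# The file name is hypothetical; hashlib is from the standard library.
import hashlib

def sha256_of(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Record the digest when the evidence enters custody...
original_digest = sha256_of("cctv_footage_original.mp4")

# ...and verify it again before trial: any alteration breaks the match.
assert sha256_of("cctv_footage_original.mp4") == original_digest
```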
e. Violation of the right to confront witnesses
If AI-generated conclusions replace human testimony, the accused's right to confront and cross-examine the source of the evidence is undermined: an algorithm cannot be put on the stand.
4. Evidentiary Standards and Admissibility Tests
India:
Under the Indian Evidence Act, 1872:
• Sections 3 and 65B govern the admissibility of electronic records; Section 65B(4) in particular requires a certificate authenticating the computer output.
• AI evidence must be authenticated and must meet relevancy and reliability standards.
5. Judicial Trends and International Case Law
• India: In Anvar P.V. v. P.K. Basheer (2014), the Supreme Court emphasized the need for strict compliance with Section 65B for electronic evidence.
• UK: In R v. T (2010), the Court of Appeal criticized the use of untested forensic techniques, establishing that expert evidence must be demonstrably reliable.
• Canada: Courts require that novel scientific techniques be proven reliable before admitting AI-based evidence, in line with R v. Mohan (1994).
• European Court of Human Rights (ECtHR): In cases such as Ramanauskas v. Lithuania, the Court stressed that covert methods and electronic evidence must be judicially reviewable to ensure a fair trial.
6. Procedural Safeguards and Recommendations
To safeguard due process, courts and legislators must establish:
• Algorithmic Transparency: Mandate disclosure of AI methodologies, source code (under judicial review), and training datasets.
• Expert Witness Scrutiny: Require human experts to explain AI-generated outputs in court.
• Pre-trial Disclosure: Make all AI-related evidence available to the defense well in advance.
• Bias Audits: Courts must consider whether the AI tool has undergone independent bias and reliability testing (a minimal audit sketch follows this list).
• Human Oversight: Final decisions should always be subject to human judicial discretion, not fully automated tools.
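As a rough illustration of what such a bias audit can compute, the hypothetical sketch below applies a disparate impact ratio (the "four-fifths rule" of thumb borrowed from US employment law) to invented risk-score outcomes; a real audit would be far more extensive.

```python
# Minimal sketch: disparate impact ratio for a risk assessment tool.
# All numbers are invented for illustration; a real audit would use the
# tool's actual outcomes and rigorous statistical testing.

# Share of each group flagged "high risk" by the hypothetical tool.
flagged = {"group_a": 120, "group_b": 300}
assessed = {"group_a": 1000, "group_b": 1000}

rates = {g: flagged[g] / assessed[g] for g in flagged}
ratio = min(rates.values()) / max(rates.values())

print(f"Selection rates: {rates}")
print(f"Disparate impact ratio: {ratio:.2f}")
# A ratio below 0.8 is a common red flag warranting closer scrutiny.
print("Flag for review" if ratio < 0.8 else "Within common threshold")
```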
7. Ethical and Constitutional Concerns
The use of AI in criminal trials brings ethical concerns beyond legal frameworks:
• Automation of Justice: Delegating decisions to machines may erode human empathy, discretion, and contextual analysis.
• Due Process vs. Efficiency: Speed and scale cannot override the core principle of individualized justice.
• Digital Divide: Defendants lacking resources may not have expert support to challenge AI tools used against them.
Conclusion
AI technologies hold the potential to enhance evidence collection and streamline judicial procedures. However, their opaque nature and potential for bias raise significant concerns about fairness, transparency, and accountability. Until robust legal frameworks and technical standards are in place, courts must tread carefully. Protecting the constitutional rights of the accused and ensuring a fair trial must remain paramount—even in an age of algorithmic efficiency.
Frequently Asked Questions (FAQ)
Q1. What is AI-generated evidence?
AI-generated evidence refers to information produced, processed, or analyzed by artificial intelligence tools, such as facial recognition systems, predictive policing algorithms, or risk assessment software used in criminal justice.
Q2. Is AI evidence legally admissible in court?
Yes, but only if it meets the legal standards of admissibility—such as relevance, authenticity, reliability, and fairness. In India, this is governed primarily by Sections 65B and 3 of the Indian Evidence Act, 1872.
Q3. What are the major legal concerns with AI evidence?
• Lack of transparency (“black box” algorithms)
• Bias and discrimination in datasets
• Infringement of the right to confront evidence
• Inability to challenge proprietary software
• Errors leading to wrongful convictions
Q4. Has the Indian judiciary addressed AI evidence specifically?
While AI-specific rulings are still developing, Indian courts have addressed related issues in cases involving electronic evidence (Anvar P.V. v. P.K. Basheer, 2014). Courts require strict compliance with procedural safeguards under Section 65B.
Q5. Can the defense challenge AI-generated evidence?
Yes. The defense can question the reliability, methodology, and potential biases in the AI system used to generate the evidence. However, challenges are difficult if the algorithm is proprietary or lacks documentation.
Q6. Are there any real-life cases of wrongful conviction due to AI?
Yes. In the U.S., Robert Williams was wrongly arrested based on faulty facial recognition. Similar risks exist in other jurisdictions where AI outputs are blindly trusted.
Q7. What safeguards should courts apply before accepting AI evidence?
• Full disclosure of the algorithm and data used
• Independent expert testimony
• Opportunity for cross-examination
• Bias testing and reliability checks
• Final human review before decision-making
Q8. Does AI evidence violate the right to a fair trial?
It can, if not properly regulated. Use of AI evidence without adequate transparency, cross-examination rights, and bias detection could violate fundamental rights under Article 21 of the Indian Constitution and international human rights treaties.
Q9. What is the “black box problem”?
The “black box problem” refers to the lack of interpretability of many AI models—especially deep learning systems. Judges, lawyers, or defendants may not understand how or why the AI reached a certain conclusion.
Q10. Is India regulating AI in the legal system?
Currently, India lacks a dedicated legal framework for regulating AI use in courtrooms. However, discussions are underway within legal and policy circles, and the judiciary is cautiously evaluating such evidence within existing legal doctrines.
