Author: Mandeep Singh, IIM Rohtak
Abstract
India’s criminal justice system is evolving under the influence of artificial intelligence (AI). With the introduction of the Bharatiya Nyaya Sanhita (BNS) and the Bharatiya Sakshya Adhiniyam (BSA) in 2023, traditional evidence law is adjusting to technological realities. This article examines how AI affects criminal investigations, charge framing, trials, and sentencing, highlighting the changes under the BNS and BSA, and discusses the challenges of authenticity, admissibility, and fairness. It concludes by suggesting balanced approaches to embracing AI without compromising justice.
To the Point
AI in Criminal Investigations
Law enforcement agencies are increasingly using AI tools, such as facial recognition and predictive analytics, to solve crimes. The Bharatiya Nyaya Sanhita (BNS), 2023, recognizes offences involving identity theft, cyberstalking, and fabricated digital content such as deepfakes. Police and investigators can now officially rely on AI-based digital evidence. However, AI outputs must always be verified through human oversight to prevent the wrongful implication of individuals due to algorithmic errors or biases.
AI and Charge Framing
Judges review the evidence during the charge-framing phase to determine whether there is sufficient justification to bring charges against an individual. Under the Bharatiya Sakshya Adhiniyam (BSA), digital records, including AI-generated reports, are treated as primary evidence. Yet the law demands strict verification: prosecutors must provide authenticity certificates or expert testimony explaining how the AI-generated evidence was obtained. Courts will reject AI evidence if proper documentation is missing, underscoring that technology alone is not sufficient; human verification remains essential.
AI-Generated Evidence at Trial: Admissibility and Reliability
When AI evidence reaches the courtroom, judges rely on BSA provisions that explicitly recognize digital documents, placing them on an equal footing with traditional paper evidence. Section 63 of the BSA requires strict authentication of electronic evidence, including comprehensive certificates attesting to the accuracy and reliability of AI outputs.
Defence teams are fully entitled to cross-examine expert witnesses on AI processes and any potential biases. Maintaining an unbroken chain of custody for digital evidence is critical; any breach may lead courts to doubt its reliability and discard the evidence.
AI’s Role in Sentencing
While AI is not officially mandated for sentencing decisions in India, discussions are ongoing about using AI-based risk assessments, tools that predict the likelihood of reoffending, to guide sentencing. The BNS does not yet prescribe specific AI sentencing guidelines, leaving judges free to consult AI-generated assessments cautiously. Courts must ensure transparency and fairness, closely examining algorithmic biases or inaccuracies. Ultimately, sentencing remains the exclusive domain of judges, with AI serving a supportive, not decisive, role.
Legal Terms Simplified
Digital Record: Electronic data (emails, videos, AI analyses) treated legally as “documents.”
Chain of Custody: Clear documentation of how digital evidence is handled from collection to court presentation, ensuring no tampering occurs.
Expert Testimony: Specialists (such as forensic or tech experts) who explain complex AI evidence to courts.
Probative Value: How effectively evidence proves something important in a trial. Reliable AI evidence has high probative value.
Hearsay (AI context): Normally refers to indirect human statements. Machine-generated data generally isn’t hearsay, but human-influenced AI outputs might need special consideration.
Relevant Statutes and Legal References
Bharatiya Sakshya Adhiniyam (BSA), 2023:
Recognizes electronic/digital evidence explicitly.
Requires certificates of authenticity under Section 63 for electronic evidence, similar to Section 65B of the erstwhile Indian Evidence Act, 1872.
Bharatiya Nyaya Sanhita (BNS), 2023:
Modern penal code addressing cybercrime explicitly, including the creation of fake digital content such as deepfakes.
Criminalizes fabricating and using false electronic evidence (Sections 228-233).
Specifically criminalizes digital forgery and defamation through electronic means.
Information Technology Act, 2000:
Complements BNS/BSA, penalizing identity theft and digital impersonation (Sections 66C, 66D).
Case Laws Influencing AI Evidence Use:
Anvar P.V. v. P.K. Basheer (2014): Landmark ruling mandating authenticity certificates for electronic evidence, emphasizing strict standards.
Arjun Panditrao Khotkar v. Kailash Kushanrao Gorantyal (2020): Confirmed strict compliance with authenticity-certificate requirements and narrowly clarified the exceptions.
Selvi v. State of Karnataka (2010): Prohibited involuntary investigative techniques violating constitutional rights, suggesting courts will cautiously handle intrusive AI methods.
Conclusion
Under the new Bharatiya Nyaya Sanhita and Bharatiya Sakshya Adhiniyam, AI is transforming evidence law. Although AI-generated evidence is now officially recognized by the legal system, strict verification of its validity and reliability is still required. Efficiency and fairness must be balanced: AI can greatly assist courts as long as human oversight is maintained, biases are managed, and the technology remains open and transparent. To fully utilize AI’s advantages without compromising justice or equity, ongoing judicial training and unambiguous procedural standards will be essential.
FAQs
Q1. What are BNS and BSA, and why do they matter?
BNS (2023) replaces the Indian Penal Code to tackle modern crimes explicitly. BSA (2023) updates evidence law, explicitly recognizing digital and AI evidence, ensuring modern technology is legally admissible.
Q2. Is AI evidence allowed in courts now?
Yes. Under the BSA, AI-generated evidence is admissible but must pass stringent tests for authenticity, reliability, and transparency, usually backed by expert testimony and authenticity certificates.
Q3. Can AI itself testify in court?
No. AI cannot testify directly as a witness because it is not legally a person. Its outputs can be presented through human experts who explain the underlying process and certify its accuracy.
Q4. How does the law handle deepfakes and fake digital evidence?
The BNS criminalizes creating or using false digital evidence. Courts strictly require authenticity certificates, and forensic experts routinely analyze digital evidence to detect manipulation.
Q5. Have Indian courts already addressed AI-generated evidence?
Not directly at the higher courts, but existing rulings such as Anvar (2014) guide the handling of digital evidence. Courts have dealt with AI indirectly through electronic evidence cases and remain cautious and thorough when handling digital technology.
Q6. Will judges rely on AI tools for sentencing?
Currently, AI is not officially adopted for sentencing. Judges may cautiously consult AI tools to inform their judgments, while ensuring fairness, transparency, and the absence of bias.
Q7. What safeguards ensure AI evidence is reliable?
Certificates of authenticity, maintaining a clear chain of custody, expert testimony, transparent algorithm explanations, and judicial scrutiny provide robust safeguards against unreliable AI evidence.
References
Bharatiya Nyaya Sanhita, 2023.
Bharatiya Sakshya Adhiniyam, 2023.
Information Technology Act, 2000.
Law Commission of India, reports on evidence and digital records.
European Union Artificial Intelligence Act (draft), for comparative perspective.
