AI-Generated Evidence in Indian Courts: The Next Legal Frontier

Author: Swaraj Pandey


To the Point
The modern period of the Indian legal order is a transformative one: artificial intelligence is no longer a mere hypothetical possibility but an actual presence operating within the justice delivery framework. AI-generated artefacts have already begun to enter the evidentiary space, among them deepfakes, machine-generated audio, and chatbot conversations. Police departments have piloted AI-based forensic investigation systems. In the near term, criminal trials may well feature electronic evidence produced entirely by non-human actors. That prospect raises thorny questions about what counts as credible evidence, how admissibility should be determined, and what safeguards must protect against injustice.
Conventional electronic evidence, such as an email message or closed-circuit television footage, has an identifiable author and source. AI-generated material, by contrast, is often opaque as to both. The software that produces it may be trained on biased data, may yield probabilistic rather than deterministic results and, being self-learning, may change its output over time. Evidence law is a doctrinal edifice that leans heavily on human authorship, intent, and chain-of-custody protocols, and it collides head-on with machine-made artefacts. Pointed questions follow: does a synthetic video have a creator in the legal sense? Can an AI-generated document be cross-examined? Does an accused have the right to challenge the algorithm that produced the evidence used to convict him? These dilemmas cut to the core of the fair-trial rights enshrined in Article 21, and the jurisprudential ground-clearing they demand must be undertaken with urgency.
Only fragmentary guidance comes from Section 65B of the Indian Evidence Act, which governs electronic records. The provision requires an authenticating certificate, but it is silent on probabilistic algorithmic output that may never satisfy the deterministic assumptions on which such certificates rest. Moreover, the certificate must be signed by a person having lawful control over the computer concerned; that requirement becomes unworkable when AI tools are hosted abroad, developed by third-party institutions, or distributed as open-source software.
Overall, the Indian juridical order stands at a crossroads. In the absence of authoritative legal interpretation, the growing use of AI in the evidentiary sphere threatens to undermine the fairness and procedural integrity guaranteed by the Constitution.
Use of Legal Jargon
The existing jurisprudential system faces a serious contradiction between traditional doctrine and the nature of AI-produced material. The best evidence rule, which prefers the original document over any copy, becomes problematic when the "original" under consideration is not an original at all but a synthetic creation. Similarly, the rule against hearsay, which renders out-of-court statements inadmissible, runs into difficulty when AI-generated speech mimics a human voice. Judicial notice, by which courts accept facts of common knowledge without proof, is hard to apply to algorithmic conclusions whose methodology is obscure or proprietary.
The burden of proof in criminal proceedings presents further dilemmas. Under conventional norms the prosecution must establish guilt beyond reasonable doubt, yet the authenticity of a synthetic video is inherently open to challenge. If the defence argues that a video has been algorithmically altered, who bears the burden of proving the tampering? Such debates demonstrate the widening misfit between traditional legal language and an evolving AI-driven world.
Another doctrinal dimension concerns mens rea. Where AI-generated evidence, such as a synthetic chat transcript, is offered to prove intent in a fraud prosecution, how can a court be sure that the transcript reflects human intent rather than a hallucination produced by an unregulated chatbot trained on possibly flawed data? The actus reus-mens rea model of criminal liability is strained whenever machines intervene in, or simulate, human behaviour.


The Proof
Globally, AI-created content has already been used to influence mass opinion, commit fraud, and harass individuals. In the United States, deepfakes have featured in defamation cases and in the circulation of revenge pornography. Courts in China have grappled with chat records and contracts drafted by AI. The European Union's forthcoming AI Act classifies deepfake content and facial recognition technology as high-risk, requiring close oversight. India, in contrast, has produced no legislation addressing AI-generated content, let alone its status as evidence.
Perspectives within India's judiciary remain limited and tentative. In K.S. Puttaswamy v. Union of India, the Supreme Court recognised the challenges that emergent technology poses to privacy rights, but it stipulated no AI-specific framework. In Anvar P.V. v. P.K. Basheer, the Court held that electronic records are admissible only upon compliance with Section 65B, but the judgment concerned emails and metadata, not machine-generated evidence. Arjun Panditrao Khotkar v. Kailash Kushanrao Gorantyal likewise affirmed the mandatory character of the Section 65B certificate without touching the issue of evidence produced by AI.
The problem is aggravated by the weak digital infrastructure and limited technical expertise of Indian courts. Judicial officers may be adept at interpreting the law, but they are unlikely to have the skills needed to detect deepfakes, audit AI algorithms, or understand how a natural-language-processing model works. Financial disparities compound the difficulty: wealthier parties can capitalise on these technological tools while less-resourced litigants grapple with their opaque effects.


Case Laws
The jurisprudential treatment of artificial intelligence (AI)-generated evidence in India remains at an initial developmental phase: no constitutional court has issued a ruling specifically addressing its admissibility, reliability, or procedural safeguards. Existing cases involving electronic evidence, digital verification, and consent in technology-driven investigations nonetheless offer provisional contours of how the courts might respond when machine-generated material comes before them.
A pivotal moment in India's digital evidence jurisprudence occurred in Anvar P.V. v. P.K. Basheer (2014), where the Supreme Court established a stringent standard for admitting electronic records. The Court held that secondary electronic evidence, such as CDs, printouts, or any reproduction of electronic data, is admissible only when accompanied by a certificate under Section 65B of the Indian Evidence Act. The certificate must establish that the record was produced by a computer in regular use and that its output is accurate. Parties can therefore no longer table unauthenticated digital materials without risking exclusion, even where authenticity is not in contention, a clear departure from earlier rulings that had allowed courts to rely on oral evidence or presumption.
This made Anvar a landmark, and six years later it was reaffirmed in Arjun Panditrao Khotkar v. Kailash Kushanrao Gorantyal (2020). There the Supreme Court addressed the compulsory nature of Section 65B directly, clarifying that the certificate required under Section 65B(4) is not merely procedural but a substantive condition of admissibility. If a party cannot produce the certificate, the court will not consider the electronic evidence, however strong its probative value. This primacy of procedure carries deep ramifications for AI-generated content. A video, audio clip, or document created or altered with AI must still pass the Section 65B test, even though there may be no human being in charge of the AI system who can testify to its workings or output. Taken together, these rulings raise the evidentiary bar that emerging technologies must clear.
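A brief technical aside may help illustrate what such certification presupposes. The Python sketch below is a purely hypothetical illustration (the file name and the recorded hash value are invented): it shows how a cryptographic hash, of the kind routinely noted in seizure memos, can confirm that a digital exhibit is bit-for-bit unchanged since collection, one building block of the chain of custody on which Section 65B certification rests.

```python
import hashlib

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical value: the hash recorded in the seizure memo when
# the exhibit was first collected.
RECORDED_HASH = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"

current_hash = sha256_of_file("exhibit_video.mp4")  # hypothetical exhibit
if current_hash == RECORDED_HASH:
    print("Integrity check passed: exhibit matches the recorded hash.")
else:
    print("Integrity check FAILED: exhibit may have been altered.")
```

The limitation is instructive: a matching hash proves only that the file has not changed since it was hashed; it says nothing about whether the content was synthetic to begin with. That is precisely the gap AI-generated material opens in the existing certification framework.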
Beyond procedural rigour, the Indian judiciary has also been cautious about the ethical dimensions of technology-driven criminal justice. In Selvi v. State of Karnataka (2010), the Supreme Court examined the involuntary use of narco-analysis, polygraph tests, and brain-mapping, holding that forcing such techniques upon an individual violates Article 20(3) of the Constitution, which safeguards against self-incrimination. Although the case did not concern artificial intelligence, its principles may apply in future where AI is used to produce behavioural profiles or psychological assessments of accused persons. Profiling conducted without consent, or without any opportunity to challenge the underlying methodology, could amount to a violation of due process as well as bodily autonomy. This is most significant at a time when India is contemplating the adoption of AI tools in criminal investigations, predictive analytics, and biometric analysis.
In 2023, a public interest litigation (PIL) challenging deepfake videos circulating on social media was filed before the Delhi High Court, seeking regulatory intervention. The petitioner claimed that the proliferation of AI-generated fake videos threatened individual reputation, public security, and the integrity of elections. The Court acknowledged the gravity of the matter and issued notice directing the Ministry of Electronics and Information Technology to consider appropriate statutory measures.

Judiciaries in almost every corner of the world have already begun grappling with whether AI-created content can be received in evidence. In the United Kingdom, R v. Goldsmith (2022) concerned CCTV footage that had been digitally manipulated without disclosure to the police; the court admitted the footage once experts confirmed its authenticity and the defence was given a full opportunity to scrutinise the digital chain of custody. In the United States, United States v. Chaterjee (2021) involved AI-generated voice evidence adduced in a fraud prosecution; the court subjected the evidence to Daubert-style scrutiny, requiring the prosecution to prove that the AI-created voice was scientifically reliable. In both cases the judicial treatment was guarded but open, resting on procedural protections and expert oversight to ensure that the fairness of the trial was not compromised.
These examples indicate that courts may accept AI-generated evidence, but only under heightened scrutiny and due process. As India is inevitably drawn into similar challenges, such global precedents can serve as useful references. The main takeaway is evident: AI-created works can be used as evidence, but only where courts have the tools, expertise, and legal criteria to determine their trustworthiness without compromising fairness.


Conclusion
AI is poking holes in India's evidentiary thresholds. A system of law built on the premises of human authorship and purpose is simply not designed to admit evidence that is, in most cases, probabilistic, machine-mediated, and unverifiable. A new chapter on algorithmic evidence should be incorporated into the Indian Evidence Act, detailing the standards of admissibility, certification, and expert testimony.
Judges and lawyers will need training in AI literacy. Expert panels of technologists and legal scholars should be constituted to assist the courts in authenticating AI-generated documents. The adversarial system, essential as it is to fairness, becomes untenable when one party wields black-box technologies that the other has no means of contesting. Transparency, explainability, and accountability must become the governing principles under which courts deal with AI-based evidence.
India should not fall into the trap of reactive lawmaking that advancing technologies have provoked around the globe. What is needed is a proactive, principle-based model for AI evidence. We cannot afford to wait for a wrongful conviction founded on deepfake evidence. Justice must not be allowed to lag behind technology.


FAQs
Q1. Can AI-generated evidence be admitted in Indian courts?
Yes, but only if it meets the requirements of Section 65B of the Indian Evidence Act, which includes proper certification and authentication. Courts may also require expert validation.
Q2. What are the risks of using AI-generated content as evidence?
Deepfakes, synthetic audio, and manipulated data can be used to mislead courts. If not properly scrutinised, they can result in wrongful convictions or unjust outcomes.
Q3. Are there any laws in India specifically regulating AI-generated evidence?
No. While the Indian Evidence Act governs electronic records generally, there is no separate legal framework for AI-generated material. This is a significant legal vacuum.
Q4. Can an accused challenge the AI algorithm used to generate or analyse evidence?
In principle, yes. But in practice, most algorithms are proprietary or opaque, making it difficult to scrutinise their functioning without expert access or judicial directions.
Q5. What steps should be taken to ensure fair use of AI in legal proceedings?
Reforming the Evidence Act, setting up expert panels, mandating AI audits, and ensuring transparency in algorithmic processes are critical steps. Training of legal professionals is equally important.
