Admissibility of Deepfake Evidence in the Court of Law

Author: Kanak Kumari, Symbiosis Law School, Nagpur

To the Point
Deepfakes are fake videos, images, or audio created digitally through artificial intelligence in order to trick viewers into believing that the content is genuine. The technology effectively blurs the border between truth and fabrication.
Deepfakes are produced mainly using powerful machine learning techniques and artificial neural networks. Generative Adversarial Networks (GANs) and autoencoders are the two major technologies in this field. A GAN works as a competition between two neural networks: a generator network, which synthesizes content modelled on real data, and a discriminator network, which judges whether the synthetically generated content can be distinguished from the real thing. Other advanced methods used in creating deepfakes include Variational Autoencoders (VAEs), Recurrent Neural Networks (RNNs) for learning sequential signals such as speech and lip movement, and Convolutional Neural Networks (CNNs) for analysing and manipulating images frame by frame.
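To make the generator/discriminator interplay concrete, the following is a minimal, illustrative PyTorch sketch of a single GAN training step. The network sizes, `latent_dim`, `data_dim`, and the `training_step` helper are hypothetical placeholders chosen for brevity; they do not describe any particular deepfake system, only the basic adversarial idea described above.

```python
# Minimal GAN sketch (illustrative only): a generator learns to produce
# fake samples while a discriminator learns to tell fake from real.
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # hypothetical sizes (e.g. flattened 28x28 images)

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, data_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def training_step(real_batch: torch.Tensor):
    batch_size = real_batch.size(0)
    real_labels = torch.ones(batch_size, 1)
    fake_labels = torch.zeros(batch_size, 1)

    # Discriminator step: score real data as real and generated data as fake.
    noise = torch.randn(batch_size, latent_dim)
    fake_batch = generator(noise).detach()
    d_loss = loss_fn(discriminator(real_batch), real_labels) + \
             loss_fn(discriminator(fake_batch), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make the discriminator label its output as real.
    noise = torch.randn(batch_size, latent_dim)
    g_loss = loss_fn(discriminator(generator(noise)), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

Repeated over many such steps, the two networks push each other until the generator's output becomes difficult to distinguish from real data, which is precisely what makes deepfakes so convincing.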
Deepfake methods are flexible and support a wide variety of manipulations: face swapping, facial reenactment, talking-face generation, and editing of facial attributes. Beyond human appearances, the technology can also create or modify objects, scenes, animals, or even entire surroundings in videos and images, producing hyper-realistic but artificial footage of virtually any circumstance. Equally striking is the pace at which the technology is advancing: deepfakes are becoming ever more realistic and ever more accessible to ordinary people. What once required costly equipment and a highly trained skill set can now be done with free smartphone apps and only a handful of reference photos or video clips. Any person with basic computer knowledge can now generate a deepfake.
Although deepfake-generated content is most often discussed in terms of its potential for abuse, the technology cuts both ways and can also be put to harmless and lawful uses. The entertainment industry has exploited deepfakes to de-age actors, create digital clones, and streamline film production. For example, deepfake face-swapping has generated substantial savings in production costs for studios such as Disney by de-aging characters or digitally reviving deceased actors. Beyond entertainment, deepfakes are used in education and in masking identities, for instance to protect interviewees in documentaries by obscuring their faces.
Nevertheless, these positive applications notwithstanding, deepfakes carry a considerable number of threats, and their potency makes them an effective instrument of exploitation:
Trust Erosion and Disinformation: Deepfakes can gravely erode public trust by making it hard to tell real content from altered content. False information distributed through deepfakes can damage reputations, cause confusion in critical situations, and heavily influence public opinion or elections.
Fraud and Identity Theft: The ability to impersonate and imitate people so accurately opens the door to a range of fraudulent activities. Scammers can use cloned voices to trick a person into transferring money or granting access to confidential information.
Non-consensual and Image-Based Abuse: One of the most disturbing and common misuses of deepfakes is the creation of non-consensual explicit content. This form of image-based abuse is used for revenge, blackmail, and online harassment.
Bullying and Hoaxes: Deepfakes can also be weaponised for bullying, as demonstrated in reported incidents involving fabricated images of cheerleaders.

Abstract
This article examines the intricate challenges that deepfake technology poses to the admissibility of evidence in a court of law. Deepfakes, synthetic media created with the help of advanced artificial intelligence, blur the border between fabrication and truth and thus fundamentally challenge traditional rules of evidence concerning authenticity and reliability. Through a discussion of fundamental digital evidence principles and emerging case law, the article explains how judicial systems are struggling to cope with this new category of evidence. It concludes by offering a multi-dimensional way forward, urging robust legislation, technological safeguards, and judicial fine-tuning to preserve the sanctity of justice in an increasingly synthetic reality.


The Proof
Digital evidence covers any information stored in digital form that may be used in a court of law. Sound principles of collection, preservation, and handling are the cornerstones that keep such evidence uncorrupted by the time it reaches trial. To be admissible in court, digital evidence has to meet a few vital requirements:
Relevance: The evidence must have a valid bearing on the facts and issues in the case at hand; it has to help establish or disprove a material fact. The court may exclude irrelevant evidence, such as hearsay or mere speculation.
Authenticity: The evidence must be genuine, and it must be demonstrable that it has not been manipulated or fabricated. Proving authenticity requires close documentation of the origin of the evidence, its history, and the exact process by which it was collected and stored. Cryptographic hashes, digital signatures, and metadata can serve as means of proving authenticity (a simple hashing sketch follows the chain-of-custody discussion below).
Reliability: The evidence must be credible and dependable. Reliability turns on whether appropriate collection, preservation, and analysis procedures were followed without compromising the evidence. The proficiency of the forensic examiner, the validity of the tools employed, and a scrupulous record of the whole procedure are key factors affecting reliability.
Completeness: The evidence provided must present the facts fully and impartially, addressing possible alternative explanations or contexts.
The chain of custody lies at the centre of both reliability and authenticity. It is the documented trail that records the movement and handling of digital evidence from the moment of its capture until it is finally produced at trial. The very essence of deepfakes, being deeply realistic yet artificial and designed to pass a fabricated person or event off as real, runs squarely against these precepts of digital evidence. Synthetic media are becoming harder and harder to separate from real evidence. Deepfake technology is a direct assault on the presumption of integrity that has anchored conventional digital evidence. Whereas the general possibility of data alteration has long been considered by courts to be insufficient, on its own, to render data untrustworthy, deepfakes raise that possibility to a completely new degree of plausibility. This compels a fresh re-assessment of how authenticity can be established in the digital era: the task shifts from showing that evidence has not been tampered with to proving that it has not been fabricated altogether by AI-powered tools. A witness who recognises a voice or an image may no longer be an adequate form of authentication given how accomplished deepfake technology has become. The legal system must therefore be ready to adapt to a new reality in which digital media can no longer automatically be taken at face value.
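As an illustration of how a cryptographic hash can support the chain of custody mentioned above, the following minimal Python sketch records a SHA-256 digest of an evidence file at the time of collection so that any later copy can be checked against it. The file name, custodian name, and log format are hypothetical placeholders for illustration, not a prescribed forensic procedure.

```python
# Minimal chain-of-custody sketch (illustrative only): record a SHA-256
# digest when evidence is collected, then verify later copies against it.
import hashlib
import json
from datetime import datetime, timezone

def sha256_of_file(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def record_custody_entry(path: str, custodian: str) -> dict:
    """Create a custody log entry capturing the file's hash, handler, and time."""
    return {
        "file": path,
        "sha256": sha256_of_file(path),
        "custodian": custodian,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

def verify_integrity(path: str, entry: dict) -> bool:
    """Check that the current copy still matches the originally recorded hash."""
    return sha256_of_file(path) == entry["sha256"]

if __name__ == "__main__":
    # Hypothetical example: hash a video exhibit at seizure, verify before trial.
    entry = record_custody_entry("exhibit_video.mp4", "Investigating Officer A")
    print(json.dumps(entry, indent=2))
    print("Integrity intact:", verify_integrity("exhibit_video.mp4", entry))
```

A matching digest shows that a copy is bit-for-bit identical to what was originally seized; it cannot, of course, show that the originally seized file was not itself a fabrication, which is precisely the gap deepfakes exploit.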

The Legal Jargon
The Indian Evidence Act, 1872: Sections 65A & 65B
With the paradigm shift brought about by the digital revolution, the Indian legal system accommodated technology in court proceedings through the Information Technology Act, 2000. That Act amended the Indian Evidence Act, 1872 by inserting Sections 65A and 65B, which specifically govern the admissibility of electronic records and the forensic questions surrounding their authenticity.
Section 65A provides that the contents of electronic records may be proved only in accordance with the procedure laid down in Section 65B.
Section 65B concerns the admissibility of electronic records. An electronic record produced by a computer, whether printed on paper or stored in optical or magnetic media, is referred to as "computer output". If the conditions laid out in sub-sections (2) and (4) are satisfied, such output is deemed to be a document and is admissible without further proof or production of the original.
• Section 65B(2) conditions: These conditions go to the reliability of the computer system and the integrity of the data. They require that the computer which produced the record was in regular use during the material period; that information of the kind contained in the electronic record was regularly fed into the computer in the ordinary course of those activities; and that the computer was operating properly at the time, or that any malfunction did not affect the accuracy of the record.
• Section 65B(4) mandatory certificate: This requires a certificate signed by a person occupying a responsible official position in relation to the operation of the relevant device. The certificate should identify the electronic record, describe the manner in which it was produced, give particulars of any device involved, and state that the conditions in Section 65B(2) were complied with. Such a certificate is required whenever copies of electronic records are sought to be used in legal proceedings.

Case Laws
State (N.C.T of Delhi) vs. Navjot Sandhu @ Afsan Guru (2005) (Parliament Attack Case):
The Parliament Attack case was among the first in which the Supreme Court dealt with electronic evidence. The Court admitted call detail records (CDRs) as evidence even without strict compliance with the procedural requirements of Sections 65A and 65B of the Indian Evidence Act. The ruling reflected an early, still-developing understanding of digital evidence integrity, and the procedural gaps it tolerated would later be squarely addressed.
Anvar P.V. vs. P.K. Basheer & Ors. (2014):
This landmark decision sought to bring clarity and consistency to the admissibility of electronic evidence in India. The Supreme Court held that a certificate under Section 65B(4) is a condition precedent, a sine qua non, for the admissibility of electronic records tendered as secondary evidence. The Court also emphasised the importance of establishing the authenticity and integrity of electronic evidence, given how prone it is to manipulation. Crucially, the judgment overruled Navjot Sandhu on this point, holding that Sections 65A and 65B constitute a special code relating to electronic evidence and therefore take precedence over the general provisions of Sections 63 and 65 of the Act. The Court further distinguished primary evidence: where the original electronic record is itself produced (Section 62), no Section 65B certificate is required.
Shafhi Mohammad vs. State of Himachal Pradesh (2018):
This case took a differing judicial view, seeking to resolve the practical difficulty faced by parties in obtaining the required Section 65B certificate. The Supreme Court opined that the certificate requirement under Section 65B(4) could be relaxed where the party producing the electronic evidence does not possess or control the device on which the electronic record was produced. The aim was to avoid shutting out otherwise valuable evidence merely because of a procedural barrier the party could do little about.

Conclusion
The age-old axiom that seeing is believing, as it applies to the gathering and presentation of evidence, is thoroughly undermined once synthetic media become, to the best of our knowledge, indistinguishable from the real thing. The result is a crisis of confidence in digital media: an inability to establish the truth even when genuine material is placed before the court, giving rise to the so-called liar's dividend, whereby genuine evidence can be dismissed as fake. The excessive cost of authenticating or refuting deepfakes creates further access-to-justice problems and places litigants with limited means in a harsh predicament. Additionally, existing deepfake detection technologies, though improving, are not yet reliable or unbiased and remain easy to circumvent, which makes them unfit to serve on their own as authenticating measures in a court of law.

FAQs
Is deepfake evidence admissible in a court of law? The admissibility of any electronic record, including a suspected deepfake, in Indian courts is governed mainly by Section 65B of the Indian Evidence Act, 1872. That provision leaves no room for slackening the requirements of authenticity and reliability. Given how convincingly a fabricated deepfake can be made to appear genuine, it would be exceedingly difficult, perhaps impossible, for such material to legitimately satisfy those requirements, and expert testimony would very likely be needed to expose a deepfake that has been dressed up to look authentic.
Who bears the burden of proof when alleged deepfake evidence is presented? As a rule, the party tendering any evidence must prove its authenticity, and deepfakes raise this burden considerably. If a piece of media is introduced as genuine, the proponent must satisfy the court that it is real, typically through expert testimony and strong corroborating evidence. Conversely, if a party alleges that evidence is a deepfake, it must produce some material in support of that allegation, typically expert analysis.
Can a deepfake itself be treated as genuine evidence? No. Section 65B deals with electronic records that qualify as documents, whereas a deepfake is, by its very nature, a fabrication. It has no genuine original source from which real evidence can be derived, so a deepfake created to deceive cannot be regarded as an authentic record of real events.
What is the role of expert witnesses in deepfake cases? Forensic experts, especially those versed in AI, are critical. They can analyse the digital media, explain deepfake technology to the court, identify indicators of manipulation, and give opinions on whether the evidence is authentic or artificial.
What are the future implications of deepfakes for the justice system? Deepfakes demand the continual development of laws, law-enforcement techniques, and court procedures. They are bound to lead to greater use of expert testimony, stricter authentication requirements for digital evidence, and a heavier reliance on corroboration in litigation. The justice system must keep pace to ensure that technology is used to promote the truth rather than to undermine it.
