Author: Dev Singla, student at Geeta Institute of Law
To the Point
Deepfake technology uses artificial intelligence, especially deep learning, to create highly realistic but fake images, audio, and videos. This poses a growing threat to the justice system, particularly in how evidence is created, presented, and challenged in courts.
In a legal context, deepfakes raise two major concerns:
Evidentiary Manipulation:
Deepfakes can be used to falsify audio or video evidence in a way that’s extremely difficult to detect. This threatens the foundational principles of due process and fair trial, as courts heavily rely on digital evidence in criminal and civil cases.
Threat to Witness Credibility and Public Trust:
Deepfake videos impersonating witnesses, judges, or lawyers could distort public perception and damage the credibility of the judicial process. In high-profile or politically sensitive cases, such misuse could even trigger social unrest or loss of faith in the courts.
At the same time, there is growing debate over whether AI, the same class of technology that powers deepfakes, can perform judicial tasks. Some countries have introduced AI tools to assist judges with tasks such as:
Predicting case outcomes,
Drafting judgments, and
Speeding up case allocation.
However, completely replacing judges with AI or algorithms is neither practical nor desirable. Judgment requires more than data — it involves moral reasoning, empathy, cultural context, and human discretion, none of which AI can fully replicate.
The real issue is not whether AI can replace judges — it’s how courts can balance the use of technology while preserving the human values of justice.
Use of Legal Jargon
To understand the legal challenges deepfake technology brings, it’s important to know a few key terms that come up frequently in legal discussions:
Due Process of Law –
A constitutional guarantee that every individual must be treated fairly and justly by the legal system. If deepfake evidence is used in a case, it can violate this principle by misleading the court.
Admissibility of Evidence –
Courts only accept evidence that is authentic, relevant, and not misleading. Deepfakes challenge this by creating doubts about whether video or audio recordings are real.
Mens Rea and Actus Reus –
For criminal liability, courts look at both the intention (mens rea) and the act itself (actus reus). Deepfake content can confuse both, especially if it shows someone “doing” something they never did.
Burden of Proof –
In criminal cases, the prosecution must prove the accused is guilty beyond a reasonable doubt. If deepfake content is used to frame someone or plant false evidence, the accused is effectively forced to disprove fabricated material, distorting this standard.
Natural Justice –
Includes the right to a fair hearing and unbiased judgment. If courts rely on manipulated evidence or AI makes decisions without human reasoning, it may go against the principles of natural justice.
Judicial Discretion –
Judges use discretion to apply law based on facts, context, and ethics. AI lacks the human experience and sensitivity needed for such decisions.
In short, even as the law evolves to handle digital threats, deepfake technology directly challenges core legal principles such as fairness, authenticity, and judicial independence.
The Proof
Deepfake Crimes Are Rising
According to research by Deeptrace Labs (now Sensity AI), the number of deepfake videos online roughly doubled between 2019 and 2021, and about 96% of the videos identified were non-consensual pornographic content. In India, deepfakes have been misused in political campaigns, pornography, and celebrity impersonation, raising alarms about digital safety.
India Lacks Specific Legislation
While India has the Information Technology Act, 2000 and IPC provisions such as Section 500 (punishment for defamation) and Section 469 (forgery for the purpose of harming reputation), there is no dedicated law regulating the creation or circulation of deepfakes. The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 touch upon impersonating and misleading content only in a limited way.
International Recognition of the Threat
The European Union’s AI Act (2024) subjects deepfakes to transparency obligations, requiring that AI-generated or manipulated images, audio, and video be clearly labeled as such. In the United States, the Department of Defense (through DARPA) has funded deepfake-detection research, and the UK’s Online Safety Act 2023 criminalises the sharing of non-consensual deepfake intimate images.
AI in Judiciary – Real-World Examples
In China, “smart court” AI tools such as “Xiao Fa” assist judges by retrieving similar cases and recommending outcomes, but final decisions are still made by human judges.
In Estonia, the government has reportedly tested an AI “judge” for resolving small civil claims (under €7,000).
In India, the Supreme Court’s AI Committee has developed “SUPACE” (Supreme Court Portal for Assistance in Courts Efficiency) to assist judges in legal research, but not in decision-making.
Deepfakes as Evidence: A Real Concern
In State of Kerala v. Deepak (2022), the High Court rejected a video that was alleged to be doctored, highlighting the need for proper forensic checks and chain-of-custody authentication in digital evidence.
The Supreme Court has also ruled in Anvar P.V. v. P.K. Basheer (2014) that electronic evidence must comply with Section 65B of the Indian Evidence Act, emphasizing the need for authenticity and certification.
These examples show that while AI and deepfake technology are advancing fast, the legal system is still catching up. Without proper safeguards, deepfakes can lead to miscarriages of justice, false accusations, or wrongful acquittals. The sketch below illustrates one basic safeguard mentioned above: verifying a file’s cryptographic hash against the hash recorded when the evidence was seized.
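This is a minimal illustrative sketch in Python, not a court-prescribed procedure: the exhibit file name and the recorded hash value are hypothetical, and real forensic practice relies on certified tools and documented custody logs.

import hashlib

def sha256_of_file(path: str) -> str:
    # Read the file in chunks so large video exhibits do not exhaust memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical hash recorded in the seizure memo at the time of collection.
recorded_hash = "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b"

# Hash of the exhibit as produced in court (hypothetical file name).
produced_hash = sha256_of_file("exhibit_video.mp4")

if produced_hash == recorded_hash:
    print("Integrity check passed: exhibit matches the hash recorded at seizure.")
else:
    print("Integrity check FAILED: the exhibit may have been altered.")

A matching hash shows only that the file is bit-for-bit identical to what was hashed at seizure; it cannot reveal whether the content was fabricated before that hash was taken, which is why hash verification complements, rather than replaces, forensic analysis of the content itself.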
Abstract
Deepfake technology has rapidly evolved from a niche innovation into a serious global concern. By using artificial intelligence to create highly realistic but fake videos, audios, and images, deepfakes pose a direct threat to truth, privacy, and the legal system. Courts, which depend heavily on reliable evidence and fair trials, are particularly vulnerable to manipulated digital content.
At the same time, rapid advancements in AI have sparked debates about whether technology can support — or even replace — human judges. Countries like China and Estonia are experimenting with AI-assisted judicial tools, while India has introduced systems like SUPACE to assist judges, not replace them.
This article explores both these critical issues: the legal risks posed by deepfakes in judicial processes, and the question of whether artificial intelligence is capable of taking over judicial decision-making. While AI can enhance court efficiency and reduce backlog, core aspects of justice — like empathy, discretion, and moral reasoning — remain beyond the reach of machines.
Case Laws
Anvar P.V. v. P.K. Basheer (2014) 10 SCC 473
The Supreme Court ruled that electronic records must meet the requirements under Section 65B of the Indian Evidence Act, 1872, to be admissible. This case is vital when dealing with deepfakes, as it stresses the importance of authentication and certification of digital evidence.
Tukaram S. Dighole v. Manikrao Shivaji Kokate (2010) 4 SCC 329
The Court held that electronic records such as tape recordings are susceptible to tampering and must be carefully scrutinized, with their authenticity strictly proved, before being accepted as evidence, especially in political or sensitive matters. This ruling supports the idea that manipulated content, such as deepfakes, can mislead courts and must be treated with caution.
State of Kerala v. Deepak (2022 Kerala HC)
In this case, the Kerala High Court rejected alleged video evidence due to a lack of proper forensic verification. It highlighted the increasing risk of tampered or deepfake content being presented in court without appropriate checks.
Shafhi Mohammad v. State of Himachal Pradesh (2018) 5 SCC 311
The Supreme Court held that the certificate requirement under Section 65B is procedural and may be relaxed in certain circumstances to ensure that justice is served. This flexibility, however, raises concerns when applied to potentially fake digital content like deepfakes; a three-judge bench in Arjun Panditrao Khotkar v. Kailash Kushanrao Gorantyal (2020) 7 SCC 1 later reaffirmed that the Section 65B(4) certificate is mandatory for secondary electronic records.
Justice K.S. Puttaswamy (Retd.) v. Union of India (2017) 10 SCC 1
Recognized the Right to Privacy as a Fundamental Right under Article 21. This is relevant to deepfakes, especially when they are used to create unauthorized and harmful representations of individuals, violating their dignity and personal liberty.
Riley v. California (U.S. Supreme Court, 2014)
While not an Indian case, this U.S. decision held that digital content on a device requires strong protection and cannot be freely searched without a warrant. It emphasizes the growing role of digital integrity and privacy in modern legal systems, especially relevant in cases involving deepfake data.
Conclusion
Deepfake technology represents both a technical marvel and a legal nightmare. While it showcases the capabilities of AI, its misuse can seriously damage the integrity of judicial proceedings, public trust, and the rule of law. Courts must now face the challenge of determining what is real and what is fabricated — a task that becomes harder as technology advances.
On the other hand, artificial intelligence is also being viewed as a solution to modern judicial problems like case backlogs, research overload, and administrative inefficiencies. However, the question of whether AI can fully replace human judges reveals a clear answer: no, at least not yet, and perhaps never entirely.
The role of a judge involves not just interpreting the law, but applying it with fairness, empathy, context, and moral reasoning — qualities that AI cannot replicate. Technology can assist, but not replace, the human mind in delivering justice.
Moving forward, what is needed is a balanced approach:
Stronger laws and digital forensics to detect and punish the misuse of deepfakes, and
Ethical use of AI to improve judicial efficiency without compromising human values.
The future of justice must be one where technology supports the law, not threatens it — and where human judges remain at the heart of the courtroom, aided, not replaced, by machines.
FAQs
1. What is deepfake technology?
Deepfake technology uses artificial intelligence, especially deep learning, to create fake but highly realistic audio, video, or images that can mislead people into believing false events or statements.
2. Why are deepfakes a threat to the legal system?
Deepfakes can be used to create fake evidence, impersonate witnesses or judges, and mislead courts. If not detected, they can lead to wrongful convictions or acquittals, violating the principles of fair trial and justice.
3. Is there any law in India that deals with deepfakes specifically?
No, India does not yet have a dedicated law on deepfakes. Existing provisions under the IT Act, IPC, and laws on defamation or forgery are used, but they do not directly address the unique challenges posed by deepfakes.
4. Can AI replace human judges in courts?
AI can support judges by helping with research, case management, and analytics. However, it cannot replace human judgment, empathy, or the ethical reasoning required in legal decisions.
5. Has any country started using AI in courts?
Yes, countries like China and Estonia have experimented with AI-assisted tools in judicial systems. However, in all cases, human judges still make the final decision.
6. What steps can be taken to regulate deepfakes in India?
India needs:
A specific law on synthetic media or deepfakes,
Stronger digital forensics infrastructure,
Awareness campaigns, and
Global cooperation to trace and punish creators of harmful deepfake content.
