
DEEPFAKES AND THE LAW: ADDRESSING THE THREAT OF SYNTHETIC MEDIA

Author: Dewanshi Bhatt, Bennett University

To the Point
Digital communication has taken on an unnerving new dimension with the rise of deepfakes, a portmanteau of “deep learning” and “fake”. Using sophisticated artificial intelligence (AI), especially Generative Adversarial Networks (GANs), deepfakes can convincingly replace a person’s face, voice, or mannerisms in images, videos, and audio recordings, frequently without the subject’s knowledge or consent. What began as a creative AI experiment has developed into a technology with broad potential for both innovation and harm.
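For readers unfamiliar with the underlying mechanics, the following is a minimal, illustrative sketch in Python (using the PyTorch library) of the adversarial training loop at the heart of a GAN: a generator learns to produce data that a discriminator cannot tell apart from real samples. The network sizes and the random stand-in data are assumptions chosen for illustration, not a real deepfake pipeline.

import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64  # placeholder sizes, not from any real system

generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

for step in range(1000):
    real = torch.randn(32, data_dim).tanh()        # stand-in for real media
    fake = generator(torch.randn(32, latent_dim))  # synthetic samples

    # Discriminator step: learn to label real as 1 and fake as 0.
    d_loss = (loss_fn(discriminator(real), torch.ones(32, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(32, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: learn to make the discriminator call fakes real (1).
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

The key intuition is the arms race: every improvement in the discriminator forces the generator to produce more realistic output, which is why mature GANs yield media that both humans and courts struggle to distinguish from the genuine article.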
Deepfakes call into question core ideas of identity, truth, and responsibility. Character assassination, financial fraud, non-consensual pornography, and political disinformation are just a few of the ways in which they can be abused. Fake videos of political figures making untrue claims, for example, have the potential to deceive the public and influence democratic outcomes. Similarly, women have been disproportionately targeted by deepfake pornography, which raises serious issues of consent, digital privacy, and bodily autonomy.
Deepfakes threaten public trust and individual rights in India, where social media use and digital access are growing rapidly. Regrettably, there is no specific law that deals with artificial intelligence-generated disinformation or synthetic media. Victims frequently have to rely on a combination of constitutional rights, the Information Technology (IT) Act, and the Indian Penal Code (IPC). This disjointed legal approach has fallen short in both prevention and redress.
Furthermore, deepfake technology is developing faster than legislation. Traditional evidentiary systems’ inability to distinguish authentic from altered content could result in miscarriages of justice. Verifying digital evidence that may be fabricated is becoming an increasingly pressing problem for courts, law enforcement, and forensic specialists.
Therefore, confronting the legal ramifications of deepfakes entails not just penalizing abuse but also establishing technological protections, raising awareness, and acknowledging the limitations of existing legal tools. A forward-thinking legal framework must strike a balance between safeguarding constitutional and human rights, including freedom of speech, privacy, and dignity, and technological innovation.


Use of Legal Jargon
Mens rea: the intent or guilty mental state of the person producing a deepfake.
Prima facie: evident on its face, such as a video that has obviously been altered to spread false information.
Right to privacy: per the ruling in K.S. Puttaswamy v. Union of India, a fundamental right guaranteed by Article 21 of the Constitution.
Defamation: a civil wrong under tort law and a criminal offence under Section 499 of the IPC.
Cybercrime: violations of the IT Act involving the use of computers or networks.
Obscenity: under Section 67 of the IT Act, the publication or transmission of electronic content that offends decency and morality.
Res ipsa loquitur: “the thing speaks for itself,” especially apt where the media is blatantly manipulated.

The Proof
Artificial intelligence, specifically Generative Adversarial Networks (GANs), enables the manipulation of audio, video, and images to generate hyper-realistic digital fabrications known as “deepfakes.” Thanks to easy-to-use programs such as DeepFaceLab, Zao, and Reface, these synthetic media technologies are now widely accessible, allowing even non-technical users to create realistic-looking synthetic content. The risk of abuse is vast and concerning. In India, deepfakes have already been used to produce multilingual spoof videos of political figures during elections, deceiving voters and compromising the integrity of democratic processes. Globally, a deepfake of Ukrainian President Zelenskyy demanding surrender highlighted the technology’s potential in military and psychological operations.
Beyond politics, deepfakes are frequently employed in cybercrimes, especially non-consensual pornographic content targeting women, causing serious psychological anguish and reputational damage. They have also surfaced in financial crimes; in one prominent instance in the UK, cybercriminals used a deepfake voice to pose as a CEO and deceive an employee into transferring €220,000. The Indian legal system has no specific regulation addressing such synthetic media. Instead, victims are forced to rely on provisions of the IT Act and the IPC, which do not fully address the distinctive characteristics of deepfakes. This demonstrates the pressing need for legal reform to keep pace with the rapid advancement of technology.


Abstract
Deepfakes, AI-generated synthetic media that can replicate human likeness with remarkable accuracy, are a product of artificial intelligence’s rapid development. Although this technology has promising applications in areas such as accessibility, education, and filmmaking, its abuse has grown into a serious problem. Deepfakes have been weaponized to disseminate false information, commit financial fraud, infringe on personal privacy, and damage reputations, especially those of public figures and women. As the distinction between genuine and fake becomes increasingly hazy, deepfakes pose a major threat to democratic institutions, individual dignity, and the reliability of evidence in court.
This article critically examines the ramifications of deepfakes for the Indian legal system. It assesses whether the current statutory frameworks, such as the Indian Penal Code, 1860 and the Information Technology Act, 2000, can handle these new challenges, or whether a specific legal response is necessary. It also compares international regulatory approaches to this digital menace and examines seminal rulings on privacy, freedom of speech, and technological abuse. By identifying legal gaps and suggesting workable reforms, it highlights the need for a balanced approach that upholds individual rights while promoting technological progress. The objective is to ensure that legal frameworks evolve in tandem with AI so as to preserve justice in a technology-driven society.


Case Laws
1. Justice K.S. Puttaswamy (Retd.) v. Union of India (2017)
In this landmark decision, a nine-judge bench of the Supreme Court recognized the right to privacy as a fundamental right guaranteed by Article 21 of the Constitution. The Court emphasized that privacy includes bodily integrity, personal autonomy, and informational privacy. The decision is crucial when discussing deepfakes: non-consensual synthetic videos, particularly those containing intimate material or an unapproved likeness, directly violate a person’s informational privacy. Deepfakes frequently entail the unauthorized use of a person’s voice or face, which falls squarely within the right to privacy established by this ruling.
2. Shreya Singhal v. Union of India (2015)
The Supreme Court struck down Section 66A of the IT Act, 2000 in this case because it was vague and infringed the right to free speech and expression. Even as it championed free expression in the digital age, the ruling emphasized the value of precise, well-defined legislation to regulate online content. The case illustrates how challenging it is to strike a balance between curbing harmful content, such as deepfakes, and preserving free expression. Following this ruling, any future law governing deepfakes should be carefully crafted so as not to violate fundamental rights.
3. Aveek Sarkar v. State of West Bengal (2014)
This case clarified the Indian legal definition of obscenity. The Court held that content must be evaluated from the viewpoint of the average person, applying contemporary community standards: not merely by how explicit it is, but by whether it tends to deprave and corrupt its likely audience or panders to prurient interests. This ruling becomes pertinent to deepfakes when sexually explicit or synthetic pornographic content is shared without permission. If deepfake pornography meets the obscenity threshold established in this case, it may be punishable under Sections 67 and 67A of the IT Act.
4. Khushboo v. Kanniammal (2010)
The Supreme Court upheld the principle that, absent a clear incitement to violence or public disorder, voicing divergent opinions is not illegal. In the context of deepfakes, this case raises difficult questions about how to differentiate damaging synthetic content from protected expression. While comedy and parody may enjoy the protection of Article 19(1)(a), deepfakes meant to deceive or defame, especially where malicious intent (mens rea) can be demonstrated, may not.


Conclusion
The emergence of deepfake technology is a double-edged sword: although it holds enormous potential for advancements in accessibility, education, film, and digital creativity, it also poses significant threats to national security, democratic integrity, privacy, dignity, and reputation. Deepfakes can fabricate convincing audio-visual content, which puts the credibility of evidence in court proceedings at risk, erodes confidence in digital media, and makes it difficult to distinguish fact from fiction. The law must evolve alongside the technology.
In the absence of a specialized legal framework governing deepfakes, victims in India are forced to rely on antiquated and disjointed provisions of the Information Technology Act and the Indian Penal Code. While these rules provide certain partial remedies, such as penalties for cyber defamation, obscenity, and privacy infringement, they are neither comprehensive nor specifically designed to handle the particular difficulties presented by synthetic media. Furthermore, law enforcement agencies frequently lack the technical know-how or legal clarity necessary to successfully prosecute such crimes, which delays justice or allows offenders to go free.
To combat this growing threat, India should consider passing specific legislation on the abuse of artificial intelligence and synthetic media. Such legislation should include safeguards such as obligatory watermarking, content authenticity verification, fast-track courts for digital harms, and severe penalties for creating deepfakes with malicious intent. To ensure that legal reforms are socially conscious, constitutionally sound, and technically feasible, technologists, legal experts, civil society, and digital rights advocates should all be consulted during the drafting process. Digital literacy initiatives and public awareness campaigns are also crucial to help users recognize and report synthetic content.
In the end, preventing deepfakes necessitates a multifaceted strategy that combines international collaboration, ethical AI development, education, and legislation. The only way to guarantee that AI’s positive effects are maximized while shielding people from its negative effects is through a strong, progressive legal structure. Now is the moment to take action, before the age of fabricated deceit claims the life of truth itself.

FAQs
Q1. How does one define a deepfake?
A deepfake is synthetic media, typically audio or video, that employs artificial intelligence to produce incredibly lifelike manipulations, making individuals appear to say or do things they never did.

Q2. Is it unlawful to create a deepfake in India?
Deepfakes are not specifically illegal under any law. However, existing provisions such as Sections 66E (violation of privacy) and 67 (obscenity) of the IT Act and Section 499 (defamation) of the IPC can be invoked to prosecute such activities.

Q3. Is it possible for victims of deepfakes to file a lawsuit?
Indeed. Under current legislation, victims can report instances of cyber defamation, privacy violations, and the publication of pornographic material. Civil law remedies (damages) are also available.

Q4. How is deepfake regulation being approached internationally?
The EU’s AI Act addresses synthetic content; in the US, federal bills such as the DEEPFAKES Accountability Act have been proposed, and several states have enacted their own deepfake laws. China requires AI-generated media to be labelled. These are models that India can draw on.
Q5. What are some ways to control deepfake technology without impeding innovation?
A balanced strategy combines regulation of harmful uses, ethical artificial intelligence design, and continued advancement in AI detection technologies that confirm authenticity.
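As a purely illustrative sketch of the detection side, the Python (PyTorch) snippet below shows the standard supervised approach: train a binary classifier on frames labelled real or fake. All shapes and the random stand-in data are assumptions for illustration; production detectors use far larger models and curated datasets.

import torch
import torch.nn as nn

# A tiny CNN that labels 64x64 RGB frames: 0 = real, 1 = fake.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 1),  # output logit: > 0 suggests "fake"
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Random tensors stand in for a labelled dataset of real and fake frames.
frames = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, 2, (8, 1)).float()

for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(model(frames), labels)
    loss.backward()
    opt.step()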
Q6. Are deepfakes admissible as proof in court?
Yes, but with caution. Electronic evidence must be certified under Section 65B of the Indian Evidence Act, 1872 to be admissible. Because manipulated videos can look authentic, deepfakes complicate evidentiary standards. This raises questions of admissibility, chain of custody, and the need for sophisticated forensic tools to confirm authenticity.
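In practice, preserving integrity often starts with cryptographic hashing at the moment evidence is collected. The snippet below is a minimal sketch, using only the Python standard library, of fixing a file’s SHA-256 fingerprint; the filename is a placeholder. Note that a hash proves the file has not been altered since it was hashed, not that the content was genuine in the first place.

import hashlib

def file_sha256(path: str) -> str:
    # Read the file in chunks so even very large videos fit in memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# "evidence.mp4" is a placeholder filename. Recording this digest at seizure
# lets an examiner later show the file was not altered: changing even one
# bit of the file yields a completely different digest.
print(file_sha256("evidence.mp4"))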
Q7. Are deepfakes covered by any international conventions or treaties?
There is currently no global convention on deepfakes. Nonetheless, international frameworks such as the EU AI Act and the Budapest Convention on Cybercrime provide certain guidelines for controlling harmful AI use. These frameworks place a strong emphasis on international collaboration, which is essential given the cross-border nature of cybercrime.
Q8. How are tech firms dealing with the issue of deepfakes?
Major platforms such as Google, TikTok, and Meta (Facebook) now flag or remove deepfakes using AI-detection algorithms. Some have introduced policies mandating the labelling of synthetic content or prohibiting deceptive synthetic media. Enforcement, however, remains inconsistent, and proactive regulation is still evolving.
Q9. What penalties can creating or distributing a deepfake attract in India?
Although there is no specific legislation, Section 499 of the IPC (defamation) and Sections 67 (obscene content) and 66E (violation of privacy) of the IT Act all carry penalties. Depending on the offence, punishments range from fines to three to five years’ imprisonment.
Q10. How can people guard against the misuse of deepfakes?
People should monitor their online presence closely, refrain from posting private or sensitive information, and report any suspected abuse to cybercrime units. Watermarking original content, tightening privacy settings, and using reverse image search tools can all reduce the chance of being targeted, as the sketch below illustrates for watermarking.
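As one concrete illustration of the watermarking suggestion, the sketch below uses Python with the Pillow imaging library to stamp a visible text mark on an image before it is shared; the filenames and the mark text are placeholders. A visible watermark deters casual reuse but will not stop a determined forger, so it should be treated as one layer of protection among several.

from PIL import Image, ImageDraw

# "original.jpg" and "watermarked.jpg" are placeholder filenames.
image = Image.open("original.jpg").convert("RGBA")
overlay = Image.new("RGBA", image.size, (0, 0, 0, 0))
draw = ImageDraw.Draw(overlay)

# Stamp a semi-transparent text mark near the lower-left corner.
draw.text((10, image.height - 30), "(c) owner - do not reuse",
          fill=(255, 255, 255, 160))

Image.alpha_composite(image, overlay).convert("RGB").save("watermarked.jpg")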

References

1. Information Technology Act, 2000 (Act No. 21 of 2000).

2. Indian Penal Code, 1860 (Act No. 45 of 1860).

3. Justice K.S. Puttaswamy (Retd.) v. Union of India, Writ Petition (Civil) No. 494 of 2012.

4. Shreya Singhal v. Union of India, 2015 SCC OnLine SC 248.

5. Aveek Sarkar v. State of West Bengal, (2014) 4 SCC 257 : AIR 2014 SC 1495.

6. Khushboo v. Kanniammal, (2010) 5 SCC 600.
