Author: Revanth Roy Chelluboyina, ICFAI Law School, Dehradun
I. To the Point
The rapid growth of AI-driven synthetic media, commonly called "deepfakes," poses a significant challenge to legal systems around the globe. The technology offers genuine creative opportunity, yet it also inflicts serious harm on society: it violates individuals' privacy rights, enables misleading advertising and financial fraud, and compromises the integrity of democracy through manipulated news and political advertisements. Countries are struggling to reconcile the security risks of this technology with the constitutional protections afforded to freedom of expression, while still promoting innovation and development in AI. Navigating the conflict between protecting the public interest and encouraging innovation is essential when establishing regulations governing this technology. This publication explores the complex legal and ethical issues that different jurisdictions face in legislating on deepfake technology. It reviews current approaches, such as mandatory labelling of deepfakes, liability for the platforms that host them, and criminal offences targeting this form of digital expression, in order to assess how to craft meaningful laws that protect the rights of citizens and institutions while ensuring that the methods of creating deepfakes are adequately defined.
II. Use of Legal Jargon: Challenges in the Regulation of Deepfakes
Regulating deepfakes presents difficulties that stem not only from the technology itself but also from the challenge of drafting laws that can actually be enforced. AI advances so quickly that regulatory authorities struggle to keep pace with both the technology and the enforcement of the rules governing it. Some of these problems are outlined below.
The Pacing Problem: Rapid advances in AI, particularly in Generative Adversarial Networks (GANs), are driving exponential improvement in the quality of deepfake videos. Detection methods that work by identifying inconsistencies between the expected and actual characteristics of a piece of media generally lose effectiveness as developers refine their generators to eliminate precisely those inconsistencies, as the sketch below illustrates.
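For readers unfamiliar with the underlying mechanism, the following is a minimal, illustrative sketch of the adversarial feedback loop that produces this dynamic. It assumes Python with the PyTorch library and uses toy two-dimensional data in place of real images; the layer sizes, learning rates, and step counts are arbitrary placeholders rather than values from any real deepfake system.

```python
import torch
import torch.nn as nn

# Toy generator and discriminator; real deepfake models are vastly larger.
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
discriminator = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(64, 2) + 2.0         # stand-in for authentic media
    fake = generator(torch.randn(64, 8))    # stand-in for synthetic media

    # Detector's move: learn to separate real samples from current fakes.
    d_opt.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    d_opt.step()

    # Generator's counter-move: adjust output until the detector is fooled.
    # Each detector improvement is immediately exploited -- the pacing problem.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()
```

The structural point, not the particular numbers, is what matters: detection and generation sit inside a single optimisation loop, so every improvement on the detection side is immediately converted into training signal for the generator.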
The Inability of Humans to Detect and Verify: Given the pace of technological development, a human being can no longer reliably distinguish genuine media from a sophisticated counterfeit by visual inspection alone.
Automated methods now exist that assist in identifying counterfeit or fake content, but the pace at which the technology advances means that manipulation techniques improve constantly, stripping away the ability to verify authenticity at scale and within a reasonable time frame. Because of the resulting "liar's dividend," which allows even valid evidence to be dismissed as fake, overall confidence in digital media has been diminished. The deliberately simplified screening heuristic sketched below shows why any fixed detection rule of this kind decays.
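As a concrete illustration, assuming Python with NumPy, the toy screen below flags an image whose high-frequency spectral energy is anomalously large, a cue that early GAN outputs were often reported to exhibit. The band size and the 0.5 threshold are invented placeholders, not parameters of any real detector; real systems are trained classifiers rather than hand-set rules.

```python
import numpy as np

def high_freq_ratio(image: np.ndarray) -> float:
    """Fraction of (mean-removed) spectral energy outside the central low-frequency band."""
    centered = image - image.mean()          # remove the DC term so it cannot dominate
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(centered))) ** 2
    h, w = spectrum.shape
    ch, cw = h // 4, w // 4                  # central band covers one quarter of the area
    low = spectrum[h//2 - ch:h//2 + ch, w//2 - cw:w//2 + cw].sum()
    return 1.0 - low / spectrum.sum()

def flag_suspect(image: np.ndarray, threshold: float = 0.5) -> bool:
    # Flags images whose high-frequency energy exceeds a fixed threshold.
    # The 0.5 cut-off is a placeholder; real detectors learn their decision rule.
    return high_freq_ratio(image) > threshold

# Noise-like content has most of its energy at high frequencies and is flagged;
# a smooth gradient concentrates energy at low frequencies and is not.
noisy = np.random.rand(128, 128)
smooth = np.tile(np.linspace(0.0, 1.0, 128), (128, 1))
print(flag_suspect(noisy), flag_suspect(smooth))   # True False (typically)
```

Because the rule keys on one specific artifact, a generator optimised against it (as in the adversarial loop sketched earlier) will simply learn to suppress that artifact, which is precisely why both human and automated verification fall behind.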
Balancing the Right to Free Speech vs. Preventing Harm: One of the key legal challenges is to craft regulations that prohibit the production of malicious deepfake content while at the same time protecting individuals' rights to use deepfakes for satire, parody, education, and art.
III. The Proof: Constitutional Tension between Synthetic Media and National Stability
The rapid rise of hyper-realistic deepfakes creates a serious conflict between the constitutional right of individuals to free expression and the government's duty to protect national security. The legal status of deepfakes, along with other forms of synthetic speech, is difficult to determine because they occupy a definitional "grey area" within the constitutional framework. Article 19(1)(a) of the Constitution of India guarantees freedom of speech and expression, subject only to the reasonable restrictions enumerated in Article 19(2), and does not exclude speech merely because it is deemed "fake" or "offensive." As the Supreme Court held in Shreya Singhal v. Union of India, speech may be restricted only on narrow grounds such as incitement, and the threshold for criminalising expression is deliberately high. However, where deepfakes are made with the explicit intent of inciting communal violence, undermining the electoral process, or impersonating high-ranking military officials, they rise above protected speech and become direct threats to public order and state sovereignty. The "Synthetic Security Paradox" captures the dilemma: a blanket prohibition on deepfakes creates a climate of fear that chills legitimate expression, while fraudulent AI-generated content spreads far faster than any substantial verification can be completed. This calls for a move towards harm-based regulation: applying a proportionality test, restricting regulation to forms of intentional harm, and establishing proper protections for creativity and free expression while continuing to combat malicious conduct such as incitement to violence or fraud.
Abstract
The regulation of deepfakes marks a turning point in modern jurisprudence, where technology's "Pacing Problem" collides directly with the established standard of proportionality. As AI-based synthetic media becomes indistinguishable from the real thing, the legal system confronts both the potential for significant harm (such as election tampering or financial fraud) and a "Liar's Dividend" that erodes public confidence by permitting people to dismiss bona fide evidence as counterfeit. In the context of India, the tension between free expression and the need for a robust, harm-based regulatory framework is acute. Article 19(1)(a) and the decision in Shreya Singhal v. Union of India afford considerable latitude to speakers, yet the State's duty to maintain public order means its interests must be pursued within a framework focused on demonstrable harm. The shift towards mandatory disclosure, platform liability for disseminating harmful material, and verification of digital provenance under the IT Rules addresses this need through greater transparency rather than outright censorship. Whether these laws succeed will depend on their ability to remain "technology neutral" while clearly shielding satirical and political speech from being construed as a threat to national security, which would create a "chilling effect" on the exercise of constitutional rights.
Case Laws
Shreya Singhal v. Union of India (2015): the Supreme Court of India struck down Section 66A of the Information Technology Act, 2000, which permitted arrests for "offensive" online posts. The Court declared the provision unconstitutional because it was vaguely worded and had a chilling effect on the fundamental right to free speech under Article 19(1)(a). The ruling made it clear that speech can be limited only on narrow, specific grounds such as incitement to violence or threats to public order, not for being merely annoying or inconvenient. This case remains an important precedent for protecting digital expression from arbitrary government censorship.
Conclusion
As the world evolves into a digital economy, the line between digital tools of expression and our volatile political climate grows ever thinner. Mitigating the "Synthetic Security Paradox" within the context of the Digital India initiative requires a regulatory framework that avoids both extremes of "blanket freedom" and "blanket censorship," and a technology-neutral approach to digital regulation is needed now more than ever. That framework must account both for the chilling effect of over-broad restrictions and for the serious threat deepfakes pose to the very underpinnings of democracy and its technological infrastructure. The rule of law should serve as a dam, or filter: allowing the creation and distribution of satire while keeping out the toxins produced through digital media. We must ensure that creative freedom in digital media is not compromised, while safeguarding citizens' rights to privacy and protecting elections from its adverse effects.
FAQs
1. What is the Liar's Dividend in the context of deepfake regulation?
The Liar’s Dividend is a phenomenon where the mere existence of deepfakes allows individuals to escape accountability by falsely claiming that authentic, incriminating evidence is actually a “fake” or “AI-generated” manipulation. This creates a state of informational uncertainty that can undermine public trust in even genuine media.
2. How does the ruling in Shreya Singhal v. Union of India protect deepfake creators?
The Supreme Court ruling established that speech cannot be restricted for being merely “offensive” or “annoying”; it can only be limited on narrow grounds like inciting violence or threatening public order. This protects creators of deepfakes used for satire, parody, or art from arbitrary censorship, as long as the content does not cross into specifically prohibited harms.
3. Does the law distinguish between a satirical deepfake and a malicious deepfake?
Yes. Under frameworks like India’s IT Rules and the proportionality test, regulation is increasingly focused on intentional harm. While satire and parody are generally protected as free expression, deepfakes that are created with the specific intent to commit fraud, impersonate officials for social disruption, or incite violence are treated as criminal acts.