
Deepfake Politics in India: Are Our Election Laws Ready for 2026?


Author: Tanishka Shakya, Symbiosis Law School

To The Point


Deepfake technology (AI-generated images, videos, and audio that can imitate politicians, alter their remarks, and spread fabricated statements) is drastically changing Indian politics. These synthetic manipulations are already shaping voter perception and reshaping political communication as the 2026 elections draw near. As a law student, it is evident to me that India’s legal framework has not kept pace with its digital reality. Despite changes to cyber and criminal legislation, deepfakes continue to operate in a risky grey area. Deepfakes have already affected elections, so the debate is no longer whether they can. The real question is whether India’s election laws are equipped to deal with this unprecedented technological threat.

Abstract


Deepfake technology is one of the most sophisticated and harmful forms of modern misinformation, and it poses unprecedented threats to India’s electoral democracy. Although India has modernised its criminal law through the Bharatiya Nyaya Sanhita, 2023 (BNS) and reinforced parts of its cyber law through the Information Technology Act, 2000, neither framework explicitly acknowledges, defines, or governs AI-generated political manipulation. The gap between technological misuse and legal coverage is widening as deepfakes are used to manipulate political speeches, fabricate communal narratives, and influence voter behaviour. This article examines the weaponisation of deepfakes in Indian politics, the key gaps in existing legislation, and the urgent need for AI-specific legal reform before the 2026 election cycle.

Introduction


As a law student paying close attention to India’s digital environment, I find the deepfake crisis highly personal. Almost every week I come across manipulated videos circulating on WhatsApp, X, or Instagram, and what bothers me most is how readily the people around me believe them. Once ordinary people can no longer distinguish truth from digitally created fiction, the core of democracy begins to shake.
The 2024 Lok Sabha elections saw a noticeable surge in AI-generated political videos in India. If this is the situation today, then by 2026, when AI tools will be even cheaper, faster, and more accessible, the danger may be far greater. This article is therefore my effort to connect law, technology, and democratic responsibility, and to ask whether India is truly equipped to confront this new form of synthetic political misinformation.
Understanding Deepfakes in Simple Words:
A deepfake is a video, image, or audio file created with machine learning tools so that it appears entirely genuine. In politics, these synthetic creations typically impersonate candidates, attribute fabricated statements to them, or depict leaders doing or saying things they never did.


Political deepfakes are typically created to:
Mislead voters,
Harm the reputation of a candidate,
Incite hatred or communal tension, or
Manipulate elections through falsehoods.
This is where the foundational criminal-law concepts of liability, mens rea (intention) and actus reus (the wrongful act), come into play.

Use of Legal Jargon


Fundamentally, deepfakes involve intentional deception. Making or sharing a deepfake in order to mislead the electorate can be classified as misrepresentation, identity fraud, undue influence, or even public mischief under Indian law.


In simple terms:


Mens rea: The intention to deceive the electorate.
Actus reus: The production or distribution of manipulated material.
False attribution: Making it appear that a leader said something they never said.
Public mischief: Material with the potential to disturb public order.
Undue influence: An attempt to restrict or sway a voter’s free choice.
Currently, deepfake-related conduct could technically be covered by:
“Information Technology Act, 2000”
“Bharatiya Nyaya Sanhita, 2023 (BNS)”
“Representation of the People Act, 1951 (RPA).”
Nevertheless, none of these laws explicitly refers to deepfakes or artificial intelligence. This legislative silence creates a worrying loophole, particularly because deepfake tools are evolving far faster than India’s legal response to them.

The Proof:


During the 2024 Lok Sabha elections, deepfake videos spread across social media at an alarming rate. Some clips showed political leaders apparently endorsing rival parties; in others, speeches were edited or remarks fabricated outright. These incidents exposed two uncomfortable facts: first, India is extremely vulnerable to such attacks because of low digital literacy and very rapid content-sharing infrastructure; and second, our legal framework is so outdated that it may be unable to recognise and penalise AI-based fakery without a comprehensive technical examination.
Legal Landscape in India: What Exists and What Doesn’t:
The most concerning fact is that India has no specific or dedicated deepfake law. Instead, law enforcement relies on pre-existing statutes that fall short when it comes to AI-generated content.
Under the Information Technology Act, 2000, deepfakes can be prosecuted under offences such as impersonation (Section 66D) and obscene or morphed content (Section 67), while Section 69A allows the blocking of harmful material. However, these provisions were drafted long before AI-driven manipulation was possible. The IT Act is largely reactive, addressing conventional cybercrimes rather than contemporary electoral misinformation. Applying it to deepfakes means stretching a two-decade-old legal mould well beyond its intended coverage.
In 2023, the Bharatiya Nyaya Sanhita (BNS) replaced the IPC, yet it makes no reference to deepfakes or synthetic media. Offences such as defamation, forgery, fraud, identity manipulation, and public mischief may be applied to political deepfakes, but only in a broad sense. A fake that provokes violence can be treated as public mischief, but the law takes no clear position on AI-driven falsification. The BNS missed a significant opportunity to introduce terms such as AI impersonation or synthetic content.
The Representation of the People Act, 1951 (RPA) seeks to protect the sanctity of elections by penalising false statements and undue influence. In principle, sharing a deepfake to sway voter behaviour would amount to undue influence. In practice, however, such provisions are extremely difficult to enforce because deepfakes are anonymous, spread quickly, and are hard to trace. The RPA was written in a pre-digital era, when AI capable of generating fake speech did not exist.
Why Are Deepfakes in Indian Elections Risky?
India’s digital landscape, with more than 700 million internet users, the ubiquity of social media platforms, high levels of political polarisation, and limited awareness of AI tools, is the ideal environment for synthetic misinformation. Deepfakes can fabricate speeches, manufacture communal rhetoric, show leaders endorsing rivals, or trigger unrest, all within minutes. Most voters do not verify sources; a few seconds of manipulated video can shift public opinion before fact-checkers can respond. By the time the truth emerges, it is usually too late.
Election Commission Efforts: Positive but Limited
The Election Commission of India (ECI) has asked political parties to watermark AI-generated content, disclose when material is synthetic, quickly remove misleading videos, and improve fact-checking processes. While these actions show awareness, advisories do not carry legal weight. Without legal support under the RPA or a new regulation specifically for elections, the ECI cannot enforce penalties or guarantee compliance during intense campaigns.

Case Laws


Although there is no judgment directly on deepfakes, India has had a number of landmark cases that give an idea as to how the judiciary looks at misinformation, public order, and electoral integrity.


Shreya Singhal v. Union of India (2015)
While the Supreme Court did strike down Section 66A, it stated that the State can regulate misinformation that causes real harm. Deepfakes capable of disturbing public order squarely fall within this principle.


Tehseen Poonawalla v. Union of India (2018)
The Court condemned the spread of fake news that triggered mob violence. AI-generated communal videos would attract the same judicial concern because they magnify misinformation through synthetic realism.


Manoj Narula v. Union of India (2014)
The Court placed a strong focus on upholding integrity in public life. Deepfakes compromise this constitutional principle and taint the electoral process.


Bharat Kumar v. ECI (2013)
The Election Commission’s obligation to guarantee free and fair elections was upheld by the Court. In order to achieve this duty, deepfake regulation becomes crucial.

Interpretation: Even though the courts have not specifically addressed deepfakes, these rulings acknowledge the perils of false information and encourage proactive regulatory action. This judicial approach lays the groundwork for deepfake-specific rules.


Conclusion


India’s electoral integrity is seriously threatened by deepfakes, and our current legal system is ill-prepared to deal with AI-generated fraud. It lacks a rapid takedown mechanism, a legal definition of “deepfake”, and a specific offence for AI impersonation, while platform accountability remains weak and the Election Commission’s powers remain limited; as a result, the law cannot keep up with the speed and impact of synthetic misinformation.
To safeguard the 2026 elections, India must quickly enact a Deepfake Regulation Act, amend the RPA and BNS to recognise AI-driven manipulation, require disclosure of AI-generated content, enable swift cyber takedowns, and invest in digital literacy. Only a modern, AI-ready legal framework can protect voters from false information and preserve the legitimacy of India’s democracy.

FAQs


1. What is a deepfake in politics?
A video, image, or voice produced by AI that mimics a real person in order to deceive voters.


2. Are deepfakes prohibited in India?
Not directly; however, related offences such as public mischief, forgery, identity fraud, and impersonation can apply.


3. Can deepfakes affect elections?
Indeed. A single deepfake that goes viral has the power to change public perception or increase conflict within a community.


4. Which laws are in effect right now?
The IT Act of 2000; Bharatiya Nyaya Sanhita, 2023; Representation of the People Act, 1951.


5. What changes are required?
Platform liability, AI-specific election regulations, special deepfake legislation, and expanded Election Commission authority.
