
Deepfakes and Democratic Integrity: Legal Challenges in the Age of Synthetic Media

Author: Swaraj Pandey, Amity University, Lucknow (AUUP)


Abstract
This paper analyses the threat that AI-generated content, so-called deepfake material, poses to Indian elections and voters, and assesses whether the Indian legal and regulatory framework is sufficient to address that risk. Deepfakes, fabricated videos, images, or audio that convincingly mimic real persons, have already surfaced in the 2024 general election (for example, “resurrected” video clips of Jayalalithaa and Karunanidhi in Tamil Nadu, and falsified endorsements by Bollywood celebrities). These examples indicate that false media can easily deceive voters and undermine trust in the democratic process. I review India’s existing laws (the IT Act 2000 and its Rules, IPC provisions, the Representation of the People Act, etc.) and recent regulatory measures (Election Commission advisories on deepfakes) and find that they offer only partial cover. Notably, courts have begun to protect personality and publicity rights in deepfake cases (e.g., Arijit Singh v. Codible Ventures), but there is no comprehensive statute on synthetic media. Comparative perspectives (such as the EU’s AI Act and U.S. state “right of publicity” laws) suggest models for reform. I argue that India needs clearer rules, possibly a dedicated law on synthetic media, coupled with platform accountability and public awareness initiatives.


Introduction
Deepfake technology uses artificial intelligence to create fabricated audio, video, or images that closely resemble a real, living person. In theory, such synthetic media can be harmless (e.g., entertainment), but in practice deepfakes have been used to deceive voters, tarnish reputations, and distort political discourse. In India’s recent elections, for example, political actors used the technology to reanimate deceased leaders and to disseminate disparaging AI-generated messages. Popular Tamil Nadu leaders Jayalalithaa (d. 2016) and M. Karunanidhi (d. 2018) were cloned into campaign videos: J. Jayalalithaa’s digital avatar criticized the incumbent state government, while an AI rendition of M. Karunanidhi praised his son’s leadership. Likewise, falsified videos of the Bollywood actors Ranveer Singh and Aamir Khan endorsing opposition candidates circulated, and an AI-generated audio clip falsely announced that Rahul Gandhi had quit his party.
In this article I analyse the problem and propose how India should respond. First, I describe the impact of synthetic media on Indian elections, drawing on recent incidents and studies. Next, I examine the existing legal framework in India: statutes, election rules and regulatory advisories that could apply to deepfakes, noting the gaps. I then review relevant case law and judicial responses (including new Indian cases on personality rights and the Delhi High Court’s actions). Finally, I discuss the enforcement challenges and policy implications – for instance, the difficulties in detection, free speech considerations, and institutional capacity – before offering recommendations. Throughout, I use legal terminology (e.g., persona rights, intermediary liability, forgery, electoral offences) but strive to explain concepts clearly.


Deepfakes and Elections: Emerging Threats
AI-driven synthetic media has already been used to influence Indian voters. In the 2024 Lok Sabha elections, media reports documented numerous deepfake campaigns:


Resurrected leaders. Tamil Nadu saw “ghost appearances” of iconic but deceased leaders. J. Jayalalithaa appeared via AI in a voice message denouncing the incumbent party, and a digital version of former CM Karunanidhi gave speeches praising his son, the current CM. These videos, approved by the parties themselves, aimed to sway voters by invoking nostalgia. The AI avatar of Muthuvel Karunanidhi, for instance, reproduced his signature style to commend the party’s achievements. Observers noted that such “resurrection” clips grab attention and are cheaper than organizing rallies.


Cloned celebrity endorsements. Deepfake videos portrayed Bollywood stars as advocates for political causes. For example, AI-generated clips circulated of actors Ranveer Singh and Aamir Khan criticizing Prime Minister Modi and urging votes for the opposition Congress. Both clips were debunked as fakes only after going viral. Similarly, an AI-cloned audio clip purported to announce Rahul Gandhi’s resignation. Even religious imagery was misused: one report described a Muslim political figure apparently singing Hindu devotional songs (likely AI-manipulated) to mislead audiences.


Microtargeting voters. Parties deployed generative AI to personalize outreach. For instance, on WhatsApp some workers sent voters pre-recorded messages in a local dialect, but the speaker was not a human campaigner – it was an AI-generated avatar mimicking the politician’s voice. Chatbots calling constituents in candidates’ voices (labelled as AI) were also reported. These tactics gave parties the ability to “talk” to millions individually at low cost.
These concrete cases highlight the risks: deepfakes can subvert free choice and truth. Social-media platforms were flooded with millions of AI-altered videos, challenging voters’ ability to trust what they see and hear. As one analyst wryly noted, “we only tend to fact-check videos which don’t align with our preconceived notions,” so voters may unwittingly accept a convincing AI video as genuine, underscoring the vulnerability of unprepared citizens.


Indian Legal and Regulatory Framework
India has no dedicated “deepfake law,” but several existing laws and rules partly apply. In this part I describe the statutory provisions, regulations, and guidelines. (Key legal terms and jargon are explained as needed.) These include the Information Technology Act and its rules, criminal statutes under the IPC, electoral laws, and recent policy actions (such as guidelines by the Election Commission of India (ECI)). I then assess their scope and gaps.


Information Technology Act & Rules: The IT Act 2000 and its rules were designed to cover cyber offences broadly. Relevant provisions criminalize identity theft (s. 66C) and cheating by personation (s. 66D), which can cover fraudulent imitation of someone online. The Act also addresses obscenity/child pornography (which would apply to deepfake porn) and other harms. Crucially, the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules 2021 place obligations on online platforms. Intermediaries (like Facebook or Twitter) must remove “unlawful information” quickly once notified. “Unlawful information” under the Rules explicitly includes content that “deceives or misleads the addressee about the origin” or is “patently false, untrue or misleading.” While these terms were drafted broadly (targeting misinformation, hate speech, etc.), they can encompass deepfakes that mislead viewers. The Ministry of Electronics and IT (MeitY) has repeatedly issued advisories reminding platforms to counter “malicious synthetic media” and comply with takedown obligations.


Indian Penal Code (IPC): Several IPC sections may catch deepfake-related mischief. Forged documents or data (s. 463) and the use of forgery to harm reputation (s. 469) can apply. For example, a digital video claiming to quote a politician could be considered a “forged record” intended to harm that person’s reputation. The IT Minister has even equated deepfake peddling with forgery, warning that victims could file cases under the forgery provisions of the new Bharatiya Nyaya Sanhita (BNS) 2023 (which re-enacts many IPC provisions). Section 505 IPC (promoting enmity by false statements) might cover a deepfake that spreads hatred between communities. In sum, the criminal law provides some “catch-all” offences for fraud, forgery and hate speech, but courts must interpret them in context.


Election Laws: The Representation of the People Act 1951 contains election-specific offences. Notably, s. 123(4) RPA prohibits false statements about a candidate’s personal character or conduct, and s. 123(2) bars undue influence on the free exercise of electoral rights. A deepfake that fabricates a candidate saying something inflammatory could qualify as a false statement under these clauses. In practice, the ECI’s Model Code of Conduct also bars distortion of a rival’s personal life and malicious propaganda. In the 2024 elections, ECI advisories explicitly forbade the use of social media to spread “misinformation” or impersonate others. Most relevantly, in May 2024 the ECI instructed political parties that any deepfake content violating election rules must be taken down within three hours of notice.
In brief, India’s existing laws address deepfakes indirectly, via identity fraud, defamation, forgery, and similar offences, but none specifically mentions AI-synthesized content. This leaves a legal grey area. As one petition in the Delhi High Court argued, “the existing legal framework… is fairly insufficient to address the harms of deepfake technologies.” The next section explores how courts have actually handled deepfake disputes under this uncertain framework.


Judicial Responses and Case Law
Deepfakes and personality-rights disputes are emerging challenges that Indian courts must resolve. Two threads emerge: (1) protection of the publicity/personality rights of individuals whose likenesses are misused, and (2) broader public-law cases prompting the government to act.


Persona and Publicity Rights: A notable recent decision is Arijit Singh v. Codible Ventures LLP (Bombay HC, 2024). The famous singer Arijit Singh obtained an ex-parte injunction against dozens of AI platforms that were cloning his voice. The court ordered removal of all infringing content and even suspension of domain names infringing his persona. Significantly, the judge recognized that a celebrity’s voice, mannerisms, and personality traits are part of their protected persona. Drawing on prior precedents (the Karan Johar v. Indian Pride and Anil Kapoor v. Simply Life India cases), the court held that unauthorized use of these attributes may breach an artist’s legal rights. In Singh’s case, AI-generated songs and ads featuring his cloned voice were prima facie defamatory and exploitative, so relief was granted. The judgment emphasized that a “celebrity’s right of endorsement” is a source of livelihood and cannot be destroyed by unlawful deepfake merchandise. In short, while India has no general “right of publicity” statute, courts are enforcing analogous protection via trademark, contract, and personality-right doctrines. Arijit Singh illustrates the “pro-publicity” stance: even though it was only an interim order, it underscores that voice and image are legally safeguardable. (Analysts note that performers’ moral rights under the Copyright Act (s. 38B) could also apply to deepfake voice recordings, though the court did not rule on that here.)


Public Interest Litigation – Delhi High Court (2024). Deepfake issues have also been raised by public-spirited litigants. In Chaitanya Rohilla & Rajat Sharma v. Union of India (W.P.(C) 6560/2024), the petitioners sought court orders on unregulated synthetic media. In November 2024, a bench of Justice Manmohan and Justice Gedela directed urgent action. While not deciding specific complaints, the Court mandated that the central government form a multi-stakeholder Committee on Deepfakes within one week. The Committee was instructed to consider the petitioners’ suggestions and study how other jurisdictions (especially the EU) regulate synthetic media. The Court observed that “every day’s delay” in addressing deepfakes causes “immense hardship to the public” and set a three-month deadline for the report. This order shows judicial awareness of the problem’s gravity and presses the executive to frame policy on questions that remain, for now, non-justiciable.


PIL on Deepfakes (May 2024). Earlier, the Delhi HC was asked to intervene during the 2024 election campaign itself. The lawyers’ group Lawyers Voice filed a PIL (May 1, 2024) seeking directions for the ECI and the Union government to regulate deepfakes in the election. The petition cited the viral video of a Home Minister’s purported speech (later revealed as a deepfake) and argued that unchecked synthetic propaganda “threatens the very foundation of a free and fair election.” The High Court, however, refused on procedural grounds to pass interim orders during the poll process. It noted it “cannot pass such orders… in the middle of elections,” but urged the ECI to deal with the representation expeditiously. Indeed, within days the ECI issued the takedown advisory. While the PIL was ultimately dismissed after voting began, it was legally significant: it framed deepfakes as an electoral-integrity issue and publicly highlighted the inadequacy of existing laws.


Recommendations and Way Forward
Given the foregoing, I suggest several measures to safeguard elections and rights in India’s synthetic-media era. Key recommendations (building on expert proposals and my analysis) include:


Legislative Reform: Draft a Dedicated Deepfake/Synthetic Media Law. A standalone statute would clearly define offences (e.g., non-consensual creation or distribution of a person’s likeness for misinformation), set proportional penalties, and exempt permissible uses (news reporting, satire). This Act could incorporate aggravated penalties if the deepfake targets a protected group or aims to disturb elections; compensation for victims; and procedural fast-tracks (e.g., inquiry timelines). As scholars writing on SSRN suggest, a harm-based approach (tiered by severity) would balance innovation and rights. Pending this, the government’s plan to amend the IT Rules is welcome, and explicitly equating deepfake production with forgery is a good step, but a law passed by Parliament would carry greater weight and clarity.


Electoral Code Update: The Representation of the People Act should be amended to mention synthetic media explicitly in its false-statement provisions. The ECI’s Model Code might also be updated to ban AI-distorted candidate images and voices. Enforcement should include campaign-finance penalties for parties or agents violating these rules. In the interim, the 2024 advisory (the 3-hour takedown rule) should be made permanent and extended to all election stakeholders. Social media platforms working for campaigns must register and be bound by electoral law.


Institutional Mechanisms: The Delhi HC’s directive for a Deepfakes Committee is a prudent move; I recommend the committee follow through rapidly. It should include technologists, election officials, legal experts, and civil society. Outcomes could include standard forensic protocols for courts, model guidelines for law enforcement, and international cooperation (since deepfakes cross borders). Meanwhile, existing bodies like CERT-In, the I4C, and the Election Commission should coordinate routinely. For example, CERT-In’s November 2024 advisory on deepfake threats should be publicized widely among media and police.


Public Awareness and Media Literacy: Finally, democratic resilience requires an informed electorate. Government and NGOs should run campaigns illustrating how to spot deepfakes (drawing on tips from the Deepfakes Analysis Unit (DAU)) and encourage healthy scepticism of sensational content. Victims of deepfakes (especially public figures) could be offered easier access to legal remedies (perhaps via legal aid or dedicated cyber cells). These reforms, legislative, regulatory, and educational, would form a robust defence-in-depth against deepfakes. We must remember that technology will evolve. Laws should therefore be technology-neutral, focusing on harmful effects and requisite intent, not on any particular AI model. Any crackdown should be carefully worded to avoid stifling legitimate speech. For instance, consent-based AI creations (say, an authorized voice clone by a minister for campaign ads) should remain lawful. The goal is not to police creativity, but to penalize deceit.


Conclusion
In this article I have mapped the contours of India’s deepfake problem and the current legal landscape. The evidence is clear: synthetic media has already infiltrated electoral discourse, with potentially destabilizing effects. India’s existing laws, the IT Act, IPC offences, and election rules, do offer tools to sanction misuse of digital identities, but they are blunt instruments when faced with AI. Courts are trying to fill the void by extending personality and defamation doctrines to cover deepfake cases, and the Election Commission is leveraging its powers to impose quick takedowns. Yet, as scholars and judges have observed, these piecemeal measures leave “significant gaps.”
My proposal is that India take decisive steps to embed safeguards for democratic integrity in the era of AI: enacting unambiguous legislation against malicious synthetic media, improving enforcement systems, and educating the public preventively. The alternative is to scramble for justice each time a new deepfake goes viral. By regulating deepfakes before the damage is done rather than after, and by prioritizing both innovation and civil liberties, India can preserve faith in its elections and institutions.


FAQs
Q. What is a “deepfake”? A deepfake is synthetic audio, video or image content generated by AI (often using neural networks) that mimics a real person’s likeness. Unlike simple edits, deepfakes can make it seem that someone said or did something they didn’t.
Q. How do deepfakes affect elections? They can spread misinformation: for example, by showing fake endorsements, inciting communal anger, or posing as candidates. This misleads voters and undermines informed consent. In India’s recent polls, examples included AI videos of deceased leaders and altered speeches by public figures.
Q. Is making or sharing a deepfake illegal in India? Currently, India has no specific anti-deepfake law. But deepfakes can fall under existing offences (like forgery, fraud, or defamation). For instance, helping to circulate a deepfake that harms someone’s reputation could lead to charges under IPC sections such as 463/469 (forgery) or IT Act provisions for personation. Election laws can also apply if a deepfake is a false campaign statement. Still, these are indirect remedies and often require court cases.
Q. What is the Election Commission doing? The ECI has issued strict advisories: parties must remove any identified deepfake within three hours and refrain from posting misleading AI content. The EC also expects platforms to block fake accounts. However, the ECI cannot jail offenders – it can only enforce election rules (like invalidating a candidate’s nomination for breach) and work with police.
Q. What can an individual do if they’re a victim? Victims can file police complaints under cybercrime laws (the IT Act or IPC). They may also sue for defamation or for violation of their image rights in court (as Arijit Singh did). Reporting fake content to fact-checkers and the Deepfakes Analysis Unit (DAU) can help get it debunked and removed.
Q. How can one guard against deepfakes? Stay sceptical of sensational media and verify claims via reputable news sources. AI deepfakes often have small telltale flaws (unnatural blinking, lip-sync issues). Public-awareness programs (like the DAU’s WhatsApp tipline) are teaching people to spot fakes. Eventually, legal safeguards and better detection technology will be key.


Sources (Bluebook 21st ed.)
Statutes and Regulations (India)
Information Technology Act, No. 21 of 2000, INDIA CODE.
Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, Gazette of India, Part II, Sec. 3(i).
Bharatiya Nyaya Sanhita, No. 45 of 2023, INDIA CODE.


Cases
Arijit Singh v. Codible Ventures LLP (Bom. HC 2024) (interim order; case number unavailable).
Chaitanya Rohilla v. Union of India, W.P.(C) 6560/2024 (Del. HC 2024).


Academic Articles
Sommya K., Deepfakes, Democracy, and Digital Ethics in India, SSRN (2025), https://papers.ssrn.com/sol3/papers.
