Author: Anshuli Singh, student at Bharati Vidyapeeth, Pune
To the Point
Artificial Intelligence has begun reshaping the very anatomy of crime. No longer confined to traditional phishing emails or brute-force hacking, cybercriminals today deploy AI-powered tools that mimic voices, generate fake videos, and construct realistic identities, all without lifting a finger. In this shifting landscape, AI emerges as both a potent weapon and a promising defence mechanism.
This article dissects the double-edged impact of AI: how it empowers cybercriminals with new-age weapons like deepfakes and algorithmic phishing, while simultaneously offering law enforcement unprecedented tools for digital investigation. India's legal framework, still largely guided by the IT Act, 2000 (as amended in 2008) and the Bharatiya Sakshya Adhiniyam, 2023, is showing signs of strain. While reform is on the horizon, the current gap between innovation and regulation risks leaving both citizens and institutions exposed. Bridging this gap isn't just a legislative exercise; it's a constitutional imperative.
Use of Legal Jargon
Artificial Intelligence is not just a technological disruptor; it is fast becoming a modus operandi for digital offences. The mens rea (criminal intent) traditionally central to cybercrime jurisprudence becomes blurred when offences are executed autonomously by algorithms or AI models. This challenges the attribution of liability, especially in cases where no direct human command is traceable.
In AI-enabled crimes such as deepfakes, perpetrators use neural networks to fabricate highly convincing audio-visual content, shaking the foundation of evidentiary credibility and shifting the burden of proof disproportionately onto victims. Similarly, AI-generated phishing campaigns are customised based on user behaviour, bypassing conventional spam detection and rendering provisions under Sections 43 and 66C of the IT Act insufficient.
Compounding these challenges are jurisdictional hurdles. AI-driven offences often originate outside Indian territory or are routed through decentralised networks, cloud servers, and VPNs. This creates ambiguity over territorial jurisdiction, particularly under Section 75 of the IT Act, which seeks to extend the Act’s applicability to offences committed outside India by any person, if the system affected is located within India. However, enforcement remains complex due to the absence of streamlined mutual legal assistance treaties (MLATs), delays in digital evidence requisition, and lack of clarity on data sovereignty in transnational cybercrimes.
Furthermore, as AI systems evolve autonomously, questions arise on whether Indian courts can assert personal or subject-matter jurisdiction over algorithmic actions without identifiable human actors. This highlights the urgent need for legislative guidance on cross-border legal enforcement in AI crimes, a domain currently left to judicial interpretation on a case-by-case basis.
The evolving AI-cybercrime ecosystem therefore demands a recalibration of established doctrines in criminal law, evidence law, and constitutional protections, especially under Article 21 (right to life and personal liberty) and Article 14 (right to equality before law).
The Proof
Deepfakes in India: In 2023, a fake video of a regional politician making inflammatory remarks went viral, causing public unrest before being debunked. No formal legal provision exists in India to criminalise the mere creation of deepfakes unless it falls under defamation or obscenity.
AI Phishing: According to the Indian Computer Emergency Response Team (CERT-In), India witnessed over 1.4 million cyber incidents in 2023, with phishing attacks now increasingly leveraging AI to craft tailored lures based on public data and online behaviour.
Identity Theft via AI: A fintech startup in Mumbai reported losses after scammers used AI-generated Aadhaar replicas and forged KYC documents to open fraudulent accounts. Current safeguards under the Aadhaar Act and Section 66C of the IT Act are reactive and weakly enforced.
Law Enforcement Use: Indian police departments in states like Telangana and Maharashtra have adopted facial recognition and AI-based anomaly detection systems for predictive policing. While effective, they raise questions about privacy, legality, and algorithmic bias.
Legal Vacuum: The IT Act, 2000 is silent on AI, deepfakes, and algorithmic manipulation. The transition from the IT Act to a more contemporary framework under the Digital India Act remains a work in progress.
Abstract
The integration of Artificial Intelligence into cyberspace has redefined both the nature of crime and the mechanisms of its investigation. In India, the rise of AI-driven cyber threats such as deepfakes, intelligent phishing, and synthetic identity fraud is challenging conventional legal boundaries. Simultaneously, law enforcement agencies are beginning to deploy AI tools for digital forensics and predictive threat analysis, leading to new forms of evidence and enforcement models.
This article highlights how evolving technologies are exposing blind spots in India's legal and regulatory apparatus, particularly in the handling of AI-generated content, digital evidence, and data jurisdiction. The discussion navigates real-world applications and judicial perspectives to underscore the urgent need for forward-thinking reforms that uphold constitutional protections while effectively countering emerging digital threats.
Case Laws
Shreya Singhal v. Union of India (2015)
This landmark judgment struck down Section 66A of the IT Act, recognising the need to preserve freedom of speech and expression in the digital realm. While not directly about AI, it laid the foundation for how constitutional rights apply online, especially in cases involving manipulated or AI-generated content.
Justice K.S. Puttaswamy v. Union of India (2017)
Privacy was affirmed as a fundamental right by the Supreme Court under the scope of Article 21. This judgment becomes highly relevant in the context of AI-driven surveillance, facial recognition, and predictive policing tools where personal data and biometric profiles are often processed without consent.
Anvar P.V. v. P.K. Basheer (2014)
This case clarified the admissibility of electronic evidence under Section 65B of the Indian Evidence Act. It becomes particularly important when law enforcement uses AI-generated forensic outputs or seeks to submit deepfake analysis as evidence in court.
State of Maharashtra v. Praful Desai (2003)
The Court allowed video conferencing as valid for evidence collection and trial procedures. The judgment reflects the judiciary’s openness to technology, which may pave the way for eventual inclusion of AI-generated material, provided evidentiary safeguards are ensured.
Conclusion
As artificial intelligence continues to redefine the contours of cybercrime, India stands at a legal crossroads. On one hand, AI has empowered cybercriminals with tools capable of impersonation, manipulation, and automation at an unprecedented scale. On the other, it has armed law enforcement with the potential to investigate smarter, act faster, and predict better. This duality makes AI not just a disruptive force but a defining one.
What remains clear is that the legal architecture must evolve at the same pace as the technology it seeks to regulate. Reliance on outdated legislation such as the IT Act, 2000, and an under-equipped enforcement framework cannot hold against the scale and sophistication of AI-enabled offences. The law must not only respond to crime but also anticipate it.
A multi-pronged reform strategy is essential: statutory updates to reflect AI-specific challenges, standardised guidelines for the use of AI in policing and forensics, cross-border cooperation mechanisms, and judicial capacity-building. Equally important is ensuring that this transformation upholds core constitutional values: privacy, due process, fairness, and accountability.
India does not lack talent, technology, or intent. What it needs now is a coherent legal response that aligns digital innovation with democratic safeguards. The future of cyberlaw depends not just on catching up but on staying ahead.
FAQs (Frequently Asked Questions)
Q1. Which legal provisions currently apply to AI-driven cyber offences in India?
There is no dedicated law for AI-related offences. Most crimes are dealt with under the Information Technology Act, 2000, along with applicable provisions of the Bharatiya Nyaya Sanhita, 2023. However, these laws were not designed with AI in mind and often fall short in addressing emerging threats like deepfakes and algorithmic phishing.
Q2. Are deepfakes illegal in India?
Deepfakes are not explicitly criminalised under Indian law. However, depending on context, they may attract charges under laws related to defamation, obscenity (Section 67 IT Act), impersonation, or cyberstalking. The lack of a dedicated provision leaves significant legal grey areas.
Q3. Can AI-generated evidence be used in Indian courts?
Yes, in principle. However, its admissibility is governed by the conditions outlined in Section 63 of the Bharatiya Sakshya Adhiniyam, 2023, which pertains to electronic evidence. The key challenge lies in establishing authenticity, chain of custody, and expert validation when the evidence is AI-generated or processed.
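In practice, the chain-of-custody requirement mentioned above rests on a simple technical foundation: a cryptographic hash of the evidence file recorded at seizure and verified again at trial. The following is a minimal illustrative sketch (the function name and file paths are hypothetical, not drawn from any statute or case) of how such an integrity check typically works.

```python
import hashlib

def evidence_hash(path: str) -> str:
    """Return the SHA-256 digest of a file, read in chunks so that
    large audio/video evidence files do not need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# The digest recorded at seizure and the digest computed at trial must
# match exactly; altering even a single bit of the file yields a
# completely different digest, which is what makes tampering detectable.
```

Matching digests show the file is unchanged since seizure; they do not, by themselves, show the content was genuine when seized, which is why expert validation remains a separate requirement for AI-generated material.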
Q4. How do jurisdiction issues complicate AI-related cybercrime cases?
AI crimes often involve cross-border elements: data hosted overseas, bots operating globally, or victims in multiple jurisdictions. This creates enforcement hurdles, especially in the absence of swift mutual legal assistance treaties (MLATs) or real-time cooperation mechanisms. Section 75 of the IT Act provides limited extraterritorial reach but is hard to implement without international alignment.
