INDIA’S AI-DRIVEN SCAM EPIDEMIC: ₹70,000 CRORE AT RISK IN 2025


AUTHOR: TRIPTI ROHILLA, UNIVERSITY SCHOOL OF LAW AND LEGAL STUDIES


TO THE POINT
As India’s digital economy expands rapidly on the back of advanced technologies, the country finds itself at the epicentre of a new kind of threat: digital and AI-driven scams. The rapid adoption of digital payments, mobile banking and online financial services has made India a global leader in fintech innovation, but it has also exposed millions of users to sophisticated cybercrime. In 2025, the scale and sophistication of AI-enabled scams have reached alarming proportions, with cited losses from deepfake fraud alone estimated at ₹70,000 crore, a figure that underscores the gravity of the crisis and its potential to undermine public trust in the digital ecosystem.
The heart of India’s AI-driven scam epidemic lies in the convergence of technological innovation and criminal ingenuity. The proliferation of digital payment platforms such as UPI has revolutionised how Indians transact, largely displacing cash payments, with over 131 billion transactions worth ₹20,00,000 crore processed in FY2024 alone. Yet this digital leap has brought with it a parallel surge in fraud. By September 2024, more than 632,000 UPI fraud incidents had been reported, amounting to ₹485 crore in losses for the first half of FY2025, and the trend shows no sign of abating. AI and deepfake technologies have fundamentally altered the modus operandi of scammers. Where fraudsters once relied on crude phishing emails or SMS messages, today’s criminals deploy hyper-realistic deepfake videos, cloned voices, and personalised phishing attacks. Authorised Push Payment (APP) scams, in which victims are tricked into willingly transferring money, have become rampant, driven by emotional manipulation, fake investment offers, and even threats of “digital arrest” – a new tactic in which scammers impersonate law enforcement officials and coerce victims into transferring money under the pretence of avoiding legal trouble. This article traces the origins and consequences of these scams in India and outlines the statutory frameworks available to tackle the problem.

USE OF LEGAL JARGON
The legal landscape surrounding AI-driven scams in India is evolving rapidly, as authorities scramble to keep pace with the ingenuity of cybercriminals. The Information Technology Act, 2000 (IT Act) serves as the primary legislative framework, with Section 66C addressing identity theft and Section 66D targeting cheating by impersonation using computer resources. These provisions are increasingly invoked in cases involving deepfake fraud, where scammers use AI-generated content to impersonate trusted individuals or institutions. The Prevention of Money Laundering Act (PMLA) is also frequently applied, particularly in large-scale scams where illicit proceeds are funnelled through formal banking channels or routed offshore. The Digital Personal Data Protection Act, 2023, has become relevant as well: its provisions against the unlawful collection and misuse of biometric and personal data are routinely violated in KYC-related scams and identity theft cases.

THE PROOF
The evidence of India’s digital scam epidemic is both quantitative and qualitative, painting a picture of a nation under siege from technologically empowered fraudsters. Forensic data reveals that deepfake-related cybercrime cases have grown by 550% since 2019, with nearly 2.4 million fraud incidents recorded in FY24 alone, a fourfold increase from the previous year. The Indian Cybercrime Coordination Centre projects that cyber fraud losses could exceed ₹1.2 lakh crore in the coming year, accounting for a staggering 0.7% of the country’s GDP. A closer look at the tactics employed by scammers reveals the transformative impact of AI. Deepfake technology enables the creation of hyper-realistic videos and audio messages that convincingly mimic public figures, CEOs, or even family members. In one high-profile case, a deepfake video of Finance Minister Nirmala Sitharaman was used to promote a fictitious cryptocurrency scheme, leading thousands of investors to part with their savings under the illusion of government endorsement.
The scale of these attacks has increased drastically. Brand abuse now accounts for nearly one-third of all cybercrime incidents in India, with the banking, retail, and garment sectors bearing the brunt of the losses. In healthcare and finance, AI-driven phishing campaigns and deepfake-enabled social engineering attacks have been particularly prevalent, exploiting vulnerabilities in supply chains, development resources, and even hardware manufacturing processes to insert malicious code and compromise critical systems.
Perhaps most concerning is the emergence of “Jamtara 2.0,” a term describing the use of deepfake technology to manipulate video KYC processes, impersonate executives, and create fake digital evidence. With over 11 lakh video KYC calls conducted daily in India, this vulnerability has made both individuals and financial institutions frequent targets for identity theft, fraudulent investments, and money laundering. Despite the alarming scale of the issue, nearly 65% of cyber incidents involving deepfakes remain unreported, leaving a massive gap in mitigation efforts.

ABSTRACT
The rise of digital and AI-driven scams in India represents a systemic threat to the nation’s financial resilience and the trust that underpins its digital ecosystem. At the core of this crisis is the weaponization of generative AI, which has enabled scammers to bypass traditional security measures, exploit regulatory gaps, and launch attacks at a scale and speed previously unimaginable. The economic impact is already being felt, with losses from deepfake fraud projected to reach ₹70,000 crore in 2025 and the broader costs of cybercrime threatening to erode confidence in digital platforms and institutions. The challenge is multifaceted. On one hand, AI-powered tools have democratised access to sophisticated scam techniques, allowing even low-skilled criminals to deploy deepfakes, voice clones, and personalised phishing attacks. On the other, the rapid evolution of these technologies has outpaced the ability of regulators, law enforcement, and financial institutions to respond effectively. The result is a digital arms race, with scammers constantly innovating to stay one step ahead of detection and prevention efforts. Countermeasures are being deployed, from AI-powered fraud detection systems and public awareness campaigns to stricter regulatory frameworks and international cooperation. However, the sheer scale and complexity of the threat demand a comprehensive, multi-stakeholder approach that integrates technological innovation, legal reform, and societal education. Without decisive action, India’s digital transformation risks being undermined by the very technologies that promised to drive its economic growth and social progress.

CASE LAWS
The legal response to AI-driven scams in India is still taking shape, but several landmark cases and regulatory actions have begun to set important precedents.
ED v. Dubai Digital Arrest Syndicate (2025) – The ED invoked the Prevention of Money Laundering Act (PMLA) to prosecute a Dubai-linked syndicate responsible for a ₹1,500 crore digital arrest scam that targeted nearly a thousand victims across multiple states. This marked the first major application of PMLA provisions to an AI-enabled scam of such scale, resulting in the attachment of assets and the dismantling of key nodes in the criminal network.

SEBI v. Algorithmic Manipulators (2024) – In this case, SEBI took action against manipulators who used AI-driven trading bots and fake news propagation to rig stock prices and defraud investors. These cases led to the imposition of penalties and the tightening of regulations around algorithmic trading and market surveillance. The RBI, in response to a series of AI-enabled loan frauds modelled on the infamous Cox & Kings scam, mandated the adoption of AI-powered transaction monitoring and real-time fraud reporting for all payment aggregators and banks.
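To make the RBI’s mandate concrete for non-technical readers: at its simplest, AI-powered transaction monitoring flags payments that deviate sharply from an account’s normal behaviour. The sketch below is a purely illustrative toy using a statistical outlier rule (a z-score test), not any bank’s actual system; real deployments use far richer machine-learning models and many more signals.

```python
from statistics import mean, stdev

def flag_anomaly(history, new_amount, threshold=3.0):
    """Flag a transaction whose amount deviates sharply from the
    account's recent history, using a simple z-score rule.

    history:    list of past transaction amounts (in rupees)
    new_amount: amount of the incoming transaction
    threshold:  deviations (in standard deviations) treated as anomalous
    """
    if len(history) < 2:
        return False  # too little history to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_amount != mu
    return abs(new_amount - mu) / sigma > threshold

# An account with small everyday payments: a routine ₹500 transfer
# passes, but a sudden ₹5,00,000 "digital arrest" style transfer
# is flagged for review.
past = [250, 400, 320, 500, 610, 450, 380]
print(flag_anomaly(past, 500))      # → False (routine)
print(flag_anomaly(past, 500_000))  # → True (flagged)
```

In practice such a rule would be only one signal among many (device fingerprints, beneficiary history, payment velocity), but it illustrates why “digital arrest” scams, which coerce a single large outlier transfer, are detectable in principle by the monitoring the RBI now requires.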


CONCLUSION
India’s digital and AI-driven scam epidemic is a clarion call for urgent action at every level of society. The convergence of technological innovation and criminal enterprise has created a threat landscape that is dynamic, complex, and deeply disruptive. The projected losses of ₹70,000 crore in 2025 are not just a financial statistic – they represent shattered lives, eroded trust, and a potential setback for India’s ambitions as a digital powerhouse. Addressing this crisis requires a tripartite response. Legislatively, there is a pressing need to update and strengthen existing laws, expedite the passage of the Digital India Act, and criminalise non-consensual deepfakes and AI-enabled impersonation. Technologically, the deployment of advanced fraud detection systems, behavioural biometrics, and quantum-resistant cryptography must become standard practice across all sectors, with a particular focus on high-risk areas like banking, healthcare, and government services. Societally, digital literacy campaigns must be scaled up to reach vulnerable populations, including seniors, women, and rural users, who are disproportionately targeted by scammers. Whistleblower incentives and community-based reporting mechanisms can help bridge the gap between incidents and enforcement, ensuring that more cases are brought to light and perpetrators are held accountable.
Ultimately, the battle against AI-driven scams is not just about technology or regulation – it is about safeguarding the trust and resilience of India’s digital society. The stakes are high, but with coordinated action, innovation, and vigilance, India can turn the tide against cybercrime and realize the full potential of its digital future.

FAQs
Q1. What enables the proliferation of AI-driven scams in India?
The widespread availability of AI tools, low barriers to entry and the rapid digitalization of financial services have created fertile ground for cybercriminals. Many scammers now use “Fraud-as-a-Service” models, purchasing deepfake creation kits, phishing templates, and automated attack scripts on the dark web. The lack of digital literacy among large segments of the population further exacerbates the problem, making it easier for scammers to deceive and manipulate their targets.
Q2. Which sectors are most vulnerable to AI-driven scams?
According to the India Cyber Threat Report 2025, the healthcare and finance sectors are particularly at risk, owing to the sensitive nature of the data they handle and the high value of potential payouts. Banking, retail, and government services are also frequent targets, with brand abuse and identity theft being common tactics. The integration of AI with supply chain vulnerabilities has led to new types of attacks, including data poisoning and the insertion of malicious code through compromised hardware and software.
Q3. What global trends mirror India’s experience with AI-driven scams?
India’s challenges are part of a broader global trend, with Deloitte forecasting $40 billion in global losses from AI-enabled fraud by 2027. Countries with rapidly digitalising economies and large populations of new internet users are particularly vulnerable. The rise of deepfake technology, automated phishing, and AI-driven malware is a worldwide phenomenon, necessitating international cooperation and knowledge-sharing to develop effective countermeasures.
Q4. What are the most effective countermeasures against AI-driven scams?
A multi-pronged approach is essential. Regulatory bodies must enforce stricter reporting and compliance standards, while financial institutions should invest in AI-powered fraud detection and real-time monitoring systems. Public awareness campaigns and digital literacy initiatives are crucial for empowering users to recognise and avoid scams. Collaboration between government, industry, and civil society will be key to staying ahead of evolving threats and restoring trust in India’s digital financial ecosystem.