Corporate India in the Age of AI: Opportunities, Risks, and the Rise of Intelligence Crime



Author: Hemant Tiwari, IME Law College, Sahibabad, Ghaziabad


Abstract


The rapid diffusion of AI into Indian corporations promises increased efficiency, innovation, and commercial advantage. Yet the surge in AI-driven business activity has also created complex challenges in areas such as privacy protection, intellectual property, liability, and the escalation of intelligence crime. The AI Governance Guidelines 2025 published by MeitY go some way toward giving India its first comprehensive framework for responsible corporate AI, balancing innovation with constitutional rights, commercial interests, and the emerging threat of AI-enabled criminality. This article reviews the key opportunities, risks, and salient statutory safeguards, examines recent case law, and outlines practical steps for compliance in corporate India.

Opportunities for Corporate India


AI equips Indian businesses with tools for:
Improved Automation: Routine operations such as document review, HR processes, and customer support are now handled by intelligent agents, freeing human capital for higher-value tasks.
Legal Operations: In-house counsel use legal AI for due diligence, contract review, and compliance audits, enabling faster litigation management with fewer human errors.

Risks and Emerging Intelligence Crime


AI adoption is not without risk; its misuse is already generating new and sophisticated forms of “intelligence crime.” Major risks include:
Data Privacy and Ethical Liability: AI requires vast datasets, which often include sensitive or personally identifiable information. Mishandling, unauthorized sharing, or opaque algorithmic processing may amount to a breach of privacy rights under the Digital Personal Data Protection (DPDP) Act, 2023.
Deepfakes and Cybercrime: Section 66D of the IT Act, which penalizes cheating by personation using a computer resource, is increasingly invoked in cases involving AI-generated deepfakes, identity theft, and online fraud.

The Current Statutory Framework


Corporate India now has to function within a rich legal architecture anchored by:
Information Technology Act, 2000: Governs electronic records, privacy, and cyber-security, and includes Section 66D on cheating by personation, Section 43A on data protection, and Section 79 on conditional platform liability.
DPDP Act, 2023: Anchors robust consent and data protection obligations for all personal data processed by AI systems, imposes punitive measures for non-compliance, and mandates grievance redressal.
Copyright Act, 1957: Regulates the use of protected works in training AI models; the available defence is “fair dealing” under Section 52, whose application to AI-generated content remains highly ambiguous.
Consumer Protection Act, 2019: Applies to misleading or biased AI recommendations, especially in consumer-facing industries.
Sectoral Laws: Regulators in critical domains such as banking (RBI), securities (SEBI), and telecom (TRAI) independently issue AI-application compliance standards, including mandatory audit trails and algorithmic explainability.

Case Laws

KMG Wires Pvt Ltd v. Income Tax Authorities (Bombay High Court, 2025)
Facts: The Income Tax Department passed an assessment order against KMG Wires Pvt Ltd, relying on various case-law precedents to support its findings. The citations had been generated using artificial intelligence tools. On scrutiny, the cited cases were found to be entirely fictitious, appearing in no legal database: the AI had “hallucinated” the references, yet they formed the basis of the tax assessment.
Legal Issue: Whether an assessment order relying on non-existent, AI-generated case law could be sustained, and what duty tax authorities owe to verify legal precedents.
Judgment: The Bombay High Court set aside the assessment order, holding that reliance on fictitious case law is fatal to any quasi-judicial order. Authorities are duty-bound to independently verify all legal citations before incorporating them into an order.
The Court also issued an advisory cautioning all authorities to exercise diligence when using AI tools, cross-check outputs against authentic legal databases, and avoid blind reliance on AI-generated legal research. The ruling highlights the weaknesses of AI-generated research and the necessity of human supervision in legal proceedings.

ANI v. OpenAI (Delhi High Court, 2025 – Pending)
Facts: ANI Media filed a copyright infringement lawsuit against OpenAI in November 2024, accusing the company of using ANI’s news content to train ChatGPT without authorization; OpenAI had blocklisted ANI’s domain in October 2024.
Legal Issues: The Delhi High Court is considering four major issues: whether OpenAI’s storage of ANI’s content for training infringes copyright; whether using copyrighted content to generate responses amounts to infringement; whether OpenAI can claim ‘fair use’ under Section 52 of the Copyright Act, 1957; and whether Indian courts have jurisdiction given that OpenAI’s servers are US-based.
Current Status: The case remains pending, with hearings ongoing as of August 2025. Two amici curiae have been appointed; Professor Arul George Scaria contends that temporary storage for learning purposes is permitted and that the Delhi High Court has jurisdiction. The judgment will have long-term effects on copyright law, AI regulation, and digital news publishing in India, and may set a global precedent.

Bengaluru IT Tribunal: AI-driven assessment error (2025)
Facts: The Income Tax Appellate Tribunal in Bengaluru encountered a 2025 assessment order containing several errors caused by the use of artificial intelligence. The assessing officer had relied on AI-generated outputs without verifying their accuracy, leading to flawed tax assessments.
Legal Issues: The core question raised was whether AI-generated assessment orders are legally valid without the involvement of human verification.
The tribunal also considered the level of judicial scrutiny needed for administrative decisions aided by AI, the accountability for errors in automated tax assessments, and the principles of natural justice in AI-driven proceedings.
Judgment: The tribunal decided to reassign the case, emphasizing that human judgment is essential in quasi-judicial roles and cannot be replaced by AI tools.
It clarified that any output from AI systems must be independently verified by an assessing officer before being used in official orders. The decision reflects judicial caution against the unrestricted use of AI and underscores the importance of technology supporting, rather than replacing, reasoned decisions. This reinforces the need for human oversight in algorithmic decisions to ensure fairness, accuracy, and due process in tax assessments.

India’s New Regulatory Philosophy
The 2025 Guidelines introduce a “techno-legal” model that emphasizes:
Systemic Compliance-by-Design: Embedding compliance mechanisms such as consent, data provenance, and watermarking directly into the system architecture, rather than relying on enforcement after the fact.
Transparency and Auditability: Companies must record data provenance, label AI-generated content, and allow audits or third-party assessments, especially for high-risk or consumer-facing AI systems.
Human-in-the-Loop Oversight: Mandating human review for critical areas such as healthcare, legal advice, and regulatory filings.
Internationally, India’s position lies between Europe’s rights-based, law-driven approach and the U.S.’s market-oriented, minimalist stance, seeking a balance between innovation and ethics through pragmatic measures.

The Rise of Intelligence Crime
The convergence of AI and cybercrime has given rise to a new category of offenses known as “intelligence crime.” Examples include:
Synthetic Identity Fraud: AI tools generate fake identities at scale, bypassing traditional KYC protocols.
Market Manipulation: AI tools predict and influence stock prices by manipulating digital trends.
Automated Phishing Attacks: Bots analyze personal data and language patterns to carry out more convincing scams.
These offenses are being addressed under the IT Act, Consumer Protection Act, and pending amendments to the DPDP Act.
However, Indian law still lags behind the evolving criminal tactics enabled by AI.

Compliance and Risk Mitigation
What should Indian corporations do?
Appoint AI Ethics/Compliance Officers: Assign dedicated personnel for ongoing evaluation of algorithmic risks and legal compliance.
Document Data Provenance: Ensure all datasets used are lawfully acquired, consented to, and tracked for audits.
Establish Regulatory Sandboxes: Test AI models in controlled environments to promote innovation while enabling regulatory oversight.
Implement Legal Review and Human Validation: Reduce the risk of “AI hallucinations” in official records, legal filings, or compliance reports by including human verification at key decision points.
Conduct Periodic Legal Reviews: Stay updated with policymakers’ decisions and court rulings, adjusting company policies as legal standards evolve.
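The data-provenance step above can be sketched as code. This is a minimal sketch under stated assumptions, not a reference implementation of any statutory requirement: the `DatasetRegistry` class and its fields are hypothetical. It records, for each training dataset, its source, whether consent was documented, and a content hash so auditors can detect later tampering, and it flags unconsented datasets for legal review.

```python
import hashlib
import json
from datetime import datetime, timezone

class DatasetRegistry:
    """Tracks where each training dataset came from, whether consent was
    obtained, and a content hash for audit purposes (illustrative only)."""

    def __init__(self) -> None:
        self._records: dict[str, dict] = {}

    def register(self, name: str, source: str,
                 consent_obtained: bool, data: bytes) -> dict:
        record = {
            "name": name,
            "source": source,
            "consent_obtained": consent_obtained,
            # Hash lets an auditor confirm the dataset is unchanged.
            "sha256": hashlib.sha256(data).hexdigest(),
            "registered_at": datetime.now(timezone.utc).isoformat(),
        }
        self._records[name] = record
        return record

    def audit_report(self) -> str:
        """Serialize all provenance records for a compliance audit."""
        return json.dumps(list(self._records.values()), indent=2)

    def unconsented(self) -> list[str]:
        # Datasets lacking documented consent are flagged for legal review.
        return [n for n, r in self._records.items()
                if not r["consent_obtained"]]

registry = DatasetRegistry()
registry.register("customer_kyc_2024", source="internal CRM export",
                  consent_obtained=True, data=b"...raw bytes...")
registry.register("scraped_news", source="public web crawl",
                  consent_obtained=False, data=b"...raw bytes...")
print(registry.unconsented())  # ['scraped_news']
```

A register of this kind gives the audit trail that the DPDP Act's consent obligations and the sectoral regulators' explainability standards both presuppose.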

Conclusion


The potential of AI for corporate India is immense, but so are the associated risks.
As intelligence crime grows and regulatory scrutiny increases, compliance-by-design, responsible innovation, and continuous legal vigilance have become essential for business continuity and ethical leadership. Companies that invest early in strong governance are best positioned to succeed in the age of intelligent enterprise.

FAQs


1. What is “intelligence crime” in the Indian context?
Intelligence crime refers to the misuse of AI for criminal purposes, including deepfakes, synthetic identity fraud, market manipulation, or cyberstalking, using advanced algorithms or generative models.


2. Do the 2025 AI Governance Guidelines carry legal force?
The guidelines are currently non-binding, but they set industry expectations and are likely to be formalized in future AI or IT-specific legislation.


3. How do Indian laws protect consumers against biased or unfair AI decisions?
The Consumer Protection Act, DPDP Act, and IT Act require due diligence, redressal mechanisms, and auditability of AI outputs when they affect consumer rights or sensitive data.


4. What steps are necessary to ensure compliance by a corporate legal team?
Legal teams should establish audit trails, organize regular training, review datasets for lawful acquisition, appoint compliance officers, validate AI outputs, and stay updated on emerging legal standards.


5. What is a landmark recent case related to AI misuse in court?
In 2025, the Bombay High Court overturned an assessment order that relied on fabricated case law generated through AI, reaffirming the need for human judicial oversight in legal proceedings.
