Author: Divay Nair, JECRC University
Abstract
For the first time, SEBI has taken a bold and much-needed step to lay down clear rules on how AI is used in the financial world. The new guidelines (January–February 2025) hold intermediaries, including Investment Advisers, Research Analysts, and market infrastructure institutions, accountable for how they use AI in their day-to-day operations.
With AI’s increasing role in stock prediction, algorithmic trading, and robo-advisory, this regulation marks a turning point in India’s approach to fintech governance. This article examines the scope of the new rules and their implications for legal liability, data protection, and algorithmic transparency, supported by relevant laws, expert opinions, and case developments.
AI in finance is no longer optional; it is everywhere. But with great power comes great responsibility. SEBI’s framework reminds financial players that technology cannot become an excuse to escape liability.
To the Point
SEBI’s 2025 guidelines make intermediaries fully responsible for AI-driven decisions, disclosures, and the handling of client complaints. The regulation aims to address the increasing dependence on machine learning in securities advice and to prevent harm from opaque, unregulated AI operations.
If an AI tool gives bad advice or causes unexpected losses, it is not the software that gets blamed; it is the person or company that chose to use it. These rules bring clarity to an area that was, until now, legally grey.
This also encourages firms to use AI more carefully, balancing innovation with user safety. Transparency is no longer just good practice; it is a legal obligation.
Use of Legal Jargon
The SEBI (Investment Advisers) Regulations, 2013, require even technology-driven services to uphold fiduciary duties. This means that if AI tools are used and they fail to protect sensitive personal data, there could be legal consequences under Section 43A of the Information Technology Act, 2000.
SEBI’s 2025 expansion of its Cyber Resilience Framework also makes AI audit logs mandatory: every AI decision must be traceable and justifiable. Sandbox testing ensures AI tools meet compliance requirements before live deployment.
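To make the audit-log requirement concrete, here is a minimal sketch, in Python, of what a traceable AI decision record could look like. SEBI’s circulars require traceability but do not prescribe a schema, so every field name and structure below is a hypothetical illustration, not the regulator’s format.

```python
# Illustrative sketch of an AI decision audit log. The schema is hypothetical:
# SEBI requires traceability but does not mandate any particular structure.
import json
import hashlib
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AIDecisionRecord:
    model_id: str                  # which model/version produced the output
    client_ref: str                # pseudonymised client identifier
    input_digest: str              # hash of the inputs, so the decision is reproducible
    recommendation: str            # what the system advised
    rationale: dict                # top factors behind the output (explainability)
    human_reviewer: Optional[str]  # who signed off, if anyone
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_decision(record: AIDecisionRecord, path: str = "ai_audit.jsonl") -> None:
    """Append the record as one JSON line, building an append-only trail."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Example: record a robo-advisory recommendation.
inputs = {"age": 67, "risk_profile": "conservative", "horizon_years": 3}
record = AIDecisionRecord(
    model_id="allocator-v2.3",
    client_ref=hashlib.sha256(b"client-0042").hexdigest()[:12],
    input_digest=hashlib.sha256(
        json.dumps(inputs, sort_keys=True).encode()
    ).hexdigest(),
    recommendation="80% debt / 20% equity",
    rationale={"age": 0.52, "risk_profile": 0.31, "horizon_years": 0.17},
    human_reviewer=None,
)
log_decision(record)
```

The design point is that each entry answers the questions an auditor would ask: which model, which inputs, what was recommended, why, and who approved it.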
These developments introduce key legal doctrines like:
Algorithmic Accountability – The obligation to ensure that any automated decision-making is fair, explainable, and ethical.
Constructive Liability – Even if an individual did not directly cause harm, they can be held responsible for allowing flawed systems to operate.
SEBI is aligning with global legal trends. It takes cues from the EU’s AI Act and developments in the U.S., including SEC investigations into biased AI predictions. India is clearly aiming to lead Asia in responsible AI in finance.
These legal doctrines are no longer theoretical; they are becoming active tools of enforcement. Intermediaries can no longer blame the “black box” of AI for poor outcomes.
The Proof
SEBI v. Karvy Stock Broking Ltd. (2020)
Reaffirmed that intermediaries retain fiduciary responsibility over all tools and systems handling investor accounts, a principle now extended to AI.
FinVerse Robo-Advisory Show-Cause (2025)
FinVerse, an AI-based wealth-management firm, was served notice after its auto-allocation system recommended high-risk derivatives to senior citizens, causing heavy retail losses. The case is expected to be SEBI’s first under its AI-specific guidelines.
Internet and Mobile Association of India v. RBI (2020)
The Supreme Court, applying the doctrine of proportionality, held that financial innovation must be balanced against systemic safeguards. The same rationale now supports SEBI’s case for AI regulation.
Upcoming Cases & Trends
Legal insiders expect a wave of similar show-cause notices in 2025–2026. Some AI-driven broker platforms have already paused new rollouts, awaiting compliance updates. This reflects the seriousness of the new regime.
Challenges in Implementation
Hard to Understand: Many AI tools work like black boxes, making it tough to explain or audit their decisions.
Data Rules: Some AI platforms store data overseas, which may conflict with India’s data-protection laws.
High Costs: Smaller firms may find it expensive to meet all the new rules.
Who’s Responsible?: It’s often unclear whether the blame lies with the developer, the user, or the legal team.
Law Can’t Keep Up: Technology is moving faster than the laws meant to control it.
AI has huge potential, but it also brings new risks.
One big issue is training: many users rely on AI without really understanding how it works. That can easily lead to mistakes.
Expert Views
Ashwin Mehta, Tech Law Counsel:
“A welcome move, but SEBI must keep enforcement proportionate and innovation-friendly.”
Dr. Reena Sagar, AI Ethics Scholar:
“Explainability and AI traceability should be non-negotiable for market-facing tools.”
Anjali Rao, Former SEBI Officer:
“The sandbox approach is ideal. It ensures regulatory checks without hampering creativity.”
Rajeev Thakur, Compliance Head, FinTech firm:
“These rules will push firms to rethink how they build and test AI systems. It’s a chance to rebuild trust.”
Experts broadly agree that this regulation was overdue. But it must be implemented wisely, with support for smaller firms and clarity in enforcement.
What Can Be Done Better?
Introduce certified third-party audits to ensure AI tools are safe, fair, and reliable.
Build investor education modules around AI-based advice.
Coordinate with the RBI and IRDAI for cross-sectoral uniformity in AI policy.
Create AI systems for financial services that can clearly explain how they reach their decisions, so people can understand and trust them (a minimal sketch follows this list).
Offer regulatory sandboxes to startups, not just big firms, so innovation stays inclusive.
Encourage interdisciplinary hiring; firms should recruit people who understand both finance and AI.
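As one possible way to operationalise the explainability recommendation above, the sketch below shows a “reason codes” pattern: the system returns not only a decision but the human-readable rules behind it. The thresholds, field names, and rules are invented for illustration; actual suitability criteria would come from the intermediary’s own compliance policy, not from SEBI’s text.

```python
# Illustrative "reason codes" pattern: every decision carries the plain-language
# rules that produced it. All thresholds and rules here are hypothetical.
from dataclasses import dataclass

@dataclass
class Explanation:
    decision: str
    reasons: list[str]

def suitability_check(age: int, risk_score: float, product_risk: float) -> Explanation:
    """Approve or reject a product recommendation, with reasons a client
    (or an auditor) can read."""
    reasons = []
    if age >= 60 and product_risk > 0.7:
        reasons.append("High-risk product flagged for client aged 60+")
    if product_risk > risk_score + 0.2:
        reasons.append("Product risk exceeds client's assessed tolerance")
    decision = "reject" if reasons else "approve"
    if not reasons:
        reasons.append("Product risk within client's assessed tolerance")
    return Explanation(decision, reasons)

result = suitability_check(age=67, risk_score=0.3, product_risk=0.9)
print(result.decision)   # reject
for r in result.reasons:
    print("-", r)
```

A check like this would have blocked, and documented, the kind of allocation alleged in the FinVerse matter: a rejected recommendation carries its own audit trail.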
SEBI could also consider publishing anonymized case studies to educate the industry on what to avoid.
Conclusion
The introduction of SEBI’s AI accountability regime is both timely and visionary. It not only aligns India with global best practices but also provides safeguards in an increasingly tech-led financial world.
However, the true success of this framework will depend on its interpretation in enforcement and court rulings. The balance between innovation and regulation must remain dynamic, not restrictive.
This gives India a chance to set an example for responsible AI in finance. The challenge now is execution: fair, firm, and future-ready.
FAQs
Q1. Does SEBI’s guideline apply to all AI-based tools used by intermediaries?
Yes. Any AI used for trading, advice, research, or investor interaction falls under these rules.
Q2. Can financial advisors avoid responsibility if their advice comes from AI?
No. Intermediaries remain fully liable even if the advice was generated by an AI tool.
Q3. Is prior approval needed before deploying an AI tool?
Yes. AI tools need to go through SEBI’s sandbox testing first, to make sure they’re safe and reliable before being used in real markets.
Q4. What happens if the AI tool causes a data breach or financial loss?
The intermediary is liable under SEBI regulations, the IT Act, and potentially civil or criminal laws depending on the breach.
Q5. Can international AI tools be used without changes?
No. Tools must comply with India-specific rules on data privacy, explainability, and auditability before deployment.
