Author: Mahak Jain, UPES
To the Point
AI-driven trading systems are increasingly capable of anticipating market trends with near-human intuition. These programs process vast amounts of public data, such as news, stock trends, and social media sentiment, to predict future price movements of securities with remarkable accuracy. Yet once trained, they operate without human input and, importantly, without direct access to Material Non-Public Information (MNPI), a key element of insider trading liability under securities law.
Traditionally, insider trading occurs when a person who has privileged, confidential information about a company uses that information to trade securities for personal gain or to avoid a loss. For a violation to occur, two essential legal elements must be established:
Possession of, or access to, MNPI (termed Unpublished Price Sensitive Information, or UPSI, under Indian law)
Breach of a fiduciary duty or misuse of that information for trading
In the case of AI, however, these conditions become blurred. AI is not a “person” in the legal sense and does not have intent (mens rea) or fiduciary duties. It does not “possess” inside information in the traditional sense, nor can it be said to misuse it knowingly. This leads to a significant regulatory grey area. The core issue for legal systems and regulators, such as SEBI in India or the SEC in the United States, is whether trading outcomes that resemble insider trading because of an AI’s predictive power should be treated the same as traditional insider trading, even when no actual breach of confidentiality or human wrongdoing is involved.
Thus, the key legal questions that arise are:
Can AI’s use of advanced data analytics be equated to possessing or acting upon insider information?
Should we expand the legal definition of “insider” or “information misuse” to include AI-generated predictions?
Is there a need to reform insider trading laws to address the evolving role of autonomous decision-making in financial markets?
As AI becomes more autonomous and powerful, it is becoming increasingly important for lawmakers to re-evaluate existing securities regulations to ensure that the principles of market fairness, information equality, and investor protection continue to be upheld in this new digital trading environment.
Use of Legal Jargon
Material Non-Public Information (MNPI) or Unpublished Price Sensitive Information (UPSI):
Refers to any confidential or unpublished information related to a company or its securities, which, if made public, would likely influence an investor’s decision or impact the company’s stock price. Trading based on such information constitutes a breach of securities law.
Possession Standard and Use Standard:
These are two legal benchmarks used to assess insider trading liability.
Under the Possession Standard, merely holding MNPI at the time of trading may trigger liability.
The Use Standard requires that the trader actively use the MNPI to make the trade. Different jurisdictions apply different tests based on their regulatory approach.
Mens Rea:
A core component of criminal liability, referring to the mental state or intent of the person committing the offence. In insider trading, it requires knowledge of, or willful disregard for, the unlawful use of confidential information.
Fiduciary Duty:
A legal obligation imposed on certain individuals (like directors, officers, or employees) to act in good faith and in the best interests of the company. Trading on MNPI by someone who owes such a duty is considered a violation of this obligation.
Tipper-Tippee Liability:
A legal concept where an insider (the tipper) passes MNPI to another person (the tippee), who then trades on that information. Both the tipper and tippee can be held liable if the tipper breached a fiduciary duty and the tippee knew or should have known of the breach.
Algorithmic Trading:
Algorithmic trading involves the use of automated computer programs to execute buy or sell orders in the financial markets based on pre-defined instructions, such as timing, price, volume, or real-time market conditions. Although this form of trading is lawful and widely used, it can trigger regulatory scrutiny if the algorithms are designed to manipulate market behavior, exploit loopholes, or create an uneven playing field for other participants.
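For illustration, a minimal sketch of such a pre-defined instruction appears below: a simple moving-average crossover rule in Python. The price series and parameters are hypothetical placeholders, not a description of any actual trading system.

```python
# Sketch of a rule-based trading algorithm: a moving-average crossover.
# All figures are hypothetical; real systems add order routing, risk
# limits, and exchange connectivity.

def moving_average(prices, window):
    """Average of the last `window` prices."""
    return sum(prices[-window:]) / window

def crossover_signal(prices, short=5, long=20):
    """Return 'BUY' when the short-term average crosses above the
    long-term average, 'SELL' when it crosses below, else 'HOLD'.
    Uses only the public price history passed in."""
    if len(prices) < long + 1:
        return "HOLD"  # not enough history for both averages
    prev_short = moving_average(prices[:-1], short)
    prev_long = moving_average(prices[:-1], long)
    curr_short = moving_average(prices, short)
    curr_long = moving_average(prices, long)
    if prev_short <= prev_long and curr_short > curr_long:
        return "BUY"
    if prev_short >= prev_long and curr_short < curr_long:
        return "SELL"
    return "HOLD"

# Example with a hypothetical daily closing-price series.
prices = [100, 101, 99, 98, 102, 104, 103, 105, 107, 106,
          108, 110, 109, 111, 113, 112, 114, 116, 115, 117, 119]
print(crossover_signal(prices))  # prints 'BUY', 'SELL', or 'HOLD'
```

Every decision here follows mechanically from public price history and fixed parameters; no confidential input exists anywhere in the pipeline, which is why such trading is lawful absent manipulative design.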
Front-Running:
Front-running refers to an unethical and illegal trading activity where a broker or trader uses prior knowledge of impending client transactions to place personal trades in the same security ahead of the client’s order. This is done to benefit from the expected price movement caused by the larger client transaction, thereby gaining an unfair advantage in the market.
The Proof (Relevant Facts, References, or Authorities)
Insider Trading Laws Are Centered on Human Misconduct:
Under Indian law, Section 12A of the Securities and Exchange Board of India (SEBI) Act, 1992, and Regulation 3 of the SEBI (Prohibition of Insider Trading) Regulations, 2015 prohibit any person from trading in securities while in possession of unpublished price-sensitive information (UPSI). These provisions presume the involvement of a natural person or legal entity capable of accessing confidential information and acting with intent. Artificial Intelligence (AI), however, lacks both the legal personhood and mental intent (mens rea) required to constitute such misconduct.
AI Operates Solely on Public Data and Mathematical Predictions:
Predictive models used in algorithmic or AI-driven trading rely on vast amounts of publicly available data, such as earnings reports, macroeconomic indicators, market sentiment, and social media trends. These systems detect patterns and correlations without accessing MNPI. Therefore, even if their forecasts lead to profitable trades, the absence of confidential input data places them outside the traditional definition of insider trading.
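To make this concrete, the sketch below (assuming numpy and scikit-learn are available) trains a logistic-regression classifier to predict next-day price direction from purely public features. The data is synthetic and the feature names are hypothetical stand-ins; the point is only that nothing in such a pipeline touches MNPI.

```python
# Sketch: predicting next-day price direction from public data only.
# Features and labels are synthetic placeholders for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical public features per trading day:
#   [daily return, news-sentiment score, social-media mention volume]
X = rng.normal(size=(500, 3))
# Synthetic label: 1 if the price rose the next day, 0 otherwise. Here
# the label loosely tracks sentiment plus noise, standing in for the
# statistical regularities a real model would learn from public data.
y = (X[:, 1] + 0.5 * rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X[:400], y[:400])
print(f"Out-of-sample accuracy: {model.score(X[400:], y[400:]):.2f}")

# A profitable signal derived this way uses no MNPI/UPSI: every input
# (returns, published sentiment, mention counts) is publicly available.
```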
No Fiduciary Duty or Breach Involved:
A foundational requirement in insider trading cases is the existence of a fiduciary duty—a legal obligation to act in the interest of another party, typically seen in corporate insiders. The breach of this duty, followed by trading based on UPSI, creates liability. Since AI systems cannot form fiduciary relationships or breach duties of confidentiality, they do not meet this legal threshold. Courts have consistently emphasized this element, as seen in Dirks v. SEC (1983), where liability depended on a breach of fiduciary duty and receipt of an improper benefit.
SEBI’s Regulatory Silence on Autonomous Trading:
While SEBI has introduced risk management frameworks for algorithmic and high-frequency trading, no current regulation explicitly addresses AI-generated trading decisions where no human intervention exists at the point of execution. This creates a regulatory vacuum. In contrast, the U.S. SEC has acknowledged the risks of AI in trading, but it too has not redefined insider trading to account for autonomous behavior.
International Authorities Acknowledge the Gap:
In the United States, Rule 10b-5 under the Securities Exchange Act of 1934 remains the principal anti-fraud provision. It requires proof of deception or fraud, neither of which can be directly attributed to an AI system acting on its own.
In India, Regulation 2(g) of SEBI (PIT) Regulations defines “insider” as someone who is either connected with the company or in possession of UPSI. This inherently excludes autonomous machines that lack human relationships or agency.
Absence of Precedent on AI-Led Insider Trading:
To date, no Indian or international court has adjudicated a case where AI-based prediction systems were held liable for insider trading. The absence of judicial interpretation on this emerging issue reflects a growing need for legal reform or interpretive clarity.
Abstract (Concise Summary)
The use of Artificial Intelligence in securities trading has introduced a novel challenge for regulators. AI systems can process vast amounts of public data to make highly accurate market predictions, achieving outcomes similar to those of insider trading, yet without breaching existing legal provisions. This is because most insider trading laws hinge on human conduct, intent, or access to confidential information, none of which apply to autonomous AI models. As a result, these trades often fall outside the scope of regulatory enforcement. This article explores the legal uncertainty surrounding AI-driven trading, highlights the limitations of current insider trading frameworks, and considers potential reforms to maintain market integrity and ensure equitable access for all investors.
Case Laws:
Dirks v. SEC (1983, U.S. Supreme Court)
In this landmark case, the U.S. Supreme Court held that a person who receives confidential information (a “tippee”) is only liable for insider trading if the original insider (the “tipper”) breached a fiduciary duty and received some personal benefit, and the tippee knew or should have known about that breach.
This case established that intent and fiduciary breach are necessary for insider trading liability. Since AI cannot form fiduciary relationships or receive “tips,” this doctrine currently excludes AI systems from liability.
SEC v. Raj Rajaratnam (2011, U.S. District Court)
Rajaratnam, a hedge fund manager, was convicted of insider trading after obtaining and using MNPI about publicly traded companies. The prosecution relied on electronic surveillance and wiretap evidence to prove that he knowingly traded on confidential information.
This case reaffirmed the need for human knowledge, intent, and personal relationships in establishing insider trading. An AI system, acting without human intent or communication, would not fit this framework under current law.
SEBI v. Kanaiyalal Baldevbhai Patel (2017), Supreme Court of India [(2017) 15 SCC 1]
The respondent traded in shares just before the public announcement of a merger involving the company. SEBI alleged that he had access to UPSI. The Court upheld SEBI’s findings, emphasizing that circumstantial evidence can be enough to establish insider trading even without direct proof of information sharing.
This case shows how Indian courts expand liability through inference. However, even such circumstantial logic cannot yet be applied to AI systems, as they lack any human connections or communication that can be legally inferred.
Chandrakala v. SEBI (2011), Securities Appellate Tribunal (SAT) [2011 SCC OnLine SAT 9]
Chandrakala was found to have traded in the shares of a company shortly before a major announcement. Her trading pattern aligned with someone having access to UPSI. The SAT upheld SEBI’s order based on unusual trading patterns, even though no direct evidence of a tip-off was presented.
This case illustrates SEBI’s reliance on trading behavior and timing. In AI-driven trading, such patterns may arise naturally through data analytics, complicating how regulators distinguish illegal behavior from smart automation.
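By way of illustration only, the sketch below shows the kind of simple timing-and-volume screen that this surveillance logic implies: flag an account whose trading spikes just before an announcement. The data and threshold are hypothetical, and real SEBI or SEC surveillance systems are far more sophisticated.

```python
# Sketch of a surveillance-style screen: flag an account whose traded
# volume just before an announcement is abnormal relative to its own
# history. Data and threshold are hypothetical.
from statistics import mean, stdev

def flag_unusual_trading(daily_volumes, pre_window=3, z_threshold=3.0):
    """daily_volumes: the account's daily traded volume, oldest first,
    ending the day before a price-sensitive announcement. Returns True
    if volume over the last `pre_window` days is a z-score outlier
    against the earlier baseline."""
    baseline = daily_volumes[:-pre_window]
    recent = daily_volumes[-pre_window:]
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return False
    return (mean(recent) - mu) / sigma > z_threshold

# Hypothetical history: quiet for weeks, then a spike in the three
# days before a merger announcement.
history = [100, 120, 90, 110, 105, 95, 115, 100, 108, 97,
           102, 111, 99, 104, 106, 103, 98, 900, 1200, 1500]
print(flag_unusual_trading(history))  # True: the pattern looks insider-like
```

The difficulty the article identifies is visible here: an autonomous model trading lawfully on public data could produce exactly the same spike, so the pattern alone cannot separate insider trading from smart automation.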
Manmohan Shetty v. SEBI (2022), Securities Appellate Tribunal (SAT) [2022 SCC OnLine SAT 30]
SEBI accused Manmohan Shetty and his daughter of insider trading based on their access to board-level information and subsequent trading before a merger. However, the SAT ruled in their favor due to a lack of conclusive proof of information misuse. This case reflects the judiciary’s emphasis on evidence of information misuse, not just access. For AI, which uses only public data, the current test would not apply, even if the outcome mimics insider-like trades.
All current Indian jurisprudence on insider trading is human-centric, requiring access to UPSI, intent, fiduciary duty, or circumstantial behavior implying human knowledge. AI does not satisfy these elements under Indian law, highlighting a regulatory vacuum that must be addressed as algorithmic trading becomes more autonomous and predictive.
Conclusion
The emergence of AI in financial markets presents a fundamental challenge to the traditional, human-centered framework of insider trading laws. When algorithms can generate predictions as accurate as those made using non-public information, without any unlawful access or fiduciary breach, a significant regulatory blind spot is exposed. Existing statutes, which rely heavily on human intent and relationships, may no longer be sufficient.
Going forward, regulators must consider a paradigm shift. Instead of focusing solely on the source of the information or the intent behind the trade, legal frameworks should begin emphasizing market outcomes and equal access to data. This may involve assigning responsibility to the designers or operators of AI systems under a model of proxy liability, or crafting new norms for digital trading that emphasize informational symmetry and algorithmic accountability. A forward-looking and inclusive regulatory response is crucial to ensure that the principles of fairness and transparency remain intact in the age of autonomous finance.
FAQs
Q1. Can AI commit insider trading under current Indian laws?
A: Not directly. Indian law (the SEBI (PIT) Regulations, 2015) focuses on individuals or connected persons misusing UPSI. AI has no legal personhood or mens rea.
Q2. Are there any countries that regulate AI in financial trading explicitly?
A: No country has yet enacted laws specifically addressing AI-driven insider trading, though agencies like the SEC and ESMA have acknowledged the risks.
Q3. If an AI predicts a stock price drop and trades accordingly, is it illegal?
A: Not necessarily. If the prediction is based on public or permissible data, it’s not a breach. The problem arises if the algorithm uses data it wasn’t legally allowed to access.
Q4. Who is liable if AI breaks trading regulations?
A: Typically, the liability would fall on the developer, data provider, or firm that deployed the algorithm, depending on the nature of the breach.
Q5. Should insider trading laws be amended for AI?
A: Possibly yes. Many scholars advocate for expanding the legal definition of “person” or creating AI-specific fiduciary standards in securities law.
References:
https://indiankanoon.org/doc/182502833/
https://indiankanoon.org/doc/46714289/
https://www.casemine.com/judgement/in/5b0532029eff433df93b2cec
https://www.sec.gov/news/press/2011/2011-233.htm
https://caselaw.findlaw.com/court/us-supreme-court/463/646.html
https://www.fsb.org/2017/11/artificial-intelligence-and-machine-learning-in-financial-service/
Key Legal References and Authorities:
SEBI Act, 1992, Section 12A – Prohibition of Insider Trading
SEBI (Prohibition of Insider Trading) Regulations, 2015, Regulations 2(g), 3, 4
Securities Exchange Act of 1934 (USA) – Rule 10b-5
Dirks v. SEC, 463 U.S. 646 (1983) – Clarified fiduciary duty and tippee liability.
SEC v. Rajaratnam, 802 F. Supp. 2d 491 (S.D.N.Y. 2011) – Importance of willful intent in insider trading
SEBI Discussion Paper on Algorithmic Trading and Co-location, 2018 – Addresses trading automation but not AI-specific liability
