Regulation of Artificial Intelligence: Need for an Indian Legal Framework

Author: Samarth Jeet, Geeta Institute of Law, Panipat


To the Point
Artificial Intelligence (AI) is rapidly reshaping India’s socio-economic fabric, revolutionizing sectors such as healthcare, finance, agriculture, education, law enforcement, and public governance. From AI-powered diagnostic tools in hospitals to predictive algorithms in policing, the integration of intelligent systems promises greater efficiency, accuracy, and accessibility. Startups and government initiatives like the National AI Mission are pushing India toward becoming an AI-driven economy. However, this technological acceleration has outpaced the development of legal and regulatory mechanisms necessary to govern the ethical, fair, and accountable use of AI. The absence of a dedicated legal framework leaves a critical gap in the Indian legal system, exposing individuals and institutions to risks ranging from privacy violations and algorithmic discrimination to legal uncertainty and lack of remedy.
Currently, India’s legal apparatus—including the Information Technology Act, 2000 and the recently enacted Digital Personal Data Protection Act, 2023—addresses only fragments of the AI landscape, such as data handling and cybersecurity. These laws are not equipped to manage the nuanced challenges posed by autonomous decision-making, algorithmic opacity, and machine-generated actions. As AI systems continue to evolve and become more autonomous, questions arise regarding liability, fairness, transparency, and human rights. Without a comprehensive AI-specific legal regime, there is a danger that innovation may proceed unchecked, leading to social and legal harm. Therefore, India must urgently construct a structured, forward-thinking legal framework tailored to its constitutional ethos and technological ambitions to strike the right balance between innovation and protection.


Abstract
The exponential growth of Artificial Intelligence (AI) technologies in recent years has ushered in a new era of digital transformation across the globe, and India is no exception. From automating mundane tasks to enabling predictive analytics in critical sectors like healthcare, education, finance, and governance, AI promises increased efficiency and innovation. It is already being used in e-governance systems, judicial decision-making tools, biometric surveillance, and smart agriculture. However, as these intelligent systems become more autonomous and integrated into decision-making structures, they raise significant concerns related to accountability, transparency, and fairness. These concerns are not just technological—they are deeply legal, ethical, and constitutional in nature.
India’s current legal framework is largely inadequate to deal with these complexities. The Information Technology Act, 2000 and the newly enacted Digital Personal Data Protection Act, 2023 were not drafted with the sophistication and autonomy of modern AI systems in mind. These statutes address issues like cybersecurity and personal data privacy but fail to encompass the broader ramifications of AI, such as algorithmic discrimination, lack of explainability (the “black box” problem), and liability for harm caused by autonomous decisions. Moreover, Indian jurisprudence has yet to evolve principles specifically suited for AI, leaving a vacuum in terms of rights, responsibilities, and remedies. The result is a fragmented regulatory environment that provides neither adequate protection to citizens nor legal certainty to developers, businesses, and government agencies deploying AI tools.
In light of these challenges, there is an urgent need for a comprehensive and forward-looking AI-specific regulatory framework in India. Such a framework must not only align with global best practices and comparative legal standards but must also be firmly rooted in the Indian constitutional ethos—especially the principles of equality, liberty, and dignity. It should define the scope of lawful AI use, set standards for algorithmic transparency and fairness, prescribe liability for harms, and ensure meaningful human oversight in critical decision-making processes. By adopting a rights-based and risk-sensitive approach to AI governance, India can protect its citizens while encouraging responsible innovation and international competitiveness in the AI domain.


Use of Legal Jargon
AI systems, particularly those powered by machine learning and deep learning, exhibit a degree of autonomy: decisions are made without explicit programming for each scenario. This evolution poses challenges to traditional legal doctrines like actus reus (guilty act) and mens rea (guilty mind). When an AI system causes harm, be it a medical misdiagnosis, financial fraud, or a discriminatory recruitment decision, it becomes unclear who is to be held legally accountable. The absence of human intention or foreseeability complicates the attribution of liability under current jurisprudence.
The “black box” nature of many AI models refers to their opacity: even their developers may not fully understand how a system arrived at a particular decision. This violates the principle of audi alteram partem, a foundational tenet of natural justice that entitles every person to be heard. When decisions affecting rights (e.g., denial of benefits or job offers) are made by AI, it becomes difficult to contest, or even understand, the rationale. Without explainability, legal challenge and procedural fairness remain elusive.
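To make the explainability gap concrete, consider the following minimal sketch in Python. The scoring function, feature names, and threshold are purely hypothetical stand-ins for a deployed model; the sketch only illustrates how even a crude perturbation check can surface which inputs drove an automated decision, i.e., the kind of rationale that audi alteram partem presupposes a person can examine and contest.

```python
# Minimal, hypothetical sketch of post-hoc explainability for an opaque system.
# The scoring function, feature names, and weights are illustrative stand-ins.

def opaque_model(features):
    """Stand-in for a black-box decision system: returns approve/reject."""
    score = 0.6 * features["income"] - 0.9 * features["debt"] + 0.2 * features["tenure"]
    return "approve" if score > 0.5 else "reject"

def explain(model, features):
    """Crude perturbation test: zero out each feature in turn and report
    which ones flip the decision. A real explainability mandate would
    require far more rigorous methods; this only illustrates the idea."""
    baseline = model(features)
    decisive = [name for name in features
                if model({**features, name: 0.0}) != baseline]
    return baseline, decisive

decision, reasons = explain(opaque_model, {"income": 1.2, "debt": 0.4, "tenure": 0.5})
print(f"decision: {decision}; decision-flipping features: {reasons}")
```

Even this toy example shows that an automated decision can, in principle, be accompanied by a statement of the factors that determined it, which is precisely what a statutory right to an explanation would demand.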
The twin principles of non-maleficence and beneficence, borrowed from the field of bioethics, dictate that technologies should not cause harm (non-maleficence) and should actively promote human welfare (beneficence). In AI ethics, these principles translate into ensuring that systems are safe, reliable, and aligned with human values. Unregulated AI systems that enable surveillance, infringe privacy, or produce biased decisions violate these core ethical principles.
Strict liability makes a person liable for injury caused by their actions or products, regardless of intent or negligence. Vicarious liability makes one party accountable for the actions of another; for example, an employer is legally responsible for an employee’s actions in the course of employment. With AI, especially in autonomous decision-making, these doctrines are tested: should a hospital be held liable for harm caused by an AI diagnostic tool? Should liability fall on developers, manufacturers, or users? The existing framework lacks clarity on how to assign liability when the actor is an algorithm.


The Proof
India does not currently have a dedicated law governing Artificial Intelligence. The Information Technology Act, 2000, was enacted before the rise of contemporary AI technologies and lacks provisions on autonomous systems, algorithmic decision-making, or AI ethics. Although the Digital Personal Data Protection Act, 2023 is a significant step in safeguarding personal data, it addresses AI only peripherally. AI implicates far more than data protection—it touches upon employment rights, surveillance, consumer protection, intellectual property, and constitutional liberties. This legal vacuum hampers both accountability and innovation.
Numerous studies have demonstrated that AI systems, particularly in areas like facial recognition, predictive policing, and hiring, can reinforce or amplify existing societal biases. This occurs because AI models are trained on historical data, which may carry entrenched biases. For instance, a facial recognition tool used by law enforcement might have higher error rates for minority communities. Without statutory safeguards mandating bias audits, fairness assessments, and algorithmic transparency, such systems could institutionalize discrimination. This runs counter to Articles 14, 15, and 21 of the Indian Constitution, which guarantee equality, non-discrimination, and protection of life and liberty.
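What a statutory “bias audit” might involve can be sketched in a few lines of Python. The group labels, sample records, and the four-fifths (0.8) threshold below are illustrative assumptions drawn from the fairness literature, not requirements found in any existing Indian statute.

```python
# Illustrative bias audit: per-group error rates and a disparate-impact ratio.
# All groups, records, and the 0.8 threshold are hypothetical assumptions.

from collections import defaultdict

def bias_audit(records, threshold=0.8):
    """records: iterable of (group, predicted_match, actual_match) tuples.
    Returns per-group rates, the disparate-impact ratio, and a pass flag
    under the illustrative 'four-fifths' rule from the fairness literature."""
    stats = defaultdict(lambda: {"n": 0, "errors": 0, "flagged": 0})
    for group, predicted, actual in records:
        s = stats[group]
        s["n"] += 1
        s["errors"] += int(predicted != actual)
        s["flagged"] += int(predicted)

    rates = {g: {"error_rate": s["errors"] / s["n"],
                 "flag_rate": s["flagged"] / s["n"]} for g, s in stats.items()}
    flag_rates = [r["flag_rate"] for r in rates.values()]
    ratio = min(flag_rates) / max(flag_rates) if max(flag_rates) else 1.0
    return rates, ratio, ratio >= threshold

# Hypothetical audit data: (group, system flagged a "match", true match).
sample = [
    ("group_a", True, True), ("group_a", False, False), ("group_a", False, False),
    ("group_b", True, False), ("group_b", True, True), ("group_b", False, False),
]
rates, ratio, passes = bias_audit(sample)
print(rates)
print(f"disparate-impact ratio: {ratio:.2f}; passes audit: {passes}")
```

A legislated audit regime would have to fix the metrics, thresholds, and audit intervals itself; the point here is only that disparities of the kind described above are measurable, and therefore regulable.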
AI systems, irrespective of their complexity, are not recognized as legal persons under Indian law. Hence, they can neither be sued nor held criminally liable. The absence of a doctrine for assigning responsibility for harm caused by such systems creates serious ambiguity in civil and criminal litigation involving AI.
Indian law enforcement agencies are increasingly deploying AI tools such as facial recognition systems, predictive policing algorithms, and drone surveillance for public security. While these tools may enhance efficiency, their unregulated use raises red flags. Under the Puttaswamy judgment, any intrusion into privacy must satisfy the tests of legality, necessity, and proportionality; the deployment of AI surveillance tools without legislative authorization or judicial oversight fails that standard and poses grave risks to civil liberties.


Relevant Case Laws
1. Justice K.S. Puttaswamy v. Union of India (2017)
The Supreme Court declared privacy a fundamental right under Article 21. The judgment emphasized informational privacy, dignity, and consent in the digital era.
Relevance: AI’s collection and processing of personal data—often without meaningful consent or oversight—must be scrutinized under this judgment. Algorithmic decisions that affect individuals’ rights need to be backed by law and subject to judicial review.
2. State of Maharashtra v. Praful Desai (2003)
The Court held that recording evidence via video conferencing was legally valid, recognizing the role of technology in the justice system.
Relevance: This case set a precedent for technology adoption in legal processes, paving the way for AI-assisted tools in the judiciary. However, it also underscores the need for procedural fairness, which must be preserved even when AI is involved in decision-making.
3. Google India Pvt. Ltd. v. Visaka Industries (2009)
The Andhra Pradesh High Court held that intermediaries like Google could be held liable if they failed to remove unlawful content after being notified.
Relevance: In the context of AI-generated content, this case raises concerns about intermediary liability. For instance, if a chatbot spreads defamatory or illegal content, can the platform be held responsible? There is a need to revisit intermediary accountability in the age of AI.


Conclusion
India is at a pivotal moment where the regulation of Artificial Intelligence must be treated as a national priority. The absence of a comprehensive AI law not only exposes citizens to the risks of surveillance, bias, and data misuse but also creates uncertainty for developers and investors. AI’s ability to reshape society is undeniable—but without legal checks, it may become a tool of oppression rather than empowerment.
India must craft a forward-looking, rights-based AI legal framework that draws from constitutional values, promotes innovation, and incorporates international best practices. Key aspects should include mandatory algorithmic transparency, periodic bias audits, clarity on liability, and redress mechanisms for individuals affected by AI decisions. Additionally, independent regulatory bodies should be empowered to audit, investigate, and enforce compliance. Only through such an approach can India ensure that AI serves its people, upholds the rule of law, and becomes a force for inclusive progress.


FAQs
1. Does India currently have a law specifically regulating Artificial Intelligence?
No. India does not have a stand-alone AI regulation law. Existing laws like the IT Act and the Digital Personal Data Protection Act offer only fragmented coverage.
2. What are the key challenges in regulating AI in India?
The major challenges include determining liability for autonomous decisions, protecting against algorithmic bias, ensuring data privacy, and balancing innovation with regulation.
3. Why is algorithmic bias a concern?
AI systems can inherit and amplify human biases present in data, leading to discriminatory outcomes—especially in critical areas like hiring, policing, or financial services.
4. Can AI be held legally accountable under Indian law?
Currently, AI cannot be held accountable because it lacks legal personhood. Liability must instead be attributed to developers, users, or corporations, but the law does not clearly address how this should be done.
5. What sectors in India are increasingly adopting AI?
AI is being widely adopted in healthcare, agriculture, law enforcement, education, fintech, e-governance, and judicial support services.
6. How can India ensure ethical AI development?
By formulating a legal framework that incorporates principles like fairness, accountability, transparency, and non-discrimination, along with stakeholder consultations and global best practices.
7. Is facial recognition technology legally valid in India?
There is no specific law regulating facial recognition. Its use by law enforcement without proper legal backing may violate the right to privacy under Article 21, as held in the Puttaswamy case.
