Author: Maansi Gupta, St Joseph’s College of Law
To the Point
The rapid advancement of Artificial Intelligence (AI) in India presents significant opportunities for economic growth, innovation, and improved public services. However, this growth also brings critical legal and ethical challenges that require urgent attention. At present, India lacks a comprehensive legal framework specifically governing AI technologies. Existing laws such as the Information Technology Act, 2000, and the Digital Personal Data Protection Act, 2023, provide limited and indirect coverage. The absence of targeted regulation leaves key issues such as algorithmic bias, data privacy, lack of transparency, and accountability in automated decision-making unaddressed.
AI systems increasingly influence sectors like healthcare, law enforcement, finance, and governance. Without regulatory oversight, these systems can lead to discriminatory outcomes, misuse of personal data, and erosion of fundamental rights. Ethical concerns also arise around AI-generated misinformation, deepfakes, surveillance, and the use of autonomous systems in warfare or policing. Hence, regulation is not a deterrent to innovation but a necessary foundation for responsible and inclusive technological growth.
Internationally, frameworks like the European Union’s AI Act and UNESCO’s Recommendation on the Ethics of Artificial Intelligence offer models that India can adapt, considering its unique socio-economic context. A legal framework in India should establish clear accountability mechanisms, mandate impact assessments for high-risk AI applications, promote algorithmic transparency, and encourage fairness, inclusivity, and human oversight.
In conclusion, regulating AI in India is a legal imperative to ensure that technological development aligns with constitutional values and human rights. A well-balanced law can foster public trust, mitigate risks, and support India’s vision of becoming a global AI hub. The goal must be to create a regulatory environment where ethical accountability and innovation coexist to benefit society as a whole.
Abstract
The integration of Artificial Intelligence (AI) into India’s digital and economic ecosystem marks a transformative shift in governance, industry, and social life. From facial recognition in policing to predictive algorithms in finance and automated systems in healthcare, AI is rapidly redefining how decisions are made and services are delivered. However, this technological leap has far outpaced the development of appropriate legal and ethical frameworks to regulate its use. India currently lacks a dedicated AI regulatory law. Existing legal instruments such as the Information Technology Act, 2000 and the Digital Personal Data Protection Act, 2023 provide fragmented safeguards and fail to address the broader implications of algorithmic decision-making, bias, and the opacity of AI systems.
The unchecked growth of AI systems poses significant threats to privacy, due process, equality, and transparency—core constitutional values. The deployment of AI without adequate safeguards can reinforce discrimination, lead to wrongful denial of services, and create systems that lack human oversight. Moreover, the proliferation of deepfakes, surveillance technologies, and automated profiling raises serious concerns about the misuse of AI in ways that could undermine democratic principles and human dignity.
This paper argues that India urgently needs a comprehensive AI-specific regulatory framework grounded in principles of ethical accountability and responsible innovation. Such a framework must incorporate mandatory algorithmic impact assessments, sector-specific regulations for high-risk AI applications, transparency requirements, grievance redressal mechanisms, and human-in-the-loop oversight. Drawing on comparative models such as the European Union’s AI Act and UNESCO’s ethical principles for trustworthy AI, the paper explores how India can craft a regulatory regime tailored to its diverse socio-economic and cultural realities.
This paper further proposes a rights-based approach to AI regulation, one that aligns with India’s constitutional vision and international obligations. By proactively shaping a legal architecture that balances innovation with accountability, India can lead the Global South in developing an ethical, inclusive, and sustainable AI governance model. This approach not only addresses the risks of AI but also creates an enabling environment for safe and beneficial technological advancement.
Use of Legal Jargon
The advent of Artificial Intelligence (AI) necessitates a robust legal framework to address the constitutional, statutory, and jurisprudential vacuum surrounding its governance in India. At present, the regulation of AI is subsumed under general statutes such as the Information Technology Act, 2000 and the Digital Personal Data Protection Act, 2023, which are inadequate to address the sui generis challenges posed by AI-driven technologies. These include algorithmic opacity, automated decision-making, and the delegation of state functions to non-human entities, which create complexities in attributing mens rea and establishing vicarious liability.
One of the foremost legal concerns is the lack of due process in algorithmic decisions, particularly in high-stakes areas such as healthcare, criminal justice, and financial services. AI systems often operate as black boxes, making it difficult to meet the threshold of procedural fairness under Article 14 of the Constitution of India. This undermines the principles of audi alteram partem and non-arbitrariness, which are cornerstones of administrative law. Moreover, the deployment of AI in surveillance and predictive policing raises concerns under Article 21, particularly in light of the right to privacy upheld in Justice K.S. Puttaswamy v. Union of India (2017), which mandates a test of legality, necessity, and proportionality for state encroachments on privacy.
The absence of a statutory authority to oversee AI operations leads to regulatory arbitrage and undermines data sovereignty. Additionally, without legislative guidance, private entities exploiting AI are rarely held accountable under tortious liability doctrines such as negligence or strict liability, owing to the ambiguity of causation and foreseeability in AI-related harms. The traditional concepts of product liability are also strained when applied to dynamic, self-learning systems that evolve post-deployment.
International legal instruments such as the OECD AI Principles, UNESCO’s Recommendation on the Ethics of Artificial Intelligence, and the European Union’s AI Act advocate for a risk-based, rights-centric regulatory architecture. These documents emphasise ex ante risk assessments, human-in-the-loop mechanisms, and explainability—concepts which Indian law must begin to incorporate.
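To make the human-in-the-loop requirement concrete, the following is a minimal sketch of how such a gate might sit inside an automated decision pipeline. Everything here is an assumption for illustration: the model, the confidence threshold, and the function names are hypothetical, and no real system or statutory standard is implied.

```python
# Minimal sketch of a human-in-the-loop decision gate.
# All names and thresholds are hypothetical illustrations.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.90  # below this, a human must decide

@dataclass
class Decision:
    outcome: str       # "approve" or "refer_to_human"
    confidence: float
    explanation: str   # plain-language reason, supporting explainability

def score_application(features: dict) -> tuple[float, float]:
    """Stand-in for a trained model: returns (approval_probability, confidence)."""
    prob = 0.8 if features.get("income", 0) > 500_000 else 0.3
    confidence = 0.95 if abs(prob - 0.5) >= 0.3 else 0.6
    return prob, confidence

def decide(features: dict) -> Decision:
    prob, confidence = score_application(features)
    # The gate: uncertain or adverse outcomes are never finalized
    # by the machine alone.
    if confidence < CONFIDENCE_THRESHOLD or prob < 0.5:
        return Decision("refer_to_human", confidence,
                        "Routed to a human officer for review.")
    return Decision("approve", prob,
                    "Income above the illustrative approval threshold.")

print(decide({"income": 300_000}))  # -> refer_to_human
```

The design point is that the law would fix when a human must intervene (low confidence, adverse outcomes), while the recorded explanation supports the explainability obligation.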
Prospective Indian legislation on AI should define “high-risk AI” and mandate impact assessments, regulatory sandboxes, and compliance obligations rooted in proportionality and necessity. The law must also establish a central AI Regulatory Authority, vested with quasi-judicial powers, to monitor, audit, and enforce ethical AI deployment. Further, the maxim that “code is law” must be reinterpreted through a constitutional lens to ensure that private algorithms do not exercise unregulated coercive power akin to state action.
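A risk-tiered statute along these lines could be operationalized in compliance tooling roughly as follows. This is a hedged sketch: the tier names and example use cases are assumptions loosely inspired by the EU AI Act’s risk-based approach, not provisions of any existing Indian law.

```python
# Hypothetical risk-tier classifier for a compliance checklist.
# Tiers and use cases are illustrative assumptions only.
PROHIBITED_USES = {"state_social_scoring"}
HIGH_RISK_USES = {
    "facial_recognition", "predictive_policing",
    "credit_scoring", "automated_medical_diagnosis",
}

def compliance_tier(use_case: str) -> str:
    if use_case in PROHIBITED_USES:
        return "prohibited: deployment barred outright"
    if use_case in HIGH_RISK_USES:
        return ("high-risk: ex ante impact assessment, audit trail, "
                "and human oversight required")
    return "minimal-risk: transparency notice suffices"

print(compliance_tier("credit_scoring"))
```

Proportionality maps naturally onto such tiers: compliance obligations scale with the severity of the rights at stake rather than applying uniformly to every AI system.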
In conclusion, AI regulation in India must evolve from mere techno-legal compliance to a rights-based governance model. Embedding ethical accountability into enforceable legal obligations is vital to ensuring that innovation does not eclipse constitutional guarantees. The principle of salus populi suprema lex esto—the welfare of the people shall be the supreme law—must guide the legal architecture for AI in India.
The Proof
The necessity for regulating Artificial Intelligence in India is strongly evidenced by recent incidents and empirical studies that underscore the risks of unregulated AI deployment. In 2021, the Internet Freedom Foundation (IFF) raised concerns over the indiscriminate use of Facial Recognition Technology (FRT) by law enforcement in India without legal sanction. Delhi Police’s FRT system reportedly had an accuracy rate of less than 2% in identifying individuals, which resulted in multiple instances of wrongful identification—violating the right to privacy and due process under Article 21 of the Constitution.
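Simple arithmetic shows why low-precision face matching produces wrongful identifications at scale. The figures in this sketch are illustrative assumptions, not the Delhi Police system’s actual parameters:

```python
# Why even a small per-comparison error rate overwhelms true matches
# when searching a large gallery. All numbers are hypothetical.
gallery_size = 1_000_000          # faces in a watchlist database
false_match_rate = 0.001          # 0.1% false-match chance per comparison
true_matches_in_gallery = 10      # genuine matches for one probe image

expected_false_matches = gallery_size * false_match_rate
print(f"Expected false matches per search: {expected_false_matches:.0f}")
# -> roughly 1,000 false hits against at most 10 true ones, so the
#    vast majority of "matches" would point at innocent people.
```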
Similarly, reports by NITI Aayog and the Vidhi Centre for Legal Policy have highlighted algorithmic opacity in AI applications used in the financial sector, where loan approvals and insurance risk assessments are increasingly automated. When audited, these systems revealed discriminatory patterns against marginalized communities, demonstrating how AI can unintentionally reinforce societal biases embedded in its training datasets.
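One standard audit technique is the disparate impact ratio, borrowed from the “four-fifths rule” of US anti-discrimination practice. The sketch below applies it to loan approval data; the numbers and group labels are invented purely for illustration.

```python
# Minimal disparate impact audit for automated loan approvals.
# Data and group labels are hypothetical.
approvals = {
    "group_a": (1000, 620),   # (applications, approvals)
    "group_b": (1000, 410),
}

rates = {g: approved / total for g, (total, approved) in approvals.items()}
ratio = min(rates.values()) / max(rates.values())

print(f"Approval rates: {rates}")
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:   # the four-fifths rule of thumb
    print("Potential disparate impact: the system warrants review.")
```

A statutory impact-assessment duty could require deployers to publish exactly this kind of metric before and after deployment.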
Further, in the private sector, AI chatbots and customer service tools have been found to collect user data without consent or informed notice, breaching the principles laid down in the Puttaswamy judgment and the Digital Personal Data Protection Act, 2023. The lack of transparency and grievance redressal mechanisms compounds the problem.
Internationally, India ranked low in the Government AI Readiness Index (Oxford Insights, 2022) on legal and ethical preparedness, which further substantiates the legislative vacuum. These factual instances provide compelling proof that without a regulatory regime, India risks not only technological misuse but also constitutional infringements.
In essence, the evidence clearly supports that AI in India is already being misapplied or under-regulated in ways that have tangible legal, social, and ethical consequences, making statutory regulation not just desirable, but indispensable.
Case Laws
1. Justice K.S. Puttaswamy (Retd.) v. Union of India (2017) – Right to Privacy
This landmark nine-judge bench judgment recognized the right to privacy as a fundamental right under Article 21 of the Constitution. The Court emphasized informational self-determination, data protection, and consent. Its relevance to AI lies in establishing that any automated system collecting or processing personal data must pass the tests of legality, necessity, and proportionality. Unregulated AI tools like facial recognition or predictive analytics can infringe on these privacy rights, making this case foundational in demanding legislative safeguards around AI-based data handling.
2. Shreya Singhal v. Union of India (2015) – Freedom of Speech and Algorithmic Censorship
This judgment struck down Section 66A of the IT Act for being vague and overbroad, holding that restrictions on free speech must be reasonable and constitutionally valid. With AI increasingly used for content moderation and censorship on digital platforms, this case sets the precedent that algorithmic restrictions must not be arbitrary. It calls for transparency and human oversight in any AI-driven speech regulation, warning against unchecked private governance by algorithms.
3. K.S. Puttaswamy v. Union of India (Aadhaar Case, 2018) – Data Protection and State Surveillance
In this follow-up to the 2017 privacy ruling, the Supreme Court emphasized that data collection by the state must be backed by just, fair, and reasonable law. The use of AI for state surveillance, profiling, or welfare targeting must therefore meet strict constitutional scrutiny. The judgment also reinforced the importance of data minimization and purpose limitation, principles critical to any AI regulatory framework.
4. Avinash Mehrotra v. Union of India (2009) – Precautionary Principle
Though decided in the context of school fire safety rather than the environment, this case embodied the precautionary principle, holding that where harm is foreseeable, preventive steps must be taken even amid uncertainty. AI technologies, being experimental and evolving, can cause significant and unintended societal harm. Courts may apply this principle to support preemptive AI regulation, especially in sensitive sectors like policing, healthcare, or education.
5. State of Punjab v. Gurmit Singh (1996) – Protection of Vulnerable Populations
While primarily a case on the treatment of rape survivors in court, the ruling emphasized state responsibility in protecting vulnerable groups. In the AI context, this can be extended to demand that algorithms not reinforce existing inequalities or discrimination—such as against women, minorities, or the disabled. AI systems must be assessed for disparate impact and algorithmic bias, aligning with constitutional protections for equality.
6. Zoroastrian Cooperative v. District Registrar (2005) – Public Interest vs. Private Autonomy
This case dealt with balancing private rights against broader public interest. As AI is largely developed by private corporations, yet has far-reaching social impacts, this case supports the argument that the state has a duty to intervene and regulate private actors when public interest is at stake—especially to safeguard fundamental rights from algorithmic harm.
Conclusion
The rise of Artificial Intelligence in India signifies a transformative leap in technological advancement, yet its unchecked proliferation poses severe legal, ethical, and societal risks. As AI increasingly influences governance, public services, and private decision-making, the absence of a comprehensive regulatory framework creates a dangerous vacuum—where individual rights, especially privacy, equality, and due process, stand vulnerable to erosion. The existing legal infrastructure, including the IT Act, 2000 and the Digital Personal Data Protection Act, 2023, is insufficient to govern the dynamic and autonomous nature of AI systems, particularly when they operate without transparency or human oversight.
The use of AI in surveillance, law enforcement, financial systems, and automated recruitment is already demonstrating tangible harms—ranging from algorithmic bias to violations of privacy and procedural fairness. These concerns are not speculative; they are evident through real incidents, constitutional challenges, and expert reports. Comparative legal models such as the European Union’s AI Act offer valuable insights into risk-based, rights-centric regulation, which India must adapt to its socio-political and economic context.
Therefore, it is imperative for India to adopt a forward-looking, sector-specific, and principle-based AI legal framework. Such regulation should mandate algorithmic transparency, ensure accountability through legal standards, establish a central regulatory authority, and integrate ethical principles such as fairness, non-discrimination, and explainability. Rather than stifling innovation, a robust legal system can foster responsible innovation—building public trust and ensuring long-term sustainability of AI in the Indian ecosystem.
In conclusion, the legal regulation of AI in India is not merely a policy choice but a constitutional necessity. The law must evolve alongside technology, ensuring that the promise of AI is realized without compromising the fundamental rights and dignity of individuals. Only then can India become a global leader in ethical and inclusive AI governance.
FAQs
1. What is Artificial Intelligence (AI)?
AI refers to computer systems designed to perform tasks that typically require human intelligence, such as decision-making, pattern recognition, and learning.
2. Is there a specific law for AI in India?
No, India does not currently have a dedicated AI law. AI is loosely regulated under existing frameworks like the IT Act, 2000 and the Digital Personal Data Protection Act, 2023.
3. Why is AI regulation necessary in India?
Regulation is essential to protect privacy, prevent algorithmic discrimination, ensure transparency, and safeguard fundamental rights.
4. What are the legal risks of unregulated AI?
Unregulated AI may lead to biased decisions, misuse of personal data, lack of accountability, and potential violations of Articles 14 and 21 of the Constitution.
5. How does AI affect privacy rights?
AI systems collect and process vast amounts of data. Without safeguards, this can infringe on the right to privacy as established in the Puttaswamy judgment.
6. Can AI systems be held legally accountable?
Currently, AI lacks legal personhood. Liability usually falls on developers or deployers, but the absence of clear laws makes accountability difficult.
7. What sectors are most affected by AI in India?
Key sectors include law enforcement, finance, healthcare, education, and governance—often involving high-risk automated decision-making.
8. What is a “high-risk” AI system?
These are AI applications that can significantly impact rights, such as facial recognition, predictive policing, and autonomous medical tools.
9. What international models can India learn from?
India can draw from the EU AI Act, OECD Principles, and UNESCO’s AI ethics guidelines for rights-based, risk-tiered regulation.
10. Will regulation hinder AI innovation?
No. Regulation can promote responsible innovation by providing legal clarity, building trust, and preventing harmful misuse.
