Author: Kanak, a student of SGT University
Abstract
Artificial Intelligence (AI) is increasingly shaping governance, industries, and everyday life in India. Yet, the country still lacks a comprehensive legal framework tailored to AI. This absence has created a regulatory void in crucial domains such as privacy, liability, intellectual property, and accountability. This paper explores these shortcomings, examines international approaches, and suggests that instead of being a weakness, this gap can become a strategic opportunity. India can seize this moment to design a rights-based, innovation-friendly AI law that safeguards its citizens while encouraging technological growth.
To the Point
India does not currently have an AI-specific legal framework. Existing laws such as the Information Technology Act, 2000, and the Digital Personal Data Protection Act, 2023, are not sufficient for AI regulation.
Risks of unregulated AI include: privacy violations, algorithmic bias, absence of accountability, misuse of deepfakes, and potential threats to democratic values.
A risk-based framework—similar to the European Union’s AI Act—offers a viable model but must be adapted to India’s constitutional and socio-economic context.
Indian courts, through landmark rulings like Puttaswamy (privacy) and Shreya Singhal (free speech), have already highlighted constitutional safeguards that can guide AI lawmaking.
Use of Legal Jargon
Regulatory Vacuum → The absence of AI-specific laws in India.
Algorithmic Accountability → The principle of assigning liability for harms caused by AI-driven decisions.
Autonomous Liability Doctrine → A proposed framework to hold AI developers or deployers responsible for damages caused by autonomous systems.
Data Sovereignty → India’s right to control and regulate citizens’ data processed by AI technologies.
Risk-Based Regulation → A system that differentiates AI use cases based on risk levels, e.g., high-risk (healthcare, policing) vs. low-risk (chatbots, digital assistants).
The Proof
Fact: No AI-specific legislation exists in India as of 2025.
Policy Direction: NITI Aayog’s Responsible AI for All (2021) emphasized fairness, explainability, accountability, and transparency.
International Benchmarks:
EU AI Act (2024): A detailed risk-based framework regulating high-risk AI applications.
US Blueprint for an AI Bill of Rights (2022): A rights-based approach emphasizing privacy, transparency, and protection from algorithmic discrimination.
China’s AI Rules (e.g., the Interim Measures for Generative AI Services, 2023): A heavily state-controlled model that prioritizes surveillance and government objectives.
India’s Strategic Edge: Unlike early adopters, India has the flexibility to design a hybrid framework that integrates safeguards with an innovation-first approach.
Case Laws
Justice K.S. Puttaswamy v. Union of India (2017): Recognized the Right to Privacy as a fundamental right under Article 21, requiring AI governance to uphold this principle.
Shreya Singhal v. Union of India (2015): Struck down Section 66A of the IT Act as unconstitutionally vague and overbroad, underscoring the dangers of imprecise regulation and offering an important lesson for AI policy.
Anuradha Bhasin v. Union of India (2020): Applied the proportionality standard to restrictions on internet access, a standard that can equally guide AI oversight.
(Though no AI-specific case laws exist in India yet, these judgments lay a constitutional foundation for shaping future AI laws.)
Conclusion
India is currently in a state of legal uncertainty when it comes to AI regulation. However, this vacuum is not merely a gap—it is also a strategic opening. By introducing a Digital India AI Act, the country can:
Establish global standards for ethical and inclusive AI governance.
Create a rights-based framework aligned with constitutional principles such as privacy and free speech.
Implement risk-sensitive regulation, distinguishing between high-risk and low-risk AI systems.
Adopt a forward-looking legal structure that anticipates rapid technological changes rather than playing catch-up.
If carefully designed, such a law will not only safeguard citizens’ rights but also ensure that AI is harnessed as a force for empowerment, innovation, and equitable progress rather than as a tool of exploitation or unchecked risk.
FAQ
Q1. Does India have an AI law?
No. India has no dedicated AI regulation as of 2025. Existing laws (IT Act, 2000; DPDP Act, 2023) cover related issues but are inadequate.
Q2. Why is AI regulation important in India?
Because of risks like privacy violations, bias, accountability gaps, job displacement, and national security threats.
Q3. What global models can India follow?
The EU AI Act (2024) for risk-based regulation, the US Blueprint for an AI Bill of Rights (2022) for a rights-based approach, and the OECD AI Principles.
Q4. What role does the judiciary play?
Constitutional principles like privacy, free speech, and proportionality will shape judicial scrutiny of AI laws.
Q5. Is AI an opportunity or threat for India?
Both. Without regulation, AI can be misused. With proper laws, India can emerge as a global leader in ethical AI governance.
