Author: Yash Suresh Khiste, The Manikchand Pahade Law College, Chhatrapati Sambhaji Nagar, Maharashtra
Abstract
Artificial Intelligence (AI) has transformed data collection, analysis, and decision-making across sectors. However, the growing use of AI in India raises serious concerns about the protection of personal data, consent, algorithmic bias, and surveillance. India lacks a comprehensive legal framework regulating AI or ensuring individual privacy against automated decision-making. This article examines the interplay between emerging AI technologies and the existing privacy legal framework in India, including constitutional protections and the Digital Personal Data Protection Act, 2023. The paper also evaluates international standards and provides suggestions for a robust AI law in India.
Introduction
Artificial Intelligence is revolutionizing sectors such as finance, healthcare, and law enforcement. These technologies often depend on massive sets of personal data. In India, where digital infrastructure is rapidly expanding, the legal safeguards for privacy in AI-driven data processing are still developing. Despite the recognition of privacy as a fundamental right in the Justice K.S. Puttaswamy case, there is no specific legislation governing the ethical use of AI. This article explores the gap between the growing influence of AI and the absence of privacy-specific AI regulation.
Main Body
Legal Background in India
Constitutional Right to Privacy: Read into Article 21 (protection of life and personal liberty).
Justice K.S. Puttaswamy v. Union of India (2017): Privacy declared a fundamental right.
Information Technology Act, 2000: Addresses cyber security and limited data protection, but is outdated for AI-era concerns.
Digital Personal Data Protection Act, 2023 (DPDP Act): Governs the processing of digital personal data but contains no AI-specific regulation.
Key Concerns
Lack of Consent Mechanism in AI systems.
Opaque Algorithms: No explanation or transparency in AI decisions.
Surveillance Risks: Use of facial recognition and data profiling.
Bias and Discrimination: Automated decision-making leading to exclusion or unfair treatment.
Global Perspective
EU AI Act: Risk-based regulatory framework.
OECD AI Principles: Transparency, accountability, human rights focus.
US Blueprint for an AI Bill of Rights (2022): Non-binding, ethical, and people-centered approach.
Legal Loopholes
No definition or regulation of automated decision-making in Indian law.
DPDP Act, 2023 does not cover algorithmic accountability or AI ethics.
No regulatory authority for AI standards or redressal mechanism.
Case Laws and Reports
K.S. Puttaswamy v. Union of India (2017) – Established right to privacy.
Internet and Mobile Association of India v. RBI (2020) – Struck down a blanket banking ban on virtual currency businesses as disproportionate, illustrating the limits of regulatory overreach in the digital economy.
Justice B.N. Srikrishna Committee Report (2018) – Suggested principles for data protection.
Suggestions
Introduce AI-specific legislation with ethical use, accountability, and human oversight.
Define terms like algorithmic decision-making, data profiling, and automated systems.
Establish an AI regulatory authority.
Incorporate human rights principles into tech law.
Conclusion
As India progresses in digital and AI technologies, it must balance innovation with individual rights. The current legal vacuum around AI and privacy may lead to violations of autonomy and dignity. A proactive legal framework is the need of the hour to govern AI fairly, ethically, and constitutionally.
FAQs
Q1: Does India have an AI law?
No. India has no dedicated law regulating AI as of 2024.
Q2: What protects privacy in AI usage in India?
Currently, constitutional protection under Article 21 and the DPDP Act, 2023 provide only limited coverage.
Q3: Can AI decisions be challenged in court?
Legally, yes—but only if the individual knows AI was involved, which is a grey area without AI-specific laws.
Q4: What is the global standard for AI regulation?
The EU AI Act and OECD Principles are considered leading frameworks.
References
Justice K.S. Puttaswamy v. Union of India (2017)
Digital Personal Data Protection Act, 2023
Srikrishna Committee Report, 2018
EU Artificial Intelligence Act (2021 Draft)
OECD AI Principles