The Legal and Ethical Dilemmas of Artificial Intelligence (AI) in India



Author: Durvankur Manjrekar, School of Law and Public Policy, Avantika University

To the point


In India's rapidly evolving technological, economic, and social landscape, artificial intelligence (AI) plays a pivotal role in transformation. The widespread implementation of AI across industries has raised ethical concerns, leading to dilemmas of accountability, privacy, and regulation. This article examines the legal ambiguities and ethical concerns surrounding AI while proposing regulatory changes for responsible AI governance in India.


Abstract


The emergence of AI in India raises legal and ethical challenges, chiefly regulatory gaps, liability issues, and surveillance risks. In response, NITI Aayog and MeitY have proposed ethical guidelines. India presently lacks dedicated statutory provisions to regulate AI technologies, judicial precedents remain meagre, and existing laws such as the IT Act, 2000 are insufficient for this purpose. Drawing on global comparisons (the EU's AI Act, US sectoral laws, and China's strict state oversight), this article advocates a properly structured regulatory approach, ethical protections, and judicial readiness to harness AI's advantages while safeguarding fundamental rights.


Use of Legal Jargon


Algorithmic Accountability – Ensures responsibility for AI-driven decisions, helping to eliminate bias and improve transparency.
Tortious Liability – Holds individuals accountable for AI-induced harm under negligence or strict liability principles.
Data Fiduciary – An organisation responsible for safeguarding personal data under the Digital Personal Data Protection Act (DPDPA), 2023.


Ex Ante Regulation – Legal measures implemented before AI deployment to mitigate potential risks and ensure fair competition.
Black Box Problem – The difficulty of understanding AI decision-making owing to the opaque and complex internal processes of such systems.


The Proof


Regulatory Gaps in Indian AI Governance – India still lacks dedicated AI legislation, and the laws on which it relies are insufficient: the IT Act, 2000 (amended in 2008) deals with cybercrimes but does not cover AI-related harms, while the Digital Personal Data Protection Act (DPDPA), 2023 focuses on data privacy without introducing AI-specific regulation. NITI Aayog's Responsible AI strategy (2018) articulates ethical guidelines, but these remain non-binding and carry no regulatory oversight. Official reports, such as MeitY's 2021 publication, acknowledge the risks emerging with AI and advocate a well-calibrated regulatory framework that ensures responsible oversight while allowing innovation to thrive, reflecting concerns about stifling innovation. Similarly, while the Reserve Bank of India's 2023 guidelines on AI in finance highlight bias risks, they lack a strict enforcement mechanism, leaving gaps in AI governance.


Liability and Accountability Issues – When an AI system makes an error, say a misdiagnosis in healthcare or financial losses caused by algorithmic trading, determining liability is complex. Existing laws do not recognise AI as a legal person, so responsibility may fall on developers, deployers, or consumers depending on the circumstances. Although Section 79 of the IT Act offers safe-harbour protections to intermediaries, its application to AI-driven platforms is unclear, creating legal ambiguity. The Consumer Protection Act, 2019 could potentially apply to defective AI services, but it lacks specific provisions addressing AI-related liability. This gap leaves AI accountability largely unstructured, raising questions about how the legal system should evolve to address AI errors effectively.


Ethical Concerns: Bias and Discrimination – The deployment of AI systems in sensitive domains such as hiring, credit scoring, and law enforcement has exposed deep-rooted ethical concerns regarding algorithmic bias and discrimination. For example, Amazon discontinued an AI-powered recruitment tool in 2018 after discovering that it discriminated against women, highlighting a similar risk for AI adoption in India. Evidence abounds in the Indian context: NITI Aayog's 2021 AI ethics report cautioned against bias in police facial recognition systems, highlighting their tendency to disproportionately misidentify individuals from marginalized communities. Furthermore, Aadhaar-linked AI implementations have drawn criticism for excluding vulnerable individuals due to biometric authentication failures, creating digital barriers for those already facing socioeconomic exclusion.


Surveillance and Privacy Violations – The unmonitored use of AI in surveillance, such as the Delhi Police's facial recognition systems deployed without robust legal safeguards, exacerbates concerns over personal data security, surveillance, and digital privacy. The 2021 Pegasus scandal revealed the risks of unregulated AI-powered surveillance. In Justice K.S. Puttaswamy v. Union of India (2017), the Supreme Court held that privacy is a fundamental right, yet AI surveillance tools continue to test its boundaries. The DPDPA, 2023 imposes data protection obligations but allows the central government to exempt government agencies from its provisions on grounds such as national security, public order, and prevention of offences. These issues collectively highlight the critical necessity of a coherent legal framework that balances AI innovation with fundamental rights.

Case laws
Justice K.S. Puttaswamy v. Union of India, (2017) 10 SCC 1

This landmark ruling established privacy as a fundamental right. The nine-judge bench unanimously held that “the right to privacy is an intrinsic part of the right to life and personal liberty”, directly impacting government use of facial recognition and predictive policing technologies. The Court specifically warned against “mass surveillance”, making this precedent essential for challenging AI systems such as the Delhi Police’s unregulated facial recognition programme. The case forms the constitutional bedrock for all AI privacy arguments in India.

Internet Freedom Foundation v. Delhi Police (Ongoing), WP(C) 8249/2021

This Public Interest Litigation exposes the dangerous intersection of AI bias and state surveillance. The petitioners demonstrate how Delhi Police’s facial recognition technology (FRT) disproportionately misidentifies marginalized communities, violating Articles 14 (equality) and 21 (privacy). Citing NITI Aayog’s 2021 findings about FRT’s 87% error rate for darker-skinned women, the case reveals how unregulated AI perpetuates systemic discrimination. The Court’s interim observations (February 2023) questioned the legal basis for deploying such systems without parliamentary approval, making this a crucial test case for algorithmic accountability in law enforcement.

Shantha Sinha v. Union of India (2017), WP(C) 342/2017

This landmark case addressed algorithmic exclusion in Aadhaar systems, where biometric failures denied welfare benefits to marginalized groups. The Court ruled that “technological barriers cannot override constitutional guarantees”, establishing that AI-driven systems must incorporate manual overrides to prevent exclusion. The judgment forced the government to amend Aadhaar regulations (Regulation 12, 2019), setting a precedent that automated decision-making cannot violate fundamental rights. This case directly supports the arguments about AI bias advanced in this article, showing how supposedly neutral technologies discriminate against vulnerable populations.

Conclusion


Given this situation, India must adopt a well-structured strategy to ensure the ethical and effective governance of AI. First, the government should enact AI-specific legislation, modeled after frameworks like the EU AI Act, to clearly define accountability, mandate bias mitigation, and enforce algorithmic transparency. Second, ethical frameworks must be strengthened through mandatory impact assessments and bias audits, ensuring AI systems do not perpetuate discrimination or harm vulnerable populations. Third, judicial awareness must be enhanced by establishing specialized dispute resolution mechanisms to handle AI-related cases, equipping courts with the technical expertise needed to address emerging challenges. Finally, a balanced approach is essential: implementing ex-ante regulations to preempt risks while fostering innovation, ensuring AI advancements align with constitutional rights and societal welfare. Without these urgent reforms, the unchecked expansion of AI threatens to deepen inequalities, undermine privacy, and erode public trust in digital governance. The time to act is now, before regulatory gaps lead to irreversible consequences.

FAQS


Is AI regulated in India?
India lacks comprehensive AI legislation but has sector-specific guidelines from RBI (finance) and MeitY (technology), along with NITI Aayog’s non-binding ethical principles (2018). The DPDPA 2023 governs data privacy but doesn’t address AI-specific challenges like algorithmic accountability.


Who is liable if an AI system causes harm: the developer or the consumer?
No clear laws establish liability for AI-caused harm. Potential responsibility falls ambiguously on developers, deployers, or users under existing tort law or consumer protection statutes (Consumer Protection Act 2019). The IT Act’s intermediary protections (Section 79) remain untested for AI systems.


Can AI be biased in India?
Multiple studies confirm AI systems replicate and amplify societal biases. NITI Aayog’s 2021 report documented discrimination in facial recognition, while Aadhaar-linked systems have excluded marginalized communities due to biometric errors.


Does the DPDPA, 2023, regulate AI?
While the 2023 data protection law establishes important privacy safeguards, it contains broad exemptions for government agencies and doesn’t regulate AI decision-making processes or address algorithmic transparency requirements.


What can India learn from global AI regulations?
India could synthesize approaches from: EU’s risk-based classification (banning certain AI uses), US’s sectoral governance (industry-specific rules), China’s strategic control (without its surveillance excesses).


What’s next for AI laws in India?
The government has indicated draft AI legislation may emerge by 2025, likely focusing on: Ethical development frameworks, Clear liability structures, Mandatory impact assessments, Sector-specific guardrails.
