
Legal and Ethical Issues in Regulation of Artificial Intelligence in India


Author: Monalisa Chaudhari, National Law University Nagpur

Abstract

Artificial Intelligence (AI) is rapidly transforming industries, governments, and societies around the world. The development and deployment of this technology raise significant legal and ethical issues, including accountability, algorithmic bias, data privacy, and liability. While jurisdictions such as the European Union are drafting specific AI regulations, India does not yet have an overarching legal framework to regulate the technology. This article examines the challenges of regulating AI in India, reviews relevant global and domestic developments, discusses landmark case law, and suggests a way forward that balances innovation with societal interests.

To The Point

AI systems increasingly influence critical areas such as healthcare, law enforcement, finance, and governance. But these systems carry risks: discriminatory outcomes produced by biased algorithms, a lack of transparency, and threats to individual privacy. For India, where digital adoption is soaring, the absence of a specific AI regulatory framework magnifies these risks. Existing legislation such as the Information Technology (IT) Act, 2000, and emerging drafts like the Digital Personal Data Protection Bill, 2022, do not adequately address AI-specific challenges. Without proactive regulation, India risks creating legal uncertainty that stifles innovation, jeopardizes human rights, and undermines its vision of becoming a global AI hub.

The Proof
Global AI Regulation Trends
European Union: The European Union is taking the lead through its proposed Artificial Intelligence Act. The Act adopts a risk-based approach, imposing stringent requirements on transparency, accountability, and compliance for high-risk AI applications, such as those used in healthcare or law enforcement.


United States: The United States has taken a sector-specific approach to AI regulation. There is no federal AI law; instead, individual agencies regulate AI applications within their domains, such as healthcare (FDA) and autonomous vehicles (NHTSA). The proposed Algorithmic Accountability Act, which has not yet been passed, aims to bring greater transparency to automated decision-making.

China: China’s regulatory strategy focuses on state control and security. The Chinese government enforces strict data laws and ethical guidelines to align AI with national priorities. In 2023, it issued comprehensive rules on generative AI, highlighting its focus on monitoring AI-generated content.





AI in the Indian Context
AI adoption is growing across various industries in India, such as agriculture, healthcare, and governance. Still, there is a lack of specific legislation for AI, which leaves many issues unaddressed:
Algorithmic Bias: AI systems used in hiring, credit evaluation, and law enforcement can perpetuate societal biases. Predictive policing systems, for example, may disproportionately target marginalized communities.
Lack of Accountability: Liability for harm caused by autonomous systems, such as self-driving cars, remains a grey area.
Privacy Issues: AI-based surveillance threatens individual privacy and, with it, the protection guaranteed by Article 21 of the Constitution.
Economic Inequality: Unequal access to AI technologies could further exacerbate existing socio-economic inequalities.

Use of Legal Jargon

AI regulation in India intersects with fundamental constitutional principles such as the right to privacy (Article 21), equality before the law (Article 14), and freedom of expression (Article 19). The lack of clear legal definitions for terms such as “autonomous systems” or “algorithmic decision-making” complicates the application of tort law and liability jurisprudence. Moreover, regulatory gaps raise questions about the admissibility of AI-generated evidence under procedural laws such as the Indian Evidence Act, 1872.



Case Laws


1. Justice K.S. Puttaswamy (Retd.) v. Union of India (2017):
This landmark judgment affirmed that the right to privacy is a fundamental right under Article 21 of the Indian Constitution. It underscores the need for robust legal safeguards against abusive uses of AI such as facial recognition and surveillance.
2. Shreya Singhal v. Union of India (2015):
The Supreme Court of India struck down Section 66A of the IT Act as vague and overbroad. The case illustrates why clear and precise definitions in AI-related laws are important to prevent misuse or arbitrary enforcement.
3. State v. Loomis (Wisconsin Supreme Court, 2016):
Although not an Indian case, Loomis illustrates the dangers of opaque algorithms. The court upheld a sentencing decision that relied on an AI risk-assessment tool but noted concerns about its lack of transparency. Indian courts may face similar dilemmas as AI adoption in judicial processes increases.
4. Anvar P.V. v. P.K. Basheer (2014):
This case clarified the rules governing the admissibility of electronic evidence under the Indian Evidence Act. The growing reliance on AI-generated data poses further challenges for Indian courts in assessing its credibility and authenticity.


Challenges in Regulating AI in India
1. The Absence of a Cohesive Framework:
India’s existing laws, including the IT Act, 2000, and the draft Digital Personal Data Protection Bill, do not address AI-specific issues. These laws focus primarily on data protection and cybercrime but fall short in areas like algorithmic accountability or AI ethics.

2. Defining Accountability:
Who is liable when an autonomous AI system causes harm? Traditional tort law principles, such as vicarious liability and strict liability, are ill-equipped to answer this question. For example, if an AI-driven medical diagnosis tool produces incorrect results, should the developer, the healthcare provider, or the AI system itself be held liable?

3. Algorithmic Bias and Discrimination:
AI systems often inherit biases from their training data. For instance, an AI-powered hiring tool trained on biased historical data might discriminate against women or marginalized communities, denying them the right to equality under Article 14.

4. Privacy and Surveillance:
AI-driven technologies, such as facial recognition, drones, and predictive policing, can violate the privacy rights of individuals. For instance, the Delhi Police’s facial recognition system, designed for crowd control, raises concerns about potential misuse and a lack of accountability.

5. Transparency and Explainability:
AI algorithms are often treated as “black boxes,” whose decision-making processes are not transparent. This opacity creates problems for judicial review and for compliance with the principles of natural justice (an illustrative sketch follows this list of challenges).

6. Impact on Employment:
Automation and AI may displace jobs, especially in labor-intensive industries. Though primarily an economic issue, it also has legal dimensions involving labor laws and workers’ rights.
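
To make the “black box” concern in point 5 concrete, the following is a minimal, purely illustrative sketch in Python of what an explainable automated decision could look like. The factors, weights, and threshold are hypothetical, not drawn from any real system: the point is simply that each factor’s contribution to the outcome is recorded, so a reviewer or a court can see why the system decided as it did, whereas a black-box system would output only the final decision.

```python
# A minimal sketch of an "explainable" automated decision in plain Python.
# The factors, weights, and threshold below are hypothetical and purely
# illustrative; real systems are far more complex.

# Hypothetical weights of a simple loan-scoring model (assumed values).
WEIGHTS = {"monthly_income": 0.5, "repayment_history": 0.4, "existing_debt": -0.3}
APPROVAL_THRESHOLD = 0.6

def score_with_explanation(applicant):
    """Return the decision together with each factor's contribution,
    so the reasoning behind the outcome can be reviewed later."""
    contributions = {factor: WEIGHTS[factor] * applicant[factor] for factor in WEIGHTS}
    total = sum(contributions.values())
    return total >= APPROVAL_THRESHOLD, contributions

decision, reasons = score_with_explanation(
    {"monthly_income": 0.8, "repayment_history": 0.9, "existing_debt": 0.5}
)
print("Approved:", decision)                   # Approved: True
for factor, contribution in reasons.items():
    print(f"  {factor}: {contribution:+.2f}")  # each factor's weight in the outcome
```

Mandating this kind of decision-level record-keeping is one way legislation could operationalise the natural justice concerns discussed above.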

Solutions and Recommendations
1. Comprehensive AI Law
India requires an AI-specific regulatory framework that defines key terms, establishes accountability, and mandates transparency. This law should:
Mandate algorithmic audits to detect and mitigate biases (see the sketch below this list).
Introduce liability clauses for harm caused by autonomous systems.
Include safeguards for data protection and privacy.
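
As an illustration of what a mandated algorithmic audit might involve, the sketch below (in Python, using hypothetical hiring data) computes group-wise selection rates and a disparate impact ratio. The “four-fifths” threshold used here is a rule of thumb borrowed from US employment-discrimination practice, not a requirement of Indian law, and the group names and outcomes are invented for the example.

```python
# A minimal sketch of one common algorithmic-audit check: comparing selection
# rates across groups and flagging disparate impact under the "four-fifths"
# rule of thumb. Group names and outcomes below are hypothetical examples.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, selected) pairs -> selection rate per group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """Ratio of the lowest to the highest group selection rate.
    A ratio below 0.8 is a common (non-statutory) red flag for bias."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical audit data: (group, whether the AI system selected the candidate).
audit_log = [("group_a", True)] * 45 + [("group_a", False)] * 55 \
          + [("group_b", True)] * 25 + [("group_b", False)] * 75

ratio, rates = disparate_impact_ratio(audit_log)
print("Selection rates:", rates)               # group_a: 0.45, group_b: 0.25
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.56 -> below 0.8, flag for review
```

A statutory audit duty could require deployers to produce such group-wise outcome statistics periodically and to remediate systems that fall below a defined fairness threshold.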

2. Strengthen Data Protection Laws:
The Digital Personal Data Protection Bill should be reformed to explicitly cover AI-driven data processing and to meet international standards such as the EU’s General Data Protection Regulation (GDPR).


3. Advance AI Ethics:
AI ethics guidelines should be made legally binding and should include principles such as accountability, fairness, and non-discrimination. Industry-specific guidelines can be developed for sectors like healthcare and fintech to address sectoral challenges.

4. Encourage International Collaboration:
India should engage with global efforts to harmonize AI regulation. Working with the OECD and aligning with the framework proposed under the EU’s AI Act would provide valuable lessons.

5. Develop Capacity-Building Programs:
AI literacy among regulators, policymakers, and the judiciary is essential for the effective enforcement of AI laws; dedicated capacity-building programs can provide it.

6. An AI Oversight Authority
A regulatory body dedicated to AI, much like the Data Protection Authority proposed under the Personal Data Protection Bill, could enforce compliance, redress grievances, and monitor emerging risks.





Conclusion
Artificial Intelligence offers India unparalleled opportunities as well as enormous challenges. Fostering innovation must not come at the expense of society’s core rights and values. A well-balanced regulatory framework would ensure that AI technologies are developed and deployed responsibly, to the benefit of all stakeholders. By learning from global best practices and adapting them to its socio-economic context, India can position itself as a leader in ethical AI governance.

FAQ
Q1. Why is AI regulation important for India?
AI regulation ensures that these technologies are deployed responsibly, protecting people from privacy violations, discrimination, and job displacement while still encouraging innovation.

Q2. How does AI affect fundamental rights?
Surveillance and predictive policing through AI technologies can breach privacy (Article 21) and equality (Article 14) if not well regulated.

Q3. What are the key global AI regulations India can learn from?
India can learn from the EU’s risk-based AI Act, the U.S. sector-specific approach, and China’s focus on state control and ethical guidelines.

Q4. What role can Indian courts play in AI regulation?
Indian courts can set precedents on issues like algorithmic bias, liability, and admissibility of AI-generated evidence, guiding legislative efforts.

Q5. How can AI laws address algorithmic bias?
Mandatory algorithmic audits and transparency requirements can detect and mitigate biases, ensuring fairness and compliance with constitutional principles.
