The Legal Labyrinth of Artificial Intelligence: Navigating Liability and Ethics in the Age of AI


Author: Avinash Pandey, IILM University


Abstract

Artificial Intelligence (AI) is no longer a futuristic concept; it is now an integral part of our daily lives, transforming how we work, communicate, and govern. This evolution, however, brings numerous legal and ethical challenges. From questions about accountability when AI causes harm to the need for privacy safeguards and transparency, the law struggles to keep pace. This article examines the current legal landscape surrounding AI in India, drawing insights from domestic case law and global approaches, and emphasizes the urgent need for comprehensive regulation that strikes the right balance between innovation and protection.


To the Point

In India and across the world, AI is already being used to automate judicial tasks, diagnose illnesses, and even predict crimes. Despite this rapid adoption, the legal framework remains scattered and outdated. Several critical questions demand answers:


1. Who is Liable? If an AI-enabled system makes a harmful decision—say, an autonomous vehicle causes an accident—who bears the responsibility? The developer, the owner, or the manufacturer?


2. Is AI Capable of Bias? Unfortunately, yes. AI can mirror the prejudices of its training data. Tools used in hiring or facial recognition have shown discriminatory outcomes, posing serious threats to equality rights.


3. What About Data Privacy? AI needs data to function. But without clear consent protocols and oversight, personal data can be misused. The Digital Personal Data Protection Act, 2023 addresses data rights, but doesn’t yet account for AI’s complexity.


4. Should AI Have Legal Status? There’s a debate about granting AI some form of legal personality—like corporations enjoy—to resolve liability concerns. However, critics contend that this may undermine human accountability.


5. Can AI Create Copyrighted Work? AI now generates art, music, and writing. But who owns it? The user who prompted the AI or the software developer?


6. Can We Understand How AI Decides? Most AI systems work like “black boxes,” giving no explanation for their outputs. This lack of transparency is dangerous in sectors like healthcare, finance, and law.


Legal Terms in Use:
Mens Rea: The mental intention to commit a crime—a tough concept to apply to AI, which doesn’t think like humans.


Strict Liability: Holds someone responsible for harm regardless of intent. It could be a model for AI-related injuries.


Respondeat Superior: Makes employers responsible for the actions of their employees. Could AI developers or owners be held similarly accountable?


Lex Informatica: Refers to legal norms emerging from digital practices—highly relevant in the AI era.


Due Diligence: The effort to avoid harm. Essential for developers who deploy AI systems.
Vicarious Liability: When one party is held liable for the actions of another. A concept courts may explore with AI use.


The Proof:


As reported by PwC, artificial intelligence has the potential to contribute approximately $15.7 trillion to the worldwide economy by the year 2030. In India, it’s already revolutionizing sectors like:


Traffic Management: AI cameras issue automated challans.


Banking: Loan eligibility is increasingly assessed through AI tools.


Healthcare: Diagnostic tools based on AI assist doctors in identifying diseases early.


Policing: In states such as Uttar Pradesh, predictive policing is currently undergoing testing.


Judiciary: The Supreme Court has launched SUPACE, an artificial intelligence tool designed to assist judges in conducting case research.
Yet, India lacks a dedicated law to govern AI.

Instead, AI-related issues are scattered across various laws:


The IT Act, 2000: Designed for digital crimes and e-commerce, but doesn’t cover AI complexities.


The DPDP Act, 2023: A solid start for data protection but still evolving.
Consumer Protection Act, 2019: Could potentially be used to address harm caused by AI-driven services.


Current Challenges:
There is no regulatory framework in place to evaluate AI prior to its deployment in real-world scenarios.


Absence of ethical guidelines for developers.


Inadequate enforcement mechanisms when AI goes wrong.


Extended Issues of Algorithmic Discrimination: One of the biggest dangers posed by AI is its potential to perpetuate systemic bias. For example, Amazon once scrapped an AI hiring tool after it was found to discriminate against women.

Similarly, facial recognition systems in the U.S. have been found to misidentify individuals of color at significantly higher rates. In a diverse country like India, such biases could deepen existing social inequalities if left unregulated. There is a pressing need for bias testing and the inclusion of diverse datasets during the development phase.
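The bias testing called for above can begin with something as simple as a disparate-impact check on a model's decisions. A minimal sketch in Python follows; the hiring data is entirely hypothetical, and the 0.8 threshold reflects the widely cited "four-fifths rule" from U.S. employment-testing guidelines, not any Indian statute:

```python
# Minimal sketch of a disparate-impact audit on a model's hiring decisions.
# All data here is hypothetical, for illustration only.

def selection_rate(decisions):
    """Fraction of applicants selected (1 = selected, 0 = rejected)."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower group selection rate to the higher one.
    Values below 0.8 are a common red flag for adverse impact."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical model outputs for two demographic groups
group_men   = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]   # 70% selected
group_women = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]   # 30% selected

ratio = disparate_impact_ratio(group_men, group_women)
print(f"Disparate impact ratio: {ratio:.2f}")   # 0.43 -- well below 0.8
```

A real audit would of course use far larger samples, statistical significance tests, and intersectional group definitions, but even this simple ratio makes hidden skew in outcomes visible before deployment.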


Cybersecurity Concerns and Deepfakes: AI also introduces new threats in the form of deepfakes—AI-generated fake videos or audio clips that can convincingly mimic real people. Deepfakes have already been used to spread misinformation, commit fraud, and even harass individuals by placing them in fabricated content. Without laws targeting deepfake creators, such misuse could spiral out of control. India’s IT Rules 2021 do address misleading content but fall short when it comes to proactively regulating deepfake technologies.


Intellectual Property in the Age of AI: The challenge of intellectual property (IP) law in AI doesn’t stop at content ownership. AI models frequently utilize copyrighted data for their training. For instance, generative AI tools like ChatGPT or DALL·E have been criticized for being trained on copyrighted books, art, and music. The legality of such data use remains a gray area.

India’s Copyright Act, 1957 does not yet provide guidance on whether data used to train AI infringes on the rights of original creators. This ambiguity calls for immediate legal reform to maintain a fair balance between innovation and copyright protection.


Impact on Employment and Labour Law: Automation driven by AI is already disrupting the job market. In customer service, banking, and manufacturing, AI tools are replacing human workers, leading to concerns about mass unemployment. Labour laws in India may need to evolve to deal with issues of displacement, upskilling, and worker protections. Governments could consider Universal Basic Income (UBI) models or mandatory reskilling initiatives to cushion the transition.


Ethical AI Principles for India: While laws take time, ethical guidelines can shape immediate best practices. NITI Aayog has proposed the following ethical principles:
Safety and reliability
Inclusivity and non-discrimination
Transparency and accountability
Privacy and security

Global Lessons


European Union’s AI Act: Categorizes AI by risk level. High-risk systems require strict oversight.
United States: No national law, but regulatory bodies like the FTC monitor AI for unfair practices.


China: Focused on content control and data sovereignty.


OECD Principles: Promote responsible, human-centric AI development.


Case Laws


1. Shreya Singhal v. Union of India (2015): A pivotal ruling in which the Supreme Court struck down Section 66A of the IT Act and upheld the right to free speech online. AI content moderation tools must respect this precedent.


2. Justice K.S. Puttaswamy v. Union of India (2017): Recognized privacy as a fundamental right. This decision influences the manner in which AI systems manage personal data.


3. Ryan v. Google Inc. (Ireland, 2019): Raised important questions about algorithmic targeting and user consent in digital ads.


4. EPIC v. DHS (USA): Challenged government use of algorithm-driven body-scanning technology, highlighting concerns about transparency and misuse.


5. Proposed Cases by Scholars: Legal thinkers propose hypothetical cases (e.g., Zuboff v. Alphabet) to test accountability in AI monopolies.


Conclusion

India is standing at a legal crossroads. AI can drive incredible growth, but without adequate legal guardrails, it could also cause irreversible harm. To move forward, the Indian legal framework must:
Introduce an AI Law: A comprehensive act focusing on transparency, fairness, and safety.
Create a Central Regulator: Like SEBI or TRAI, a body to oversee AI development and use.
Mandate Algorithm Audits: Regular checks for biases and malfunction.
Clarify IP Rights: Define ownership of AI-created work.
Strengthen Ethical Norms: Build AI that respects human rights from design to deployment.
Getting this right can position India as a global leader in ethical AI. Ignoring it risks widespread public harm and economic chaos.


FAQs


Q1. Can AI be punished under Indian law?
No. Since AI isn’t a legal person, it cannot be punished. Responsibility lies with the human parties behind it.


Q2. Is AI regulation part of India’s future legal agenda?
Yes. NITI Aayog has proposed AI guidelines and several committees have recommended AI-specific laws.


Q3. What sectors are most affected by AI?
Healthcare, finance, education, transportation, and the judiciary are seeing rapid AI integration.


Q4. How can we reduce AI bias?
By using diverse training data, regular audits, and creating diverse teams of developers.


Q5. Are there laws about AI in hiring?
Not directly, but biased hiring decisions could be challenged under anti-discrimination provisions in labor law.


Q6. Do people have a right to know how AI made a decision?
Ideally, yes. Transparency is crucial. The concept of explainable AI is gaining traction globally.


Q7. Can AI decisions be challenged in court?
Yes, especially if they violate fundamental rights or result in unfair outcomes.


Q8. Who owns AI-generated content?
Under existing legislation, intellectual property can only be owned by a legal person, so AI itself cannot hold ownership. Ownership would therefore vest in either the developer or the individual who prompted the AI, depending on the circumstances.


Q9. Is India doing enough to keep up with global AI trends?
India is taking initial steps, but still lags behind jurisdictions such as the EU and China in comprehensive regulation.


Q10. How should law students and young lawyers prepare?
By learning about AI, technology law, data protection, and ethics. These will be essential areas of practice in the near future.
