Who’s Responsible When AI Goes Rogue? A Legal Look into Accountability in the Age of Artificial Intelligence

Author: Shaik Umarfarooq, Lovely Professional University

To the Point
In India and across the world, legal systems were built with human actions in mind. Laws around negligence, liability, and responsibility are based on the idea that a human being makes the decision. But that foundation starts to crack when AI steps in. Unlike tools that simply follow commands, AI systems can learn, adapt, and act on their own. Sometimes, even the people who designed the AI can’t fully explain how it makes choices.
Let’s say an autonomous vehicle runs a red light and injures a pedestrian. If a human were behind the wheel, we’d talk about reckless driving or negligence. But if an algorithm were in control, is it still the driver’s fault? Or does the blame fall on the engineers, the manufacturer, or the company that put the car on the road? This isn’t just a philosophical question; it’s a legal one.
Our existing legal principles try to squeeze AI responsibility into traditional categories: torts, contracts, and product liability. None fits perfectly. Negligence requires proof that a person failed to act reasonably, but AI has no “intent” and no conscience. Product liability law focuses on whether a product was defective, yet AI is not a frozen tool: it keeps learning after deployment, so a system that was safe at launch may behave unpredictably later. Contracts can allocate responsibility in private agreements, but many AI tools, especially those used in the public sector, are not governed by direct contracts with users. So when things go wrong, victims often have no clear route to justice.

Use of Legal Jargon
One of the biggest hurdles in AI accountability is the “black box” problem. This term refers to the fact that many complex AI systems, especially those using deep learning, operate in ways that even their own programmers cannot fully explain. This lack of transparency makes it incredibly difficult to determine who should be liable when things go wrong.
In India, our current laws are not equipped to tackle this. The Information Technology Act, 2000, for example, was drafted at a time when AI was not a consideration. Even the government’s National Strategy for Artificial Intelligence, released by NITI Aayog in 2018, offered guidelines and ethical suggestions but stopped short of providing legal mechanisms for accountability.
There’s also no recognition of “autonomous agency” in Indian law. This means AI cannot be treated like a person who can be held responsible. Nor is there a concept of shared liability between AI and human supervisors. The lack of legal definitions creates a grey zone where victims often have no legal foothold, and companies can avoid responsibility.

The Proof
Globally, several jurisdictions are actively working to define AI liability frameworks. The European Union is leading the way with its proposed AI Act, which classifies AI systems by risk level, ranging from minimal and limited risk up to high and unacceptable risk, and sets obligations such as human oversight, explainability, and data transparency. High-risk systems, such as those used in healthcare or law enforcement, must undergo stricter evaluations and audits.
In the United States, although no comprehensive federal AI law exists, court cases have begun raising important questions. In State v. Loomis, a Wisconsin judge relied on the COMPAS risk assessment tool when determining a prison sentence. The defendant argued that because the algorithm’s methodology was proprietary and could not be explained, its use violated his right to due process. While the court did not bar the use of the tool, the case exposed the problems of relying on opaque systems in life-altering decisions.
Australia has also started discussing legal limits for public AI use. In Ryan v. Victoria, a predictive policing system mistakenly flagged someone. Although the court didn’t penalize the company that built the AI, it did note the urgent need for better regulation and oversight.
India has had smaller incidents that hint at a brewing storm. In 2021, media reports highlighted how facial recognition tools used by police wrongly identified individuals during protests and investigations. Since there was no regulation guiding their use, no one took responsibility, and those falsely accused were left without a legal remedy. This shows how AI, when used without checks, can quietly erode fundamental rights.
Furthermore, companies promoting “ethical AI” are mostly doing so voluntarily. These are not enforced standards. In a country like India, where public understanding of digital rights is still developing, leaving it to corporations to self-regulate is risky. Without laws, ethics become optional—and optional ethics rarely protect the vulnerable.

Abstract
Artificial Intelligence is no longer a futuristic concept. It’s part of our daily lives—whether we’re unlocking phones with facial recognition, asking Siri to play music, or relying on AI in hospitals and cars. But what happens when AI makes a serious error? Suppose a self-driving car hits someone, or a facial recognition system wrongly identifies a person—who should be held accountable? Is it the developer, the manufacturer, or the user? As AI gets smarter, these questions get harder. The law is now racing to keep up with a technology that changes how responsibility works.

Case Laws
1. Ryan v. Victoria (Australia)
Involved wrongful targeting by a police AI. The court stopped short of assigning blame but urged better legal supervision of AI systems.

2. State v. Loomis (USA)
A controversial sentencing decision that relied on an AI risk assessment tool. The court upheld the sentence, but the case spotlighted the dangers of opaque algorithms in the justice system.

3. Shreya Singhal v. Union of India (India)
While not about AI, this landmark case emphasized digital rights by striking down Section 66A of the IT Act. It shows that Indian courts are capable of adapting to tech issues when challenged.

Conclusion


India must move quickly to catch up. One way is by updating existing laws like the IT Act or Consumer Protection Act to include AI-specific clauses. Another option is to draft an entirely new law that directly addresses AI usage, risk levels, accountability, and compensation mechanisms.
Such a law should require companies to test AI systems rigorously before public use, ensure explainability for high-risk applications, and establish clear reporting and liability channels. Victims should have a way to file claims and receive justice without needing to decode the AI themselves.
Some academics have proposed giving AI legal status—making it an “electronic person” with its own limited responsibilities, similar to how companies are considered legal persons. While this idea is interesting, it’s not realistic for India right now. Our society and legal system are not yet ready for that level of abstraction.
Ultimately, technology should serve people—not harm them. Innovation must come with safeguards. A balanced approach that encourages growth while protecting public welfare is possible—but only if our legal system evolves alongside the machines it seeks to regulate.

FAQs


Q: Can I sue a company if an AI product causes harm?
Yes, but your claim will most likely fall under product liability or negligence laws, not AI-specific laws.

Q: Does India have any law directly covering AI-related accidents?
Not at the moment. Existing laws apply depending on the situation, but they aren’t designed with AI in mind.

Q: What happens if police use AI and it makes a mistake?
There is no standard procedure for victims. In many cases, the police department may take the blame, but the developer usually escapes accountability.

Q: Are AI developers legally required to ensure their systems are safe?
In India, no such legal duty exists yet. But some countries, like those in the EU, are introducing such responsibilities.

Q: Can AI be treated like a human or company in the eyes of law?
This is being debated globally. The idea of granting AI “legal personhood” exists, but India has not adopted or supported this concept so far.
