Author: Tanishq Chaudhary, JIMS, GGSIPU
To The Point
Today, consumers are no longer just buying products; they are engaging with AI-generated books, images, songs and even medical advice, often without knowing that it was machine-made. The excitement of using AI tools is often matched by confusion and risk, especially when consumers cannot trace who is truly responsible for flawed or misleading content. A person may rely on an AI-generated legal contract or health guide, but if something goes wrong, there is no clear answer as to who is liable: the machine, the developer or the user. Consumers often assume that the content or product they are interacting with has some form of oversight, but with AI-generated material, that trust may be misplaced. Traditional consumer protection laws focus on defective goods and fraudulent services, but machine-made creativity does not fit neatly into those boxes. Developers and corporations often claim ownership of the AI model but dodge accountability for the creative results it produces, creating a regulatory loophole. When AI-generated products imitate human creators, consumers are misled into believing that the content is authentic or ethically sourced. Regular users rely on AI-generated contracts, diagnoses and investment advice, yet no clear legal structure exists to compensate victims of flawed machine output. In a marketplace flooded with machine-made creativity, the law must evolve to ensure that consumer trust is not the price paid for technological advancement.
Use of Legal Jargon
The core legal debate centres on the term “product liability”, questioning whether AI-generated content can be treated like a faulty product under consumer laws. Courts are struggling with the absence of a “legal person” behind AI-generated works, which makes it difficult to pin legal responsibility when harm is caused. The doctrine of “strict liability” is being reconsidered: should developers or platforms be automatically responsible for damage caused by autonomous machine outputs? The issue of informed consent becomes murky when consumers are not told that the service or content they are receiving is machine-generated. Misrepresentation, a classical concept under consumer protection statutes, is becoming complex in cases where AI output unintentionally misleads a consumer. There is an ongoing legal void around “deficiency in service”, raising the question whether a chatbot giving flawed legal or medical advice can be sued under consumer protection statutes. With generative tools flooding the market, “unfair trade practice” is becoming a catch-all for companies that fail to disclose or control automated content. The concept of “duty of care”, widely used in negligence law, is being tested to see whether AI developers owe an obligation to end-users of the generated content. Vicarious liability, usually applied in employer-employee settings, is now under debate to determine whether creators of AI tools should bear the burden of misuse by users. Many AI-generated outputs may violate the “right to be informed”, a foundational principle of consumer law that requires transparency in how goods or services function. The lack of “standard of care” guidelines for AI-generated creativity leaves courts without benchmarks to assess harm caused to unsuspecting consumers. In digital consumer cases, the idea of “reasonable expectation” is central: an ordinary person does not expect that what they are consuming was made without human oversight. The question of jurisdiction is becoming thorny, especially when an AI tool hosted overseas creates content that harms a user sitting in India or another country.
The Proof
In 2022, a graphic novel illustrated by an AI tool was denied copyright protection by the U.S. Copyright Office, sparking a global debate about authorship and ownership in machine-created works. The Consumer Protection Act, 2019 in India empowers consumers against misleading advertisements and unfair trade practices, but it does not yet account for autonomous digital outputs or automated tools. Many people do not even realise that the songs they hear online or the poems they read on apps are written not by humans but by software, and no one tells them who is behind them. There have been cases where people relied on online health tips or legal advice from websites, only to find out later that the information was wrong and there was no one to blame. In a recent case, a law firm in the U.S. got into trouble for submitting a court document in which the legal cases cited did not exist; they had been copied from a system that made them up. In South Korea, some companies were found using computer programs to write fake customer reviews just to boost product ratings, and the people who bought those products felt cheated. In India, the Delhi High Court has stated that platforms such as Amazon cannot simply say they are not responsible if they host something that misleads buyers; that logic could apply to digital creative content too. A UK woman followed a workout routine on a fitness app and ended up injured, only to find that the programme had not been reviewed by real experts but auto-generated by a system. Many online influencers now use virtual models: characters that look real but are not. In 2023, a man in Delhi complained that a learning app gave him wrong study material, and it turned out that the content was randomly generated with no proper checks. Across the world, legal bodies such as the FTC in the US and the Competition and Markets Authority in the UK are slowly starting to investigate whether software-based content might be tricking consumers.
Abstract
Nowadays, people see a lot of content, such as stories, songs or even answers, that is not created by a human but by a system. Most users do not know this, and that is the biggest problem: there is no warning, no label, just content that looks real and feels real. The question arises: if someone gets hurt or misled by that content, who will be held liable, the app or the company? The law is still accustomed to human creators; it expects someone to be behind the work. But now, things are being created without any clear creator at all. Consumers trust what they see, whether it is a product review, an online article or a digital drawing, but sometimes that trust is taken advantage of. Some companies take no blame for flawed or fake outputs, even though they are the ones making money off them, which is unfair to the average person using their platform. This article looks at how consumer protection laws are not yet ready for this kind of content, as there are no clear rights and no real remedies. It also explains how tricky it can get when content looks real but no human actually made it, as there is no one to question or complain to. Overall, it is not just about who made the content; it is about whether people can still trust what they see, hear and read in a world full of digital creations.
Case Laws
Carlill v. Carbolic Smoke Ball Co. (UK, 1893)
The company had promised to pay £100 to anyone who used its product as directed and still caught the flu. Mrs. Carlill did, and when the company refused to pay, she sued and won the case. The court stated that if a company makes a public promise and someone relies on it, the company must take responsibility. This case is still important today because it shows that companies cannot mislead consumers with fancy promises, even ones that seem harmless.
Indian Medical Association v. V.P. Shantha (1995)
The court held that even doctors and hospitals can be brought under consumer law if the services they provide harm a patient. This expanded the meaning of who is a service provider and who is a consumer. It is useful today because many apps and digital tools now offer health and learning services, and they too should be answerable if something goes wrong.
Google India Pvt. Ltd. v. Visaka Industries Ltd. (2020)
In this case, the Supreme Court stated that even online platforms can be held responsible if they allow harmful or misleading content to stay on their sites. The Court also stated that if a company knows something is wrong and does nothing about it, it cannot simply escape blame. This applies to today’s platforms using automated systems to publish content, especially when that content influences public decisions.
Conclusion
In today’s world, people scroll, shop, learn and trust content without ever asking: was this made by a person, or by a machine that no one controls? When someone follows workout tips, buys a product or takes advice based on something auto-generated and it turns out wrong, there is rarely anyone to blame. Big tech and service platforms often make millions from machine-made content, but when it misleads a user, the company simply points fingers elsewhere or hides behind fine print. Traditional consumer laws expect a human behind every product or service, but in 2025, creativity often comes from algorithms, not artists or experts. Most consumers do not even know they have been “advised” or “sold” something by software rather than a human, and that is where trust quietly breaks. Disclaimers buried in terms and conditions are not real transparency; people need clear warnings when they are dealing with machine-generated work. It is not fair to hold consumers responsible for the mistakes of tools they did not build, control or fully understand, but that is what is happening now. Whether it is an AI-written review, a fake-looking image or a chatbot giving bad legal tips, the damage is real and someone needs to be accountable. Just as defective goods and misleading ads can be punished, unverified or harmful digital content must also be brought under the consumer law radar. Platforms and creators should be made to label auto-generated content clearly, and not hide behind technicalities when someone is hurt by it. If the law stays silent while machines get smarter, it is the everyday user who pays the price through broken trust, bad choices and zero remedies.
FAQs
1. Can a person sue if they were misled by digital content that had no human creator?
Yes, but it’s complicated. Right now, the law doesn’t clearly say who’s responsible when content is machine-made and something goes wrong. Courts can still apply existing consumer laws if the content was hosted, promoted, or used by a company or platform that didn’t warn users or verify the content properly.
2. Is there any law in India that protects people from being misled by content created through automated systems?
The Consumer Protection Act, 2019 helps protect against unfair trade practices and misleading ads, but it doesn’t yet directly cover automated or software-generated creativity. However, complaints can still be filed if a company failed to check or disclose how such content is used to influence buying or decision-making.
3. Why is it so hard to hold anyone accountable when a software or platform gives wrong or harmful advice?
Because the creator of the content isn’t always a person, and most companies say in their terms and conditions that they’re “not responsible.” Also, there’s no current law that directly handles harm caused by non-human creativity, which leaves users stuck in a legal grey zone.
4. What can be done to make sure people know whether something was made by a human or not?
One simple step is making it legally necessary for all platforms and companies to label non-human content clearly. Another is to create rules that make developers and platforms responsible for anything harmful or misleading that comes from their systems, even if it wasn't written by a person.
