Author: Mehak Verma, Indian Institute of Management Rohtak
TO THE POINT
AI is changing how intellectual property (IP) law works by raising new questions about who owns what is created by artificial tools, software, or machines. When AI learns from large amounts of copyrighted material or creates content on its own, it becomes difficult to decide who should receive credit or protection under the law. Currently, copyright applies only where a human has made a meaningful contribution, and patents can be granted only to natural persons, as the DABUS litigation has confirmed. Recent lawsuits show that using copyrighted data without permission to train AI is not protected under ‘fair use’ or ‘fair dealing,’ which means developers can be held legally responsible. To solve these issues, laws should clearly define what counts as human input, vest rights in the people or companies who own or control the AI, and require transparency, such as disclosing where the AI’s training data comes from and adding watermarks to generated content. Protecting artists’ and creators’ rights is equally important, so laws should ensure that people are credited and that their styles are not copied without permission. Only then can the law keep pace with fast-growing AI technology. At the same time, AI also creates opportunities for intellectual property to grow, making it a boon as well as a bane.
USE OF LEGAL JARGON
Fair Use (US Doctrine): A legal defense allowing limited use of copyrighted material without permission for purposes like criticism, education, or research.
Fair Dealing (Indian and UK Law): Similar to fair use, it permits limited use of copyrighted content under specific conditions.
Contractual Vesting: The transfer of legal rights (e.g., patent or copyright) through a contractual agreement.
Regulatory Bodies: Government or statutory agencies, such as the DPIIT, that enforce and regulate specific areas of law.
Watermarking: A technology used to embed information in a digital file to signify ownership or origin.
Collective Tribunals: Legal or industry bodies, such as copyright societies, that manage licensing and rights enforcement on behalf of groups of creators.
Vicarious Liability: The legal responsibility of one party (e.g., a company or developer) for the actions of another (e.g., an AI model or user).
THE PROOF
In the Indian context, the concepts of authorship and subsistence under copyright law (Copyright Act, 1957) center on originality and human creation. The law does not yet explicitly recognize AI as an author. Thus, purely AI-generated works without meaningful human input are unlikely to enjoy protection under Indian copyright law. While the UK’s Copyright, Designs and Patents Act 1988 (CDPA) recognizes a deemed author for computer-generated works, India lacks similar statutory clarity.
Similarly, in patent law, India, like the UK and the US, follows a human-centric model of inventorship under the Patents Act, 1970. Indian courts have not yet dealt with AI inventorship, but going by international jurisprudence (such as the DABUS cases), AI cannot be treated as an inventor. However, ownership rights may be vested in the AI’s developer or user through licensing or assignment. On the separate question of training data, the Thomson Reuters v. Ross Intelligence ruling from the US is instructive, though Indian courts are yet to rule on similar disputes. Cases like Getty Images v. Stability AI and the BBC’s objection to unauthorized data scraping also highlight this growing global concern.
In India, ownership and licensing of AI-generated works are currently undefined, creating ambiguity. In the absence of specific contracts, rights in such works may not be enforceable, leaving stakeholders without a clear remedy. Embedding standard licensing terms in user agreements, especially for platforms operating in India, is essential to close this gap. As for implementation, digital tools are increasingly being used to detect and prevent intellectual property infringement, including watermarking and metadata tracking. These technologies could be adopted in India under the framework of the Information Technology Act, 2000, alongside existing copyright enforcement provisions. Regulatory bodies like the Department for Promotion of Industry and Internal Trade (DPIIT) can play a role by mandating watermarking or disclosure requirements for digital content.
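To make the idea of metadata tracking concrete, the following is a minimal sketch in Python using the Pillow imaging library; it embeds provenance text into a PNG file’s metadata and reads it back for verification. The filenames, tag names, and model identifier are hypothetical, and the sketch is illustrative only: plain metadata can be stripped when a file is re-encoded, which is why robust provenance standards (such as cryptographically signed C2PA manifests) and invisible watermarks go well beyond this approach.

```python
# Minimal provenance-tagging sketch (illustrative, not a production system).
# Assumes the Pillow library; tag names and filenames are hypothetical.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def tag_provenance(src_path: str, dst_path: str, origin: str, dataset_note: str) -> None:
    """Embed simple provenance text into a PNG's text metadata chunks."""
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("Origin", origin)              # e.g., which model produced the file
    meta.add_text("TrainingData", dataset_note)  # e.g., pointer to a dataset disclosure
    img.save(dst_path, pnginfo=meta)

def read_provenance(path: str) -> dict:
    """Return any text metadata stored in a PNG (empty dict if none)."""
    img = Image.open(path)
    return dict(getattr(img, "text", {}))

# Hypothetical usage:
# tag_provenance("generated.png", "generated_tagged.png",
#                origin="AI-generated; model: example-v1",
#                dataset_note="see the dataset disclosure published by the platform")
# print(read_provenance("generated_tagged.png"))
```

A disclosure regime of the kind contemplated for the DPIIT would of course require tamper-resistant techniques rather than removable metadata, but the sketch shows how little machinery the basic tagging step itself involves.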
ABSTRACT
This discussion looks at how artificial intelligence (AI) is creating new challenges for traditional intellectual property (IP) laws, especially in areas like authorship, inventorship, infringement, ownership, and enforcement. As AI becomes more advanced and starts generating content, inventing new things, and using large amounts of data, some of which may be protected by copyright, it raises tough questions about who really owns the results and who should get credit or legal protection. For example, if a piece of music or art is made by an AI, can anyone claim to be the author? Or if an AI system creates a new invention, can it be patented, and if so, who gets the rights? These issues are already being tested in courts around the world, showing that our current laws are often unclear or outdated. At the same time, this situation also brings new opportunities. By improving licensing systems, making legal rules clearer, and using technology to track and protect intellectual property, such as watermarking AI-generated content, we can create fairer systems for both innovators and original creators.
CASE LAWS
DABUS cases in different countries: Courts and patent offices in the US, the UK, the EU, and elsewhere have consistently held that AI cannot be named as an inventor on a patent. As a result, patent applications listing AI as the inventor were rejected.
Thomson Reuters v. Ross Intelligence (2025, US): The court ruled that using copyrighted content to train AI without permission is not protected under “fair use.” It was considered copyright infringement.
Getty Images v. Stability AI (UK): Getty accused Stability AI of copying and using its images without permission for AI training. The court may grant an interlocutory injunction (an early, temporary ban), and the final decision could reshape how companies handle image licensing.
CONCLUSION
In conclusion, to address the growing challenges that AI poses to intellectual property (IP), we need clear and balanced legal rules. Lawmakers should define how much human effort is required for someone to be recognized as the author of AI-generated work. Only natural persons should be listed as inventors, but the rights to use or profit from an invention can vest in the person or company that owns or operates the AI. There should also be rules requiring companies to disclose where their AI training data comes from, and licenses should be easier to obtain through proper systems. Online platforms must include basic terms in their user agreements, such as who owns AI-created content and whether watermarks will be used to track it. To help enforce these rights, technological tools like watermarking and record-keeping should be deployed, and courts must scrutinize “fair use” defenses more carefully, especially when AI is used commercially. Creators’ moral rights, such as being credited and preventing their style from being copied by AI, should be protected. IP authorities should offer clear guidance on AI-related content and training data, and courts should keep today’s technology and ethical concerns in mind while deciding such cases. Lastly, artists, companies, and the public should work together to ensure the laws remain fair, up to date, and supportive of both creativity and innovation in the digital age.
FAQS
Q: Does AI-generated content receive automatic copyright?
A: No, copyright only exists when there is enough human creativity involved. Courts and copyright offices do not give protection to work made entirely by machines without human input.
Q: Can AI be the inventor of a patent?
A: No. Patent law mandates natural-person inventorship, per the DABUS jurisprudence; rights may accrue to the human owner or operator via contractual vesting.
Q: Is using copyrighted content to train AI considered fair use?
A: Typically, no. Thomson Reuters v. Ross Intelligence held that unlicensed headnotes cannot be used commercially to train AI; similar claims are pending in the Getty, BBC, and Disney suits.
Q: How can creators protect against AI misuse?
A: Use contractual licenses, watermark outputs, join collective tribunals (such as copyright societies), assert moral-rights claims, and monitor for misuse via AI-detection tools.
Q: What is the legal outlook?
A: Expect legislation clarifying AI authorship, dataset-licensing mandates, transparency regulations (e.g., the EU AI Act), and expanded moral-rights protections.