The Algorithm’s Authorship: Navigating the Legal Labyrinth of Generative AI and Intellectual Property Rights

Author : K. PRANAI DEEPAK RAO

Osmania University Post Graduate College of Law

Abstract 

This article examines the burgeoning conflict between Generative Artificial Intelligence (GenAI) and established Intellectual Property (IP) frameworks. As AI systems like Midjourney, DALL-E, and GPT-4 produce works that mimic human creativity, the legal world faces a crisis of “authorship.” The core of the debate rests on whether a non-human entity can be an “author” and whether the training of these models on copyrighted data constitutes “fair use” or wholesale infringement. This analysis explores the shift from human-centric copyright laws to a potential sui generis system of protection for AI-generated content, analyzing current judicial trends and the necessity for global regulatory harmonization.

To the Point:

The fundamental legal conflict is twofold:

  1. Input Phase: Whether using copyrighted works to “train” AI models without consent is a violation of the copyright holder’s exclusive rights.
  2. Output Phase: Whether the resulting AI-generated work can be granted copyright protection.

Currently, the prevailing legal consensus in most jurisdictions (notably the US and India) is that copyright requires human authorship.

Therefore, works generated solely by AI fall into the public domain, creating a “legal vacuum” for companies investing millions in AI-generated assets. The challenge is to balance the promotion of technological innovation with the protection of human intellectual labour.

Use of Legal Jargon 

To accurately analyze this issue, the following legal doctrines are applied:

  • Sui Generis: A Latin term meaning “of its own kind.” In this context, it refers to the creation of a unique, new category of law specifically for AI, rather than trying to fit AI into existing copyright laws.
  • Fair Use / Fair Dealing: The legal doctrine that permits limited use of copyrighted material without acquiring permission from the rights holder (e.g., for criticism, news reporting, or research).
  • Transformative Use: A key factor in “Fair Use” analysis; it asks whether the new work adds something new, with a further purpose or different character, altering the original work with new expression, meaning, or message.
  • De Minimis: A legal doctrine meaning “too small to be concerned with,” used here to argue that small snippets of training data used by AI are too insignificant to constitute infringement.
  • Work-for-Hire: A doctrine where the employer, not the creator, is considered the legal author of a work.
  • Moral Rights: The rights of an author to be attributed as the creator and to protect the integrity of their work from distortion.

The Proof 

The evidence of this legal tension is found in the operational mechanics of Large Language Models (LLMs) and Diffusion Models:

  1. The Training Set: AI models are trained on “Common Crawl” and other massive datasets containing billions of images and texts. The “proof” of infringement lies in the “latent space” of the AI, where it encodes statistical patterns of copyrighted styles (e.g., “in the style of Greg Rutkowski”).
  2. The Human-AI Interface: The US Copyright Office (USCO) has issued guidance stating that if an AI does the “creative heavy lifting,” the human providing the prompt is not the author. For example, a simple prompt like “A cat in a hat” does not constitute “creative control” sufficient for copyright.
  3. The EU AI Act: The European Union has taken a proactive step by requiring “transparency” in training data, forcing AI companies to disclose what copyrighted materials were used, effectively creating a legal trail for future infringement lawsuits.


Case Laws 

The judiciary is currently shaping this law through several landmark disputes:

  1. Thaler v. Perlmutter (2023): Stephen Thaler sought to register an AI-generated artwork (“A Recent Entrance to Paradise”), listing his AI system, the “Creativity Machine,” as the author. The US District Court affirmed the USCO’s refusal, ruling that “human authorship is a bedrock requirement of copyright.” This case establishes that AI cannot be a legal “person” for the purposes of IP.
  2. Andersen v. Stability AI et al. (Ongoing): A class-action lawsuit brought by artists against Stability AI and Midjourney. The plaintiffs argue that the AI models are “derivative works” because they are trained on copyrighted images. This case will likely decide the “Input Phase” legality of GenAI.
  3. The New York Times Co. v. Microsoft & OpenAI (Ongoing): This is a pivotal case regarding “Transformative Use.” The NYT argues that GPT-4 does not merely learn from its articles but competes with them by reproducing their content near-verbatim, thus failing the “Fair Use” test.

Conclusion 

Designed for the printing press and the paintbrush, the current legal framework is ill-equipped for the era of the algorithm. For centuries, copyright law has been anchored in the ‘Romantic Author’ theory: the belief that a work of art is the externalization of a human’s internal spirit, emotion, and conscious intent.

However, Generative AI decouples the ‘creative output’ from ‘human consciousness.’ When an algorithm produces a masterpiece based on a mathematical probability of pixels, the traditional requirement of a ‘creative spark’ becomes an obsolete metric.

To move forward, the law must evolve beyond the binary of ‘Human Author vs. Public Domain.’ This rigid dichotomy creates a dangerous legal vacuum: if AI-generated works are automatically relegated to the public domain, the commercial incentive for industries to integrate AI into professional creative workflows will diminish, as their outputs would be instantly stealable without legal recourse. Conversely, granting full copyright to AI-generated works would lead to a ‘copyright land-grab,’ where corporations use AI to flood the market with millions of protected works, effectively stifling human creativity through sheer volume.

The solution lies in the creation of a sui generis (unique) legal category: a ‘Third Way.’ This would involve a tiered system of protection: granting full copyright to purely human works, a limited ‘neighbouring right’ or ‘computational right’ to AI-assisted works (with shorter protection terms), and leaving purely autonomous AI outputs to the public domain.

By shifting the focus from who created the work to how the work was created and who invested in its production, the law can protect the economic viability of AI innovation while safeguarding the sacred dignity of human authorship.

Suggested Way Forward:

  • Introduction of Sui Generis Rights: Creating a “neighboring right” for AI-generated content that offers a shorter term of protection (e.g., 10–20 years instead of the life-plus-70-years term of human copyright).
  • Compulsory Licensing Models: Establishing a system where AI companies pay a statutory fee into a fund for creators whose works are used in training sets.
  • The “Human-in-the-Loop” Standard: Refining the legal definition of “substantial human contribution” to determine exactly how much a human must edit an AI output to qualify for copyright.

Ultimately, the law must protect the incentive for humans to create while allowing the utility of AI to flourish.

FAQs


1. Can I copyright a book if I wrote it with the help of ChatGPT? 

It depends. If you used AI for brainstorming and outlining but wrote the prose yourself, yes. If the AI wrote the chapters based on your prompts, the AI-generated portions cannot be copyrighted, though your unique arrangement and structure might be.

2. What is “Prompt Engineering” in the eyes of the law? 

Currently, the law views “prompting” as giving a set of instructions, similar to a client telling an artist what to paint. The “artist” (the AI) creates the work, and the “client” (the prompter) does not automatically own the copyright.

3. Is it legal for AI to learn from my art on the internet? 

This is the “million-dollar question” currently being debated in courts. AI companies argue it is “Fair Use” (learning patterns), while artists argue it is “theft” (copying pixels).

4. Will AI eventually replace human authors legally? 

Legally, AI cannot replace humans because it lacks “legal personality”: it cannot sign contracts, be sued in court, or own property. However, it may displace the commercial value of certain types of human labour.
