Author: Ira Pal, Amity Law School, Amity University, Lucknow.
Abstract
The advent of artificial intelligence poses profound challenges to intellectual property law and workplace regulation, and is transforming both. AI’s ability to generate creative works, innovate autonomously, and perform tasks traditionally done by humans raises legal questions: Who owns AI-generated works? What qualifies as inventorship or authorship when AI is involved? How do fair use doctrines respond to the use of massive volumes of training data? In employment, AI systems alter tasks, monitoring, and decision-making, and may affect labour rights, privacy, discrimination, and employer responsibilities. This article examines doctrinal and case law developments globally, analyses the legal proof and arguments involved, and concludes with recommendations.
To the Point
AI challenges the foundational IP concepts of authorship, inventorship, and ownership.
Courts are increasingly holding that human authorship remains essential for copyright; works generated solely by AI may be ineligible.
The use of copyrighted works to train AI raises complex fair use questions.
In the workplace, the deployment of AI raises issues of worker rights, monitoring, bias, and liability when AI makes decisions.
Legal frameworks are lagging; both IP law and employment law need adaptation to address AI-driven change.
Background: AI, IP & Workplace
AI and Creative Output: Generative AI can produce art, text, inventions, and designs without continuous human supervision, potentially displacing or supplementing human creators.
Training Data: AI systems are trained on large datasets, often including copyrighted works. This raises questions of infringement vs permissible use.
Workplace Automation: AI tools are used for hiring, performance monitoring, decision support, predictive analytics, and surveillance.
Statutory & Doctrinal Framework
Copyright Acts require originality and human authorship.
Patent Laws typically require a human inventor; in most jurisdictions, an AI system cannot be named as a legal inventor.
Fair Use: doctrines permitting limited use of copyrighted material under certain conditions.
Labour Law: worker rights to privacy, protection from discrimination, fair wages, due process in termination; also, regulation of monitoring.
Use of Legal Jargon
These terms often arise in the AI-IP/workplace context:
Human Authorship: The requirement that a human being must contribute original, creative input to receive copyright protection.
Inventorship: The legal conception of an inventor under patent law; it often requires a human mind.
Fair Use / Fair Dealing: Exceptions in IP law permitting unlicensed use under certain conditions (purpose, amount, effect on market, transformative nature).
Transformative Use: Use that adds something new, with a further purpose or different character, altering the original with new expression or meaning.
Derivative Work: A new work based upon one or more pre-existing works. AI outputs may be derivative if they substantially derive from existing works.
Liability / Vicarious / Contributory Infringement: Legal responsibility for IP violations, even if one did not directly commit copying but facilitated or benefited.
Bias / Discrimination: In employment law, AI decisions may carry algorithmic bias, leading to discrimination prohibited by statute.
Surveillance / Monitoring / Data Privacy: Legal obligations and limits on employer use of AI to track performance or behaviour.
Case Laws & Legal Proof
Thaler v. Perlmutter, U.S. District Court, District of Columbia (2023)
Facts: Stephen Thaler sought copyright over artwork autonomously generated by his AI system.
Holding: The court held that works generated entirely by a computer system “absent any human involvement” are ineligible for copyright protection. It reaffirmed that “human authorship is a bedrock requirement of copyright.”
Meta & AI Training / Authors’ Lawsuits
In Kadrey v. Meta Platforms (N.D. Cal. 2025), authors sued Meta for training its Llama model on copyrighted works without permission. District Court Judge Vince Chhabria held that Meta’s use was sufficiently transformative to qualify as fair use under US law, a ruling that rested in part on the plaintiffs’ failure to demonstrate market harm.
Risk of Copyright Infringement in Machine Learning Systems
Academic literature summarises the risks of training AI systems on copyrighted materials. It shows that stakeholders face exposure depending on how the data was obtained, how the model uses and outputs content, and whether outputs are substantially similar to protected works.
In Workplace Context: Employment & Ethics
Articles point out that workers’ rights, privacy, fair hours, and performance evaluation may be undermined by AI systems that monitor performance, assign tasks, or replace human judgment.
Issues
Here are the main legal issues and arguments:
Authorship & Ownership: Who owns copyright in AI output: the human who prompts, the developer of the AI, or no one? Is human involvement sufficient?
Patent / Inventorship: Can AI be an inventor? Most jurisdictions hold that an inventor must be human; where an AI system is the sole deviser of an invention, the absence of a named human inventor may prevent patentability.
Fair Use / Training Data: Whether using copyrighted works to train AI constitutes infringement or fits exceptions. Transformative nature, market effect, and amount used are critical factors.
Liability for Outputs: If AI output infringes IP, who is liable? The AI operator? The user? The developer?
Workplace Rights: Monitoring, job displacement, and discrimination risk, along with the regulation of AI decisions in hiring, promotion, and dismissal.
Legal Proof / Reasoning
The human authorship requirement is grounded in statutes and case law: courts and IP offices have rejected works lacking human creative input. The concept of transformative use has been used by courts to justify training uses, balancing IP owner rights with innovation.
In workplace law, legal proof comes from comparative law, statutes protecting worker privacy, and anti-discrimination laws. Cases or articles show that over-reliance on AI decision-making may violate those rights.
Comparative Jurisprudence
In the US, the courts generally require human authorship for copyright. AI-only generated works are not granted protection.
In other jurisdictions, the law is less settled, with some policy proposals, but few clear cases. India’s IP framework is being critiqued for ambiguity in AI contexts.
Different countries vary in how they treat training data, the duty to compensate, or create licensing regimes for AI training.
Conclusion
AI is reshaping the legal landscape of both intellectual property and the workplace. The legal doctrine surrounding authorship and inventorship remains anchored in human creativity, meaning that AI-only works often fall outside of protection. However, AI’s training on copyrighted work is forcing courts to refine doctrines of fair use/fair dealing. In the workplace, AI offers efficiency but heightens risks of privacy intrusion, discrimination, and erosion of worker protections.
To adapt, legal systems must evolve:
Clearer statutory definitions of authorship and inventorship in AI contexts.
Licensing regimes for data used in training.
Regulations on AI deployment in the workplace, including transparency, accountability, and auditability.
Laws protecting worker rights in AI-driven workplaces.
Case Laws
Here is a summarised list of key cases:
Case | Jurisdiction | Main Principle / Holding
Thaler v. Perlmutter (2023) | U.S. District Court, D.C. | An AI-only generated work without human authorship is ineligible for copyright.
Kadrey v. Meta Platforms (2025) | U.S. District Court, N.D. Cal. | The use of copyrighted materials in training may be considered fair use if it is sufficiently transformative.
FAQs
Q1. Can AI be recognised as an author or inventor under current IP laws?
Answer: Generally, no. Current statutes in many jurisdictions require that authors or inventors be human persons. For example, in Thaler v. Perlmutter, the US District Court held that works autonomously generated by AI without human involvement are ineligible for copyright protection; human authorship remains a bedrock requirement. Until the law is changed to explicitly permit AI authorship or to grant rights to AI operators in such cases, AI cannot be a legal author or inventor under prevailing law.
Q2. What qualifies as sufficient human involvement in AI-generated works to secure IP protection?
Answer: The law does not yet universally define sufficient human involvement. Factors include human input or direction (prompting, editing, selection), supervision of the creative process, and whether humans selected the output from among variants. If human acts go beyond minimal or routine adjustments and contribute creative originality, courts may accept authorship, but works with only negligible human input may fail to qualify. The Thaler case emphasises that without meaningful human creative control, a work may be rejected for lack of human authorship.
Q3. How does fair use/training data doctrine address the use of copyrighted material by AI systems?
Answer: Fair use or fair dealing involves balancing factors: purpose of use, nature of the original work, amount and substantiality of portion used, and effect of use on the market value of the original. When AI uses copyrighted materials for training, courts examine whether usage is transformative and whether it harms the market for the original works. In Meta’s case, training was found sufficiently transformative, helping avoid infringement claims. But outcomes often depend heavily on facts: the type of data used, access, and whether output replicates the original content.
Q4. What are the legal risks for employers using AI in the workplace?
Answer: Employers deploying AI in work environments face several risks:
Privacy & Surveillance – monitoring employees may violate data protection laws or privacy rights.
Bias & Discrimination – algorithms may perpetuate or amplify biases, leading to unlawful discrimination under employment law (a minimal numerical illustration follows this list).
Liability for Decisions – when AI makes hiring, firing, or promotion decisions, responsibility for adverse outcomes must be legally allocated.
Worker Rights & Autonomy – risk that AI reduces human judgment and undermines dignity or job satisfaction.
Regulatory Compliance – employers must ensure compliance with labour statutes, data protection laws, and AI regulation in their jurisdiction.
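How such bias concerns might be screened for can be illustrated numerically. The following minimal Python sketch, using invented applicant data and hypothetical group labels, computes selection rates per group and the adverse impact ratio associated with the “four-fifths rule”, a screening heuristic drawn from US equal employment opportunity guidance; it is an illustrative sketch under those assumptions, not a compliance tool.

```python
# Hypothetical sketch: screening an AI hiring tool's outcomes for adverse impact
# using the "four-fifths rule" heuristic. The applicant data and group labels
# below are invented for illustration only.

from collections import Counter

# Each tuple: (protected-group label, whether the AI tool selected the applicant)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

applicants = Counter(group for group, _ in decisions)          # total applicants per group
selected = Counter(group for group, chosen in decisions if chosen)  # selected per group

# Selection rate per group: selected / total applicants in that group
rates = {group: selected[group] / total for group, total in applicants.items()}

highest = max(rates.values())
for group, rate in rates.items():
    impact_ratio = rate / highest  # ratio of this group's rate to the highest group's rate
    flag = "potential adverse impact" if impact_ratio < 0.8 else "within threshold"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f} ({flag})")
```

An impact ratio below 0.8 does not by itself establish unlawful discrimination; it is generally treated as a signal that prompts closer statistical and legal scrutiny of the selection procedure.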
Policy Suggestions: Forward-Looking Concerns
Regulatory Reform: Legislatures should consider amending IP statutes to explicitly address AI-related authorship and inventorship, possibly allowing for AI-operator ownership in certain controlled settings.
Licensing & Data Rights: Creation of licensing frameworks for datasets used in AI training; possibly modelled on collective licensing or data trusts.
Transparency & Auditability: AI systems used in the workplace should be subject to regulations for explainability, human oversight and audit.
Workers’ Protection: Laws ensuring a minimum level of transparency about AI monitoring, rights to contest AI-based decisions, and safeguards against job displacement.
International Harmonisation: Since AI and IP are global, differences across jurisdictions can lead to uncertainty; harmonising rules would help.
References
Thaler v. Perlmutter, U.S. District Court for the District of Columbia, 2023.
Kadrey v. Meta Platforms, U.S. District Court for the Northern District of California (Chhabria, J.), 2025: holding on transformative use for AI training data.
German, Daniel M., Copyright-related risks in the creation and use of ML/AI systems.
Torrance, Andrew W. & Tomlinson, Bill, Training Is Everything: Artificial Intelligence, Copyright, and Fair Training.
Legal Service India, AI Impacts on Intellectual Property and Innovation Challenges.
Legal Service India, The Future of Work: The Impact of Artificial Intelligence on Employment and Ethics.
