Author: Abdul Rehman, B.C.T College of Law, New Panvel
Abstract
This article examines the landmark case of Moffatt v. Air Canada, 2024 BCCRT 149, where a Canadian tribunal held an airline legally responsible for misinformation provided by its artificial intelligence (AI) chatbot. The case establishes a critical precedent that a company cannot shield itself from liability by blaming the errors of its automated agents on the technology itself. The tribunal ruled that the chatbot’s representations were considered those of the airline, forming a binding term of the contract. This decision underscores the legal principle that principals are liable for the acts of their agents, a doctrine now firmly applied to AI interfaces, with significant implications for corporate risk management and consumer protection.
To the point
1. Factual Matrix
Following a family member’s death, Jake Moffatt sought to book a flight with Air Canada. Prior to purchasing a ticket, he interacted with a chatbot on the airline’s website. He inquired about bereavement fares and asked specifically if they could be applied retroactively after booking. The chatbot provided incorrect information, stating that he could apply for a refund within 90 days of the ticket issuance. Relying on this advice, Mr. Moffatt purchased a full-fare ticket and subsequently submitted a bereavement fare application. Air Canada denied his claim, pointing to the actual policy on its website, which stated that bereavement fares must be approved before travel. The airline refused to honour the refund as promised by its chatbot.
Legal Proceedings
Mr. Moffatt filed a claim with British Columbia’s Civil Resolution Tribunal (CRT), a tribunal with jurisdiction over small claims disputes. Air Canada’s primary defence was that the chatbot was a “separate legal entity” for whose actions the airline should not be held vicariously liable. The airline also argued that Mr. Moffatt should have verified the chatbot’s advice against the correct terms and conditions published elsewhere on the website.
The tribunal rejected this defence, applying established legal principles to a novel technological context:
Negligent Misrepresentation: The tribunal found that Air Canada owed a duty of care to provide accurate information to customers through its website, including its chatbot. The information provided was inaccurate, Mr. Moffatt reasonably relied on it, and he suffered a financial loss as a result.
Contractual Incorporation by Representation: The key finding was that the chatbot’s advice became a term of the contract. Because the chatbot supplied specific information during the pre-purchase inquiry that induced the customer to buy the ticket, the representation was deemed incorporated into the agreement between the parties. The airline was estopped from reneging on a term that formed the basis of the customer’s decision to contract.
Vicarious Liability and Agency: The tribunal firmly stated that the chatbot was Air Canada’s agent. It was integrated into the airline’s website and acted for the airline’s benefit. The principle of qui facit per alium facit per se (he who acts through another acts himself) applies. A corporation, being an artificial person itself, can act only through its agents, whether human or algorithmic. Therefore, the airline is responsible for the chatbot’s actions and representations.
The Judgment
The CRT ordered Air Canada to pay Mr. Moffatt the difference between the full fare and the discounted bereavement fare, plus pre-judgment interest and CRT fees. The tribunal held that it was unreasonable to expect a consumer to double-check information from an official company agent against a separate policy document.
Case Laws
Liability for AI-Driven Actions: Courts in the United States and the European Union are grappling with questions of liability when AI systems cause harm or infringe rights. For example, the U.S. case Thomson Reuters v. ROSS Intelligence rejected fair use defences for AI training data, emphasizing the protection of copyrighted legal texts used without authorization [3]. Similarly, the EU is considering frameworks that assign liability to developers and users of AI, balancing innovation with accountability.
Intellectual Property Concerns: Landmark decisions such as New York Times v. OpenAI highlight disputes over copyright infringement when AI models are trained on copyrighted news articles without permission. Courts are increasingly scrutinizing whether AI training constitutes fair use or copyright violation, with recent rulings in the U.S. and UK indicating a trend towards stricter enforcement and clearer boundaries for AI training data use.
Data Protection and Privacy: Indian statutes like Sections 43A and 66E of the Information Technology Act, along with the Digital Personal Data Protection Act, 2023, reflect the importance of safeguarding personal data processed by AI. International cases underscore the need for transparency and consent in AI data handling, aligning with India’s constitutional protections under Article 21, which guarantees privacy as a fundamental right [4].
Ethical and Procedural Safeguards: The Colombian Constitutional Court’s ruling emphasizes the non-substitution of human judgment in judicial processes involving AI, advocating for transparency, human oversight, and ethical AI deployment. This case sets a precedent for India to incorporate human-in-the-loop principles in AI applications within judicial and administrative domains.
Conclusion
The Moffatt v. Air Canada case is a seminal ruling in the law of technology and AI. It decisively negates the “my AI did it” defence, confirming that businesses bear the legal risk for the outputs of their customer-facing automated systems. Companies can no longer deploy AI tools as convenient but inaccurate information sources while disclaiming all responsibility for their output. The judgment reinforces that the legal doctrines of agency, misrepresentation, and contractual incorporation are robust enough to adapt to technological advancements. It serves as a stark warning to corporations: ensuring the accuracy and reliability of AI interfaces is not just a matter of customer service, but a critical legal obligation.
Frequently Asked Questions (FAQs)
Q1: Does this mean companies are always liable for everything their AI says?
Not necessarily in all contexts. This case involved a specific, transactional interaction where the customer relied on the AI’s advice to form a contract. Liability would be assessed differently for, say, general conversational remarks not intended to induce a legal agreement. However, the core principle of accountability stands.
Q2: What is “negligent misrepresentation”?
It is a tort that occurs when a party carelessly makes an untrue statement, the recipient reasonably relies on it, and suffers a loss as a result. The tribunal found that Air Canada was negligent in programming or monitoring its chatbot, leading to the misrepresentation.
Q3: What is “vicarious liability”?
This is a legal doctrine that holds one party (the principal, like Air Canada) responsible for the wrongful acts of another (the agent, like the chatbot) if the agent was acting within the scope of its authority.
Q4: How can companies protect themselves from similar liability?
Companies cannot simply rely on disclaimers. Proactive measures are required, including:
Rigorous testing and ongoing monitoring of AI systems for accuracy.
Implementing clear “circuit-breakers” whereby the AI directs complex or high-stakes queries to a human agent (an illustrative sketch follows this list).
Ensuring that the AI’s knowledge base is synchronized with the company’s official terms and conditions.
Training AI models on accurate and up-to-date corporate policies.
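For readers interested in how these measures might look in practice, the following is a minimal, purely illustrative sketch in Python. It is not Air Canada’s system or any vendor’s product; the topic keywords, the PolicyStore class, and the escalate_to_human() handler are hypothetical placeholders. The idea is simply that high-stakes questions are answered only from the company’s vetted policy text or handed to a human, never left to free-form generation.

```python
from dataclasses import dataclass

# Hypothetical list of topics too high-stakes for free-form generation.
HIGH_STAKES_TOPICS = ("refund", "bereavement", "compensation", "cancellation")

@dataclass
class PolicyStore:
    """Holds official, legally vetted policy text keyed by topic keyword."""
    policies: dict

    def lookup(self, query):
        """Return the vetted policy text for the first topic mentioned in the query, if any."""
        text = query.lower()
        for topic, policy in self.policies.items():
            if topic in text:
                return policy
        return None

def escalate_to_human(query):
    # Hypothetical placeholder: in practice this would open a ticket or hand off to live chat.
    return "Your question has been forwarded to a customer service agent."

def answer(query, store):
    """Answer high-stakes queries only from vetted policy text, or escalate to a human."""
    if any(topic in query.lower() for topic in HIGH_STAKES_TOPICS):
        policy = store.lookup(query)
        # Quote the vetted policy verbatim so the chatbot cannot contradict the official terms.
        return policy if policy is not None else escalate_to_human(query)
    # Low-stakes small talk could be handled by a generative model here.
    return "General enquiry: a generative response could be produced here."

# Example usage
store = PolicyStore({"bereavement": "Bereavement fares must be approved before travel "
                                    "and cannot be claimed retroactively."})
print(answer("Can I claim a bereavement fare after my flight?", store))
```

The design point, rather than the particular code, is what matters legally after Moffatt: the customer-facing layer should never be able to promise something the official policy does not.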
Q5: Is this a binding precedent on other courts?
As a decision from a provincial-level tribunal, it does not bind higher courts in the way a supreme court decision would. However, it is a highly persuasive and well-reasoned case that other courts and tribunals, especially in common law jurisdictions, are likely to follow when faced with similar facts. It signals a clear judicial trend.
