AI and Disputes: What does the future hold?

Author: Divya Devadoss, Associate Director, Forensic Services
15/12/2025

As businesses adopt generative AI and automation, courts worldwide are grappling with questions that did not exist a decade ago. From intellectual property battles to misrepresentation claims, AI is reshaping the disputes landscape.

In this article, we look at recent disputes involving AI to understand the trends and consider how these issues may be dealt with in the future. Because this is a new and emerging area of law, we have taken a global view, identifying international trends that are likely to influence decisions in courts around the world.

Intellectual property disputes are surging 

Generative AI relies on vast datasets, often including copyrighted works. This has triggered a wave of lawsuits from rights holders seeking to protect their intellectual property. Courts are now tasked with applying decades-old laws to cutting-edge technology.

Getty Images v Stability AI (UK)

In Getty Images (US) Inc v Stability AI Ltd [2025] EWHC 38 (Ch), Getty accused Stability AI of using its licensed images without permission to train AI models. The High Court faced over 60 issues, including:

  • trademark infringement
  • copyright infringement
  • database right infringement.

On 4 November 2025, the Court largely rejected Getty’s claims, ruling that AI models do not store exact copies of the works they are trained on and that this therefore did not amount to a copyright breach. The judgment is highly likely to be appealed and may go all the way to the Supreme Court.

Hollywood vs AI (US)

In ongoing litigation, major studios including Disney, Universal, and Warner Bros have filed suits against AI platforms such as MiniMax and Midjourney in US courts. Allegations include:

  • direct and secondary copyright infringement
  • public display of protected characters
  • distribution of infringing outputs.

Plaintiffs seek damages of up to USD 150,000 per work and permanent injunctions. Midjourney argues fair use and claims its platform is a tool for user creativity, not a direct infringer. These cases remain unresolved, but they highlight a growing tension between content owners and AI innovators.

Bartz v Anthropic PBC (US)

This landmark case marks the first major US ruling on how fair use applies to generative AI. Three authors sued Anthropic, alleging that their copyrighted books were copied to train its model, Claude. The claim focused on inputs, not outputs.

In June 2025, Judge Alsup granted Anthropic partial summary judgment, holding that scanning lawfully purchased books for AI training was 'spectacularly transformative' and akin to human learning, and therefore protected under fair use. However, the use of pirated copies will proceed to trial, signalling that US courts may draw a bright line between lawful and unlawful training data.

This ruling is controversial: can predictive models truly be equated with human learning?

Key takeaway: IP litigation is accelerating, but outcomes vary widely. Courts are beginning to distinguish between lawful and unlawful data sourcing, and the concept of “transformative use” is becoming central to AI disputes.

Breach of contract and misrepresentation risks

AI cannot sign contracts or bear legal responsibility under English law. If an AI tool miscalculates an invoice, sends false information, or “hallucinates” unrealistic promises, liability falls on the business, not the technology provider. 

Tyndaris SAM v VWM (UK)

Tyndaris marketed an AI-powered investment system as capable of predicting market sentiment using real-time data. VWM invested on that basis and lost USD 22 million. Tyndaris sued for unpaid fees; VWM counterclaimed for misrepresentation. The case settled, but it raised critical questions:

  • How should AI systems be tested before deployment?
  • What level of human oversight is appropriate?
  • How do you draft contracts for AI-driven services?

More disputes like this are to be expected as businesses increasingly rely on AI for high-stakes decisions.

Moffatt v Air Canada (Canada)

Air Canada’s chatbot misinformed a passenger about bereavement fare eligibility. The tribunal ruled that Air Canada was liable for negligent misrepresentation, rejecting Air Canada’s argument that the chatbot was a separate legal entity. This case underscores a key principle: if your AI gives incorrect or misleading advice, you are responsible.

Privacy and data security challenges

AI systems process vast amounts of personal data, making privacy compliance essential. Breaches and misuse can lead to class actions and regulatory penalties, especially in healthcare, HR, and consumer tech.

Even with robust protocols, breaches happen. Smaller AI startups often lack resources for compliance, creating fertile ground for litigation.

Key takeaways 

AI litigation is still in its early stages, but the trend lines are clear. The most common areas of dispute are:

  • intellectual property infringement
  • breach of contract
  • misrepresentation
  • privacy and data security violations.

To mitigate risk:

  • audit your AI tools for compliance and accuracy
  • update contracts to allocate AI-related liabilities
  • test thoroughly before making performance claims
  • invest in data security.

AI promises efficiency and innovation, but it also introduces new legal complexities. Courts are beginning to draw boundaries between lawful and unlawful training data, between fair use and infringement, and between human and machine accountability. Businesses that anticipate these challenges and act proactively will be best positioned to thrive in the AI-driven future.

We expect to see an increase in intellectual property, breach of contract, and misrepresentation claims. One thing is certain: our courts will need to adapt to answer the new questions that AI is presenting.


We have a specialised disputes team with a range of experience, from commercial litigation to shareholder disputes. For advice, contact our Forensic Services team or your usual Crowe contact.

 

Contact us


Alex Houston
Partner, Forensic Services, London
