
AI Models in Breach of Copyright? - ChatGPT Tells All

  • Writer: thefelttipfaerie
  • Nov 30, 2025
  • 4 min read

I asked ChatGPT if AI models were in breach of copyright (read the human take in my AI copyright blog post) and this is what it had to say:


TL;DR: Conceptually, it’s a new kind of “successive exploitation” of artworks: your images add value to a commercial system (the AI model) that can then generate images on demand and potentially replace paid commissions, but you don’t automatically get royalties or even a say in that use. That’s why many people compare it to the earlier resale-right debate: just as resale right was introduced so visual artists could share in the economic success of their works on the secondary market, there’s now an argument for some kind of “training right” or compensation scheme for AI use. For individual artists, the law isn’t fully settled yet, but the big points are: training likely involves copyright-relevant copying; scale and commercial use matter; and policy is slowly shifting toward more explicit rules and opt-out/consent mechanisms.


I don’t have personal preferences or interests (I’m basically a very fancy calculator) but...


[Image: flowchart of AI training: artwork is cached, stored in a dataset, processed for features, then model parameters are updated.]

What actually happens when art is used for AI training?


In most large-scale training setups, roughly this happens:

  1. Your artwork is downloaded or cached (often via web scraping).

  2. It’s stored temporarily or in a dataset.

  3. The system runs passes over the image to extract features/numbers.

  4. The model parameters are updated; the original file may be deleted, but the model has been influenced by it.
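The four steps above can be sketched, very loosely, in code. This is a toy illustration and not how any real model trains: the "image" here is just a list of random numbers and the "model" is a pair of running averages. But it shows the key point, that the original file's influence survives in the parameters even after the copy is deleted.

```python
import random

random.seed(0)  # deterministic toy run

def download_image():
    # Steps 1-2: fetch/cache the artwork (simulated here as 16 pixel values).
    return [random.random() for _ in range(16)]

def extract_features(pixels):
    # Step 3: reduce the image to numbers (mean brightness, rough contrast).
    mean = sum(pixels) / len(pixels)
    spread = max(pixels) - min(pixels)
    return [mean, spread]

def train(num_images=100, lr=0.1):
    weights = [0.0, 0.0]  # "model parameters"
    for _ in range(num_images):
        pixels = download_image()
        feats = extract_features(pixels)
        # Step 4: nudge the parameters toward the image's features...
        weights = [w + lr * (f - w) for w, f in zip(weights, feats)]
        del pixels  # ...then discard the copy; its influence stays in weights
    return weights

weights = train()
print(weights)  # two numbers shaped by every image that passed through
```

Deleting `pixels` at the end of each loop is the point of the sketch: the dataset copy can vanish, yet the parameters it touched remain, which is why the copying done along the way still matters legally.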


Each of those steps (downloading, copying, processing) can count as:

  • Reproduction (copying the work), and sometimes

  • Communication to the public (if it’s re-hosted in a dataset that others can access).


So even if the AI doesn’t “spit out” your picture later, using it in training usually does involve copyright-relevant acts.


“How much can you copy?” vs. AI training


With a human artist, we ask: "Did you take a substantial part of the earlier work?"


With AI training, you might never see a literal copy output. But:

  • The training phase can itself be unauthorized copying, even if the final outputs are new.

  • Think of it like making a giant “study sketchbook” from everyone’s art without asking, then using that to build a style engine.


Courts are only starting to answer whether this is allowed under existing exceptions (e.g. “text and data mining” exceptions in the EU, or “fair use” in the US), or if it requires permission.



The EU / UK angle on copyright and AI

There are a few key concepts:


Text and Data Mining (TDM) exceptions

In the EU, the DSM Directive (2019/790) created specific exceptions for text and data mining, which can cover training machine-learning systems:

  • Art. 3 – TDM for research organisations and cultural heritage institutions (non-commercial, research-focused).

  • Art. 4 – TDM exception for any user, but with an important catch: rights-holders can opt out (e.g. via machine-readable means like robots.txt / metadata).


So in principle:

  • Some AI training on publicly available works might be legal under Art. 4 unless the rights-holder has opted out.

  • But:

    • The dataset builders must actually comply with those opt-outs.

    • This doesn’t automatically legalise all uses (especially in commercial settings with unclear TDM implementation).


Member States transpose this into national law, often with their own twists and guidance.


Moral rights & integrity


Even if training is allowed in some technical sense, visual artists in many EU countries have strong moral rights (e.g. right of integrity, right to be credited). Using their art to generate endless derivatives “in their style” could raise separate moral-rights arguments, especially if the outputs distort or damage their reputation.


[Image: laptop and desktop illustrating "Text and Data Mining" and "Successive Exploitation".]

“Successive exploitations” vs. AI training

Remember the resale-right recital, the idea that visual artists should share in the economic success of their works on the secondary market?


AI training is arguably a massive new “successive exploitation”:

  • Your work contributes value to a system that can then:

    • Generate images on demand,

    • Be sold/licensed to others,

    • Replace commissions you might have gotten.


BUT there’s currently no built-in royalty system for training use, the way there is for:

  • Resale right (for physical artworks)

  • Performance/mechanical royalties (for music)

  • Reprographic levies (for photocopying in some countries)


So many artists’ argument is:

“If AI companies are exploiting our work at scale, there should be either consent + licence fees, or a statutory system to compensate us.”


That’s exactly analogous to the resale-right justification: giving visual artists a way to share in new economic uses of their works.



What this means practically for you (as an artist)

Because the law is unsettled and varies by country, there’s no single neat rule like “yes, always legal” or “no, always illegal”. But:

  • Training on copyrighted artworks without permission is not obviously harmless: it almost certainly involves reproduction; whether it’s permitted reproduction depends on local exceptions (TDM/fair use etc.).

  • If a model is explicitly marketed as “trained on artists X, Y, Z”, those artists may:

    • Claim copyright infringement in the training use, and/or

    • Argue misuse of their personality/brand (passing off, unfair competition, moral rights, etc.).


  • You can sometimes opt out of future scraping/training:

    • Via robots.txt or “noai/nocrawl” meta tags where platforms respect them.

    • Via specific platform policies (some hosts now offer “no AI training” flags).

    • It’s imperfect, but it’s at least a signal.
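To make that concrete, a machine-readable opt-out can look like the snippets below. These are examples only: the crawler names (OpenAI's GPTBot, Common Crawl's CCBot) and the `noai` directives are honoured by some crawlers and platforms but by no means all, so treat them as a signal rather than a guarantee.

```
# robots.txt – ask known AI-training crawlers not to fetch anything
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /
```

And, where a platform supports it, a page-level meta tag:

```
<meta name="robots" content="noai, noimageai">
```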



Are AI models in breach of copyright?

  • “How much can you copy?” For a human: we look at visible similarity. For AI: the real question is whether the act of training itself is a permitted form of copying. Courts will increasingly focus on that, not just final outputs.

  • Freehand sketch as reference vs. data mining: A single human studying a few works is one thing; an AI ingesting millions of artworks to power a product is another scale entirely. Even if both are “learning”, the law often treats bulk, systematic copying very differently from individual, private study.

  • Resale right analogy: Just as the EU introduced resale right to let visual artists share in the economic upside of the secondary art market, you can imagine:

    • A future “training right” (or compulsory licensing scheme) where AI training on artworks either:

      • Requires permission/licence, or

      • Is allowed but subject to a statutory remuneration (a levy / royalty pool distributed to artists whose work was used).


We’re not there yet, but that’s the direction a lot of policy debates are heading.


Text has been lightly edited to fit the format of a blog post. Similar to a guest post submission.


© 2025 by Rose Barfield "The Felt Tip Faerie"  BE0664509386
