Those claiming AI training on copyrighted works is “theft” misunderstand key aspects of copyright law and AI technology. Copyright protects specific expressions of ideas, not the ideas themselves. When AI systems ingest copyrighted works, they’re extracting general patterns and concepts - the “Bob Dylan-ness” or “Hemingway-ness” - not copying specific text or images.

This process is akin to how humans learn by reading widely and absorbing styles and techniques, rather than memorizing and reproducing exact passages. The AI discards the original text, keeping only abstract representations in “vector space”. When generating new content, the AI isn’t recreating copyrighted works, but producing new expressions inspired by the concepts it’s learned.
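
To make the “vector space” point concrete, here’s a deliberately tiny sketch in Python. It’s a toy bag-of-words hash, not a real transformer, and every name in it is invented for illustration; the point is only that text goes in, a handful of numbers comes out, and the original wording can’t be recovered from them.

```python
# Toy illustration (not a real LLM): text is reduced to a fixed-size
# numeric summary; the original wording cannot be recovered from it.
import hashlib

VECTOR_SIZE = 8  # real models use thousands of dimensions

def embed(text: str) -> list[float]:
    """Map text to a small, lossy numeric representation."""
    vec = [0.0] * VECTOR_SIZE
    for word in text.lower().split():
        # Hash each word into one of the vector's dimensions.
        slot = int(hashlib.md5(word.encode()).hexdigest(), 16) % VECTOR_SIZE
        vec[slot] += 1.0
    total = sum(vec) or 1.0
    return [v / total for v in vec]  # normalized; word order is already gone

print(embed("The times they are a-changin"))  # eight numbers, no lyrics
```

Real embeddings are learned rather than hashed, but the lossiness is the same in kind: what survives is a direction in vector space, not a copy of the sentence.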

This is fundamentally different from copying a book or song. It’s more like the long-standing artistic tradition of being influenced by others’ work. The law has always recognized that ideas themselves can’t be owned - only particular expressions of them.

Moreover, there’s precedent for this kind of use being considered “transformative” and thus fair use. The Google Books project, which scanned millions of books to create a searchable index, was ruled legal despite protests from authors and publishers. AI training is arguably even more transformative.

While it’s understandable that creators feel uneasy about this new technology, labeling it “theft” is both legally and technically inaccurate. We may need new ways to support and compensate creators in the AI age, but that doesn’t make the current use of copyrighted works for AI training illegal or unethical.

For those interested, this argument is nicely laid out by Damien Riehl in FLOSS Weekly episode 744. https://twit.tv/shows/floss-weekly/episodes/744

  • mriormro@lemmy.world · 12 days ago

    You know, those obsessed with pushing AI would do a lot better if they dropped the patronizing tone in every single one of their comments defending it.

    It’s always fun reading “but you just don’t understand”.

    • FatCrab@lemmy.one · 12 days ago

      On the other hand, it’s hard to have a serious discussion with people who insist that building an LLM or diffusion model amounts to copying pieces of material into an obfuscated database. And when the typical reply to an attempted explanation is “that isn’t the point!” with no elaboration, it strongly implies to me that some people just want to be pissy and don’t want to hear how they may have been manipulated into taking a pro-corporate, hyper-capitalist position on something.

      • mriormro@lemmy.world · 12 days ago

        I love that the collectivist ideal of sharing all that we’ve created for the betterment of humanity is being twisted into this disgusting display of corporate greed and overreach. OpenAI doesn’t need shit. They don’t have an inherent right to exist but must constantly make the case for its existence.

        The bottom line is that if corporations need data they themselves cannot create in order to build and sell a service, then they must pay for it. One way or another.

        I see parallels here with how aquifers and water rights have been handled, and I’d argue we’ve fucked that up as well.

        • VoterFrog@lemmy.world · 12 days ago

          They do, though. They purchase data sets from people with licenses, use open source data sets, and/or scrape publicly available data themselves. Worst case, they could download pirated data sets, but then the copyright infringement is committed by the entity distributing the data without legal authority.

          Beyond that, copyright doesn’t prevent a work from being used to create something else, as long as you’re not distributing significant portions of it. Movie and book reviewers won that legal battle long ago.

        • FatCrab@lemmy.one · 12 days ago

          Training data IS a massive industry already. You don’t see it because you probably don’t work in a field that deals with it directly. I work in medtech, and millions and millions of dollars are spent acquiring training data every year. Should some new IP right be recognized over using otherwise legally obtained data to train AI, it would almost certainly be contracted away to hosting platforms via perfectly sound ToS and then further monetized, such that only large and well-funded corporate entities could utilize it.

        • FatCrab@lemmy.one · 12 days ago

          I have no personal interest in the matter, tbh. But I want people to actually understand what they’re advocating for and what the downstream effects would inevitably be. Model training is not inherently infringing activity under current IP law. It just isn’t. Neither the law, legislative or judicial, nor the actual engineering and operation of these models supports a finding of infringement. Effectively, that means new legislation is needed to handle the issue. Most people are effectively advocating for an entirely new IP right in the form of a “right to learn from,” which further assetizes ideas and intangibles and shuffles us deeper into end-stage capitalism, which most advocates are presumably also against.

          • yamanii@lemmy.world · 12 days ago

            I’m pretty sure most people are just mad that this is basically “rules for thee but not for me”: why should a company be free to pirate when I can’t? Case in point: the Internet Archive losing its case against a publisher. That’s the crux of the issue.

            • FatCrab@lemmy.one · 11 days ago

              I get that that’s how it feels given how it’s being reported, but because of how this sort of ML works, what the Internet Archive does and what an arbitrary GPT does are completely different: the former is an explicit, straightforward copy relying on a fair use defense, while the latter is the industrialized version of taking intensive notes into a notebook while reading a book. That the outputs of such models are entirely devoid of IP protection actually makes a pretty big difference, imo, in their usefulness to the entities we’re most concerned about, but that certainly doesn’t address the economic dilemma of putting an entire sector of labor at risk in narrow areas.
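
              To make that contrast concrete, here’s a deliberately crude Python sketch. The filename and the one-line “book” are invented for illustration, and real models keep far richer statistics than word counts, but the asymmetry is the one described above: one path keeps the expression itself, the other keeps only patterns extracted from it.

              ```python
              from collections import Counter

              book = "call me ishmael some years ago never mind how long precisely"

              # Archive-style copying: the exact expression is kept and served back.
              archive = {"moby_dick_excerpt.txt": book}

              # Training-style "note taking": only summary statistics survive;
              # word order, and with it the expression, is thrown away.
              notes = Counter(book.split())

              print(archive["moby_dick_excerpt.txt"])  # the verbatim text returns
              print(notes.most_common(2))              # [('call', 1), ('me', 1)] - patterns, not prose
              ```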