Apparently, stealing other people’s work to create a product for money is now “fair use,” according to OpenAI, because they are “innovating” (stealing). Yeah. Move fast and break things, huh?

“Because copyright today covers virtually every sort of human expression—including blogposts, photographs, forum posts, scraps of software code, and government documents—it would be impossible to train today’s leading AI models without using copyrighted materials,” wrote OpenAI in the House of Lords submission.

OpenAI claimed that the authors in that lawsuit “misconceive[d] the scope of copyright, failing to take into account the limitations and exceptions (including fair use) that properly leave room for innovations like the large language models now at the forefront of artificial intelligence.”

  • Pup Biru@aussie.zone
    11 months ago

    you know how the neurons in our brain work, right?

    because if not, well, it’s pretty similar… unless you say there’s a soul (in which case we can’t really have a conversation based on fact alone), we’re just big ol’ probability machines with tuned weights based on past experiences too

    • ParsnipWitch@feddit.de
      11 months ago

      “Soul” is the word we use for something we don’t scientifically understand yet. Unless you have discovered how human brains work, in which case I congratulate you on your Nobel Prize.

      You can abstract a complex concept so much that it becomes wrong. And abstracting how the brain works down to “it’s a probability machine” is definitely a wrong description, especially when you want to use it as an argument for similarity to other probability machines.

      • Pup Biru@aussie.zone
        11 months ago

        “Soul” is the word we use for something we don’t scientifically understand yet

        that’s far from definitive. another definition is

        A part of humans regarded as immaterial, immortal, separable from the body at death

        but since we aren’t arguing semantics, it doesn’t really matter, other than as a reminder that just because you have an experience, belief, or view doesn’t make it the only truth

        of course i didn’t discover categorically how the human brain works in its entirety, however i’m sure most scientists would agree that the brain performs its functions by neurons firing. if you disagree with that statement, the burden of proof is on you. the part we don’t understand is how it all connects up - the emergent behaviour. we understand the basics; that’s not in question, yet it seems to be what you’re questioning

        You can abstract a complex concept so much it becomes wrong

        it’s not abstracted; it’s simplified… if what you’re saying were true, then simplifying complex organisms down to a petri dish for research would be “abstracted” so much it “becomes wrong”, which is categorically untrue… it’s an incomplete picture, but that doesn’t make it either wrong or abstract

        *edit: sorry, it was another comment where i specifically said belief; the comment you replied to didn’t state that, however most of this still applies regardless

        i laid out an a-leads-to-b-leads-to-c argument and stated that it’s simply a belief, however it’s a belief that’s based in logic and simplified concepts. if you want to disagree, that’s fine, but don’t act like you have some “evidence” or “proof” to back up your claims… all we’re talking about here is belief, because we simply don’t know - neither you nor i

        and given that all of this is based on belief rather than proof, the only thing that matters is what we as individuals believe about the input and output data (because the bit in the middle has no definitive proof either way)

        if a human consumes media and writes something and it looks different, that’s not a violation

        if a machine consumes media and writes something and it looks different, you’re arguing that is a violation

        the only difference here is your belief that a human brain somehow has something “more” than a probabilistic model going on… but again, that’s far from certain

    • Phanatik@kbin.social
      11 months ago

      You are spitting out basic points and attempting to draw similarities because our brains are capable of something superficially similar. The difference between what you’ve said and what LLMs do is that we have experiences from which we can glean a variety of information. An LLM sees text, and all it’s designed to do is say “x is more likely to appear after y than z.” If you fed it nonsense, it would regurgitate nonsense. If you feed it text from racist sites, it will regurgitate that same language, because that’s all it has seen.

      You’ll read this and think “that’s what humans do too, right?” Wrong. A human can be fed these things and still reject them. Someone else in this thread has made some good points regarding this, but I’ll state them here as well.

      An LLM will tell you information, but it has no cognition of what it’s telling you. It has no idea whether it’s right or wrong; its job is to convince you that it’s right, because that’s the success state. If you tell it it’s wrong, that’s a failure state. The more you speak with it, the more failure states it accumulates and the more likely it is to cut off communication, because it’s not reaching a success - it’s not giving you what you want.

      The longer the conversation goes on, the crazier LLMs get as well, because it’s too much to process at once: holding all that context in memory while trying to predict the next token. Our brains do this easily, and so much more. To claim an LLM is intelligent is incredibly misguided; it is merely the imitation of intelligence.
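
      The “x is more likely to appear after y” idea in the comment above can be sketched as a toy bigram model. This is a deliberate over-simplification for illustration only - real LLMs use neural networks over subword tokens, not raw word counts - but the core objective of estimating the next token’s probability is the same:

```python
from collections import Counter, defaultdict

# Toy bigram "language model": count which word most often
# follows each word in a tiny corpus, then predict the next word.
corpus = "the cat sat on the mat the cat ate".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    # Return the most frequent successor of `word` seen in the corpus.
    return follows[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat" follows "the" twice, "mat" once
```

      As the comment says, such a model only reflects its training text: feed it nonsense and it predicts nonsense, with no notion of whether its output is true.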