• RGB3x3@lemmy.world · 6 months ago

    Are there any guarantees that harmful images weren’t used in these AI models? Based on how image generation works now, it’s very likely that harmful images were used to train the model.

    And if a person is using a model based on harmful training data, they should be held responsible.

    However, the AI owner/trainer bears even more responsibility for perpetuating harm to children and should be prosecuted appropriately.

    • Eezyville@sh.itjust.works · 6 months ago

      And if a person is using a model based on harmful training data, they should be held responsible.

      I will have to disagree with you for several reasons.

      • You are still making assumptions about a system you know absolutely nothing about.
      • By your logic, if something is born from something that caused suffering to others (in this example, AI trained on CSAM), then the users of that product should be held responsible for the crime committed to create it.
        • Does that apply to every product/result created from human suffering or just the things you don’t like?
        • Will you apply that logic to the prosperity of Western nations built on the suffering of indigenous and enslaved people? Should everyone who benefits from Western prosperity be held responsible for the crimes committed against those people?
        • What about medicine? Two examples are the Tuskegee Syphilis Study and the cancer cells of Henrietta Lacks. Medicine benefited greatly from both, but crimes were committed against the people involved. Should every patient in a cancer program that benefited from Ms. Lacks’ cells also be required to pay compensation to her family? The doctors who used her cells without permission didn’t.
        • Should we also talk about the advances in medicine made by Nazis who experimented on Jews and others during WW2? We used that data in our manned space program, paving the way to all the benefits we get from space technology.
      • PotatoKat@lemmy.world · 6 months ago

        The difference between the things you’re listing and CSAM is that those other things have actual utility outside of getting off. Were our phones made with human suffering? Probably, but phones have many more uses than making someone cum. Are all those things wrong? Yeah, but at least some good came out of them beyond giving people sexual gratification directly from the harm of others.

      • aceshigh@lemmy.world · 6 months ago

        The topic that you’re choosing to focus on is really interesting. What are your values?

        • Eezyville@sh.itjust.works · 6 months ago

          My values are none of your business. Try attacking my arguments instead of looking for something about me to attack.

    • aesthelete@lemmy.world · 6 months ago

      Are there any guarantees that harmful images weren’t used in these AI models?

      Lol, highly doubt it. These AI assholes pretend that all the training data randomly fell into the model (off the back of a truck) and that they cannot possibly be held responsible for that or know anything about it because they were too busy innovating.

      There’s no guarantee that most regular porn sites don’t contain CSAM or other exploitative imagery and video (of sex trafficking victims). There’s absolutely zero chance that there’s any kind of guarantee.