cross-posted from: https://programming.dev/post/3974080

Hey everyone. I made a casual survey to see if people can tell the difference between human-made and AI-generated art. Any responses would be appreciated; I’m curious to see how accurately people can tell the difference (especially those familiar with AI image generation).

  • Sekoia@lemmy.blahaj.zone · 1 year ago

    Personally, I have no issue with models made from stuff obtained with explicit consent. Otherwise you’re just exploiting labor without consent.

    (Also if you’re just making random images for yourself, w/e)

    ((Also also, text models are a separate debate and imo much worse considering they’re literally misinformation generators))

    Note: if anybody wants to reply with “actually AI models learn like people so it’s fine”, please don’t. No they don’t. Bugger off. Here, have a source: https://arxiv.org/pdf/2212.03860.pdf

    • Even_Adder@lemmy.dbzer0.com · 1 year ago

      This paper is just about stock photos and video game art with enough duplicates or variations that they didn’t get cut from the training set. Those repeated images appeared frequently enough to cause overfitting, which is something we already knew. That doesn’t really prove whether diffusion models learn like humans or not. Not that I think they do.

      • Sekoia@lemmy.blahaj.zone · 1 year ago

        Sure, it’s not proof, but it’s a good starting point. Non-overfitted images would still show this effect (to a lesser extent), and it would never happen with a human. And it’s not like the prompts were the image labels; the model just decided to use the stock image as a template (obvious in the case of the painting).