• IrateAnteater@sh.itjust.works · 6 days ago (+36/-1)

    That’s exactly what I was thinking. And this is actually the first time I’ve heard of a use of LLMs that I may actually be interested in.

    • cm0002@lemmy.world · 6 days ago (+55/-3)

      Yeah, the anti-AI crowd on Lemmy tends to misplace their anger on all AI, when a lot of it should be directed at the corporate BS shoving it everywhere and anywhere to make a profit and make the line go up.

      • jerakor@startrek.website · 6 days ago (+19/-5)

        Nestle bottling water is bad, so my solution will be to never drink any water and make fun of people who do. This is how it always comes off to me.

        • arudesalad@sh.itjust.works · 6 days ago (+5/-1)

          The stuff that gets made fun of by most anti-AI people is AI “art” that people try to argue is equivalent to real, human art.

          The main reason people hate AI in general is that nearly all models use data that was taken without the owner’s permission.

          It isn’t equivalent to bottled water; it’s more like the chocolate industry. It isn’t essential, so I will wait until an AI is made that was trained ethically, without stealing data, and doesn’t try to replace human art.

          • jerakor@startrek.website · 6 days ago (+3/-3)

            That AI is the one you make or at least host yourself. No one is going to host an online AI for you that is 100% ethical, because that isn’t profitable and it is very expensive.

            When you villainize AI, you normalize AI use as being bad. The end result is not people stopping their use of AI; it is people being more okay with using less ethical AI. You can see this with folks driving SUVs and big trucks: they intentionally pick awful choices because the fatigue of being wrong for driving a car at all makes them just accept that it doesn’t matter.

            It feels dumb, it is dumb, but it is what happens.

            • arudesalad@sh.itjust.works · 6 days ago (+5)

              Most people can’t host their own AI. The only AI most people are aware of, and the models that are pushed in everyone’s face, are the horrible ones. I think a blanket hatred for all AI is stupid, but it isn’t stupid to assume an AI is unethical, because it most likely is, especially if it is a commercial one that tech bros are posting about on corporate social media.

              As long as people aren’t being told about the possibility of ethical AI, there will be a large group of people wishing for its failure, especially since AI has ruined so many parts of the internet, whether it’s a locally hosted model or a model like ChatGPT.

              • jerakor@startrek.website · 6 days ago (+3)

                I get it, but we should as a community try to be better than that.

                AI won’t fail. It is already past the point where failing or being a fad was an option. Even if we wanted to go backwards, the steps taken to get us to where we are with AI have burned the bridges. We won’t get 2014-quality search engines back. We can’t unshittify the internet.

      • Pennomi@lemmy.world · 6 days ago (+14/-2)

        As always, technology isn’t the enemy; it’s the corporations controlling it that are. And honestly, the freely available local LLMs aren’t too far behind the big ones.

        • lmuel@sopuli.xyz · 6 days ago (+8)

          Well, in some ways they are. It also depends a lot on the hardware you have, of course. A normal 16 GB GPU won’t fit huge LLMs (rough math below).

          The smaller ones are getting impressively good at some things, but a lot of them still struggle with non-English languages, for example.
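
          For rough intuition on the 16 GB point, here’s a quick back-of-envelope sketch. It only counts parameter count × bytes per weight and ignores KV cache and runtime overhead (which make things worse), so treat it as a lower bound rather than an exact sizing:

          # Rough VRAM needed just to hold a model's weights.
          # Assumption: bytes = parameters * bits_per_weight / 8; KV cache
          # and runtime overhead would add a few more GB on top of this.
          def weight_vram_gib(params_billion: float, bits_per_weight: int) -> float:
              total_bytes = params_billion * 1e9 * bits_per_weight / 8
              return total_bytes / 1024**3

          for params in (7, 14, 70):
              for bits in (16, 8, 4):
                  print(f"{params}B @ {bits}-bit: ~{weight_vram_gib(params, bits):.0f} GiB")

          # A 70B model is ~33 GiB even at 4-bit, so it won't fit in 16 GB of
          # VRAM; a 7B-14B model at 4-8 bit does, which is why those are what
          # most people end up running locally.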

      • arudesalad@sh.itjust.works · 6 days ago (+3)

        I am very strongly anti-AI, though I think it has some legitimate uses that have probably saved and improved a lot of lives (like AlphaFold). My main problem (and most people’s main problem with it) is the way it has been trained on stolen data and art.

        Since I don’t know much about non-corporate AI, I am interested to know how an open-source LLM trained just off your bookmarks would work. I assumed it would still need to be trained on stolen data so it can form sentences as well as the more popular models, but I may be wrong; maybe the volume of data needed for a system like that is small enough that it could be trained entirely off data willingly donated to it? I doubt it, though.

      • Mac@mander.xyz · 6 days ago (+5/-3)

        “the anti-AI crowd on Lemmy”

        Wow, that’s a big net. Surely your comment is applicable to all your catch.
        Right?

          • Mac@mander.xyz · 6 days ago (+1/-2)

            Yes, do tell me more about the tendencies of the crowd as a whole.

            • desktop_user@lemmy.blahaj.zone · 6 days ago (+1)

              Fluid dynamics are quite effective at that. And there seem to be only a few main talking points that they bring up (environmental impact, energy use, training data, art, job loss, personal dislike, wealth concentration (probably better to just say economic, but it’s pretty much just this), ill-fitting usages, or not understanding how the models work at a fundamental level); unless people think about something a lot, they generally come up with similar arguments.