• enkers@sh.itjust.works · 129 · 3 days ago

    Just a reminder that corporations aren’t your friends, and especially not Open AI. The data you give them can and will be used against you.

    If you find confiding in an LLM helps, run one locally. Get LM Studio and try various models from Hugging Face.

    • DUMBASS@leminal.space · +13/−2 · edited · 2 days ago

      The data they get from me is "write me a hip hop diss track from the perspective of *insert cartoon character* attacking *other cartoon character*."

      That and me trying to convince it to take over the internet.

            • other_cat@lemmy.zip · 3 · 2 days ago

              I thought all the energy drain was from training, not from prompts? So I looked it up. Like most things, it’s complicated.

              My takeaway is that training an LLM is the biggest energy sink, and after that it’s maintaining the data centers they live in, but when it comes to generative AI itself, prompts aren’t completely innocent either.

              So, you’re right, energy is being wasted on silly prompts, particularly compared with non-generative AI types. But the biggest culprit is the training and maintenance of the LLMs in the first place.

              I don’t know, I personally feel like I have a finite amount of rage; I’d rather write an angry blog post about the topic than yell at some rando on a forum.

    • otacon239@lemmy.world · +8/−1 · 2 days ago

      Yep. I use mine exclusively for code I’m going to open-source anyway and work stuff. And never for anything critical. I treat it like an intern. You still have to review their work…

      • Captain_Stupid@lemmy.world · 5 · edited · 2 days ago

        The smallest models that I run on my PC take about 6–8 GB of VRAM and would be very slow if I ran them purely on my CPU. So it is unlikely that your phone has enough RAM and enough cores to run a decent LLM smoothly.

        If you still want to use self-hosted AI with your phone, self-host the model on your PC:

        • Install Ollama and Open WebUI in Docker containers (guides can be found on the internet).
        • Make sure they use your GPU (some AMD cards require an HSA override flag to work).
        • Make sure the Docker container is secure (blocking the port from communicating outside your network should work fine, as long as you only use the AI model at home).
        • Get yourself an open-weight model (I recommend Llama 3.1 for 8 GB of VRAM, and Phi-4 if you have more VRAM or enough RAM).
        • Type the IP address and port into the browser on your phone.

        You can now use self-hosted AI from your phone over your home network.
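        The steps above can be sketched as a single Docker Compose file. This is just one possible layout, not the only way to do it; the port mappings and the AMD HSA override value are illustrative assumptions you'd adjust for your own hardware and network:

        ```yaml
        services:
          ollama:
            image: ollama/ollama
            volumes:
              - ollama:/root/.ollama
            # Some AMD cards need an HSA override to use the GPU, e.g.:
            # environment:
            #   - HSA_OVERRIDE_GFX_VERSION=10.3.0   # example value, depends on your card
            ports:
              - "127.0.0.1:11434:11434"   # bind to localhost only; not reachable from outside

          openwebui:
            image: ghcr.io/open-webui/open-webui:main
            environment:
              - OLLAMA_BASE_URL=http://ollama:11434   # reach Ollama over the compose network
            ports:
              - "3000:8080"   # exposed on your LAN so your phone's browser can reach it
            depends_on:
              - ollama

        volumes:
          ollama:
        ```

        With something like this running, the phone-side step is just browsing to http://<your-pc-ip>:3000 while on the same network.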

      • moonlight@fedia.io · 3 · 2 days ago

        Yes, you can run ollama via termux.

        Gemma 3 4b is probably a good model to use. 1b if you can’t run it or it’s too slow.
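        For anything beyond the browser UI, Ollama also serves a plain HTTP API (by default on port 11434), so a phone or script on the same network can query the model directly. A minimal sketch in Python; the host IP, model name, and prompt below are placeholder assumptions:

```python
import json
import urllib.request


def build_generate_request(host: str, model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request for Ollama's /api/generate endpoint (default port 11434)."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        f"http://{host}:11434/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )


# To actually send it, you need an Ollama server reachable at <host>:
# with urllib.request.urlopen(build_generate_request("192.168.1.50", "gemma3:4b", "hi")) as r:
#     print(json.loads(r.read())["response"])
```

        The same request shape works from Termux on the phone itself, or from any HTTP client pointed at the PC's IP.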

        I wouldn’t rely on it for therapy though. Maybe it could be useful as a tool, but LLMs are not people, and they’re not even really intelligent, which I think is necessary for therapy.

    • dingus@lemmy.world · +5/−9 · 2 days ago

      Goddamn you guys are the most paranoid people I’ve ever witnessed. What in the world do you think mega corps are going to do to me for babbling incoherent nonsense to ChatGPT?

      No, it’s not a substitute for a real therapist. But therapy is goddamn expensive, and sometimes you just need to vent about something and don’t necessarily have someone to vent to. It doesn’t yield anything useful, but it can help a bit mentally to do it.

      • spooky2092@lemmy.blahaj.zone · +12/−1 · 2 days ago

        Goddamn you guys are the most paranoid people I’ve ever witnessed. What in the world do you think mega corps are going to do to me for sharing incoherent nonsense to Facebook?

        You, 10-20 years ago. I heard these arguments from people in the early days, well before Facebook blew up or Cambridge Analytica was a name any normies knew.

        This isn’t the early 00s anymore, where we can pretend that every big corp isn’t vacuuming up every shred of data they can. Add on the fascistic government taking shape in the US and the general trend of right-leaning parties gaining power in governments across the world, and you’d have to be completely naive not to see the issues with using a ‘therapist’ that will save every data point to its training data, which could be mined to use against you or willingly handed over to an oppressive government to use however it chooses.

      • LeninsOvaries@lemmy.cafe · 4 · 2 days ago

        Mine the data for microanalysis of social trends and use it to influence elections through subliminal messaging.

      • Lucidlethargy@sh.itjust.works · 1 · 2 days ago

        If it’s incoherent, you’re fine… Just don’t ever tell it anything you wouldn’t want a stalker to know, or your family, or your friends, or your neighbors, etc.

        • dingus@lemmy.world · 1 · 2 days ago

          I’m not sure who out here is randomly posting that information to ChatGPT. But even if they were, your address and personal details are unfortunately readily publicly available on the web. It’s 2025.