• untakenusername@sh.itjust.works
    30 days ago

    Actually, please don’t use ChatGPT for therapy — they record everything people put in there and use it to further train their AI models. If you wanna use AI for that, use one of those self-hosted models on your own computer or something, like those from ollama.com.
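    For anyone curious, getting a local model going with Ollama is roughly this (the model name is just an example — check ollama.com for current ones):

    ```shell
    # Install Ollama (official install script for Linux/macOS)
    curl -fsSL https://ollama.com/install.sh | sh

    # Download a small open-weights model and chat with it locally;
    # nothing you type leaves your machine
    ollama pull llama3.2
    ollama run llama3.2
    ```
    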

    • Pup Biru@aussie.zone
      29 days ago

      don’t do that either… llms say things that sound reasonable but can be incredibly damaging when used for therapy. they are not therapists

  • enkers@sh.itjust.works
    1 month ago

    Just a reminder that corporations aren’t your friends, and especially not Open AI. The data you give them can and will be used against you.

    If you find confiding in an LLM helps, run one locally. Get LM Studio, and try various models from hugging face.

      • moonlight@fedia.io
        30 days ago

        Yes, you can run ollama via termux.

        Gemma 3 4b is probably a good model to use. 1b if you can’t run it or it’s too slow.
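        Roughly, that looks like this on Android (a sketch — it assumes the ollama package is available in the Termux repos, which can vary):

        ```shell
        # Inside Termux on Android
        pkg update && pkg install ollama

        # Start the local server in the background, then chat
        ollama serve &
        ollama run gemma3:4b   # or gemma3:1b on weaker phones
        ```
        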

        I wouldn’t rely on it for therapy though. Maybe it could be useful as a tool, but LLMs are not people, and they’re not even really intelligent, which I think is necessary for therapy.

    • dingus@lemmy.world
      1 month ago

      Goddamn you guys are the most paranoid people I’ve ever witnessed. What in the world do you think mega corps are going to do to me for babbling incoherent nonsense to ChatGPT?

      No, it’s not a substitute for a real therapist. But therapy is goddamn expensive, and sometimes you just need to vent about something and you don’t necessarily have someone to vent to. It doesn’t yield anything useful, but it can help a bit mentally to do so.

      • Lucidlethargy@sh.itjust.works
        30 days ago

        If it’s incoherent, you’re fine… Just don’t ever tell it anything you wouldn’t want a stalker to know, or your family, or your friends, or your neighbors, etc.

        • dingus@lemmy.world
          30 days ago

          I’m not sure who out here is randomly posting that information to ChatGPT. But even if they were, your address and personal details are unfortunately readily publicly available on the web. It’s 2025.

    • Lucidlethargy@sh.itjust.works
      30 days ago

      Yes, this is a massive problem with them these days. They have some useful information if you’re willing to accept that they WILL lie to you, but it’s often very frustrating to seek meaningful answers. Like, it’s not even an art form… it’s gambling.

  • Captain_Stupid@lemmy.world
    1 month ago

    If you use AI for therapy, at least self-host, and keep in mind that its goal is not to help you but to have a conversation that satisfies you. You are basically talking to a yes-man.

    Ollama with Open WebUI is relatively easy to install, and you can even use something like edge-tts to give it a voice.
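    A rough sketch of that setup, assuming you have Docker and an Ollama instance already running on its default port (adapt to your install):

    ```shell
    # Run Open WebUI in Docker, pointed at the local Ollama instance
    docker run -d -p 3000:8080 \
      --add-host=host.docker.internal:host-gateway \
      -v open-webui:/app/backend/data \
      --name open-webui \
      ghcr.io/open-webui/open-webui:main

    # Then browse to http://localhost:3000 and pick a model you've
    # pulled with `ollama pull <model>`. Text-to-speech (e.g. edge-tts)
    # can be configured in the audio settings.
    ```
    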

    • Robust Mirror@aussie.zone
      30 days ago

      Therapy is more about talking to yourself anyway. A therapist’s job generally isn’t to give you the answers, but to help lead you down the right path.

      If you have serious issues get an actual professional, but if you’re mostly just trying to process things and understand yourself or a situation better, it’s not bad.

        • Robust Mirror@aussie.zone
          30 days ago

          I assumed you had 2 points, the self hosting point about what you’re saying now, and

          “keep in mind that its goal is not to help you but to have a conversation that statisvies you. You are basicly talking to a yes-man.”

          about its ability to be a good therapist or not in general. I was responding to that. Sorry if I misunderstood.

          • Captain_Stupid@lemmy.world
            29 days ago

            It’s alright, my second point was more something to keep in mind and not an actual argument against using AI for therapy.

      • Pup Biru@aussie.zone
        29 days ago

        to lead you down the right path, yes… llms will lead you down an arbitrary path, and when that path is biased by your own negative feelings it can be incredibly damaging

  • Lucidlethargy@sh.itjust.works
    30 days ago

    This is a severely unhealthy thing to do. Stop doing it immediately…

    ChatGPT is incredibly broken, and it’s getting worse by the day. Seriously.

  • AItoothbrush@lemmy.zip
    1 month ago

    And then I just do the stupidest shit ever, mostly trying to gaslight ChatGPT into agreeing with me about random stuff that’s actually incorrect. Btw, PSA: please never use AI for school or work; it produces slop and acts like a crutch that you’re going to start relying on. I’ve seen it so many times in the people around me. AI is like a drug.

      • AbsentBird@lemm.ee
        30 days ago

        Since studying machine learning I’ve become a lot less opposed to AI as a concept, and more specifically opposed to corporate/cloud LLMs.

        Like a simple on-device model that helps turn speech to text isn’t something to be opposed, it’s great for privacy and accessibility. Same for the models used by hospitals for assistive analysis of medical imaging, or to remove background noise from voice calls.

        People don’t seem to think of that as ‘AI’ anymore though, it’s like these big corporations have colonized the term for their buggy wasteful products. Maybe we need new terminology.

    • TronBronson@lemmy.world
      1 month ago

      To be fair, it is actually quite useful from a business standpoint. I think it’s a tool that you should understand. It can be a crutch but it can also be a pretty good assistant. It’s like any other technology you can adopt.

      They said the same thing about Wikipedia/the internet in the early 2000s and really believed you should have to go to a library to get bona fide sources. I’m sure that attitude is long gone now, judging by literacy rates. You can check the AI’s sources just like a wiki article’s. Kids are going to need to understand the uses and drawbacks of this technology.

      • AItoothbrush@lemmy.zip
        1 month ago

        The problem is, you can’t check the AI’s sources in most cases. I’d also say blindly trusting Wikipedia and the internet is a huge problem nowadays. Wikipedia only has a few dozen known instances of mass manipulation of facts, but Twitter, TikTok, etc. are a huge breeding ground for misinformation. So no, you shouldn’t blindly rely on Wikipedia/the internet, the same way you shouldn’t rely on AI. Also, if every internet search kills one turtle, then every question asked to an AI is like killing a thousand…

        • TronBronson@lemmy.world
          1 month ago

          Usually it will pull up sources when asked: “ChatGPT, you said ‘x…’, could you please provide a source for that information?”

          • Swedneck@discuss.tchncs.de
            2 days ago

            maybe chatgpt magically makes that actually work, but when i’ve tried it they all just cite a nonsense source that looks valid but either isn’t relevant or doesn’t exist at all.