• theneverfox@pawb.social
    8 months ago

This isn’t a new thing; people have gone off alone on this kind of nonsensical journey for a long time

    The time cube guy comes to mind

There’s also TempleOS, written in HolyC; its creator was close to some of the stuff in the article

And these are just two people functional and loud enough to be heard. This is a thing that happens; maybe LLMs exacerbate a pre-existing condition, but people were going off the deep end like this long before LLMs came into the picture

      • theneverfox@pawb.social
        8 months ago

        I agree, it’s certainly not going to help people who are losing touch. But that’s not what worries me - that’s a small slice of the population, and models are beginning to get better at rejection/assertion

        What I’m more worried about is the people who are using it almost codependently to make decisions. It’s always there, and it’ll always give you advice. Usually it’s somewhat decent advice, even. And it’s normal to talk through decisions with someone

        The problem is people are offloading their thinking to AI. It’s always there, it’s always patient with you… You can literally have it make every life decision for you.

        It’s not emotional connection or malicious AI I worry about… You can now walk around with a magic eight ball that can guide you through life reasonably well, and people are starting to trust it above their own judgment