• DarkenLM@kbin.social · 9 months ago

    It’s extremely hard to give a machine a sense of morality without manually implementing it on every node that constitutes its network. Current LLMs aren’t even aware of what they’re printing out, let alone able to understand the moral implications of it.

    The day a machine is truly aware of the morality of what it says, and actually understands it, is the day we truly have AI. Currently, we have gargantuan statistical models that people glorify into nigh-godhood.

    • VirtualOdour@sh.itjust.works · 9 months ago

      The idiots arguing against it for some reason expect it to be a god; reasonable people, and its creators, call it a tool with many uses - none of which is making moral judgments.

      It’s very much AI, and it’s absolutely world-changing to the same extent the internet, the computer, and electricity have been. It’s not sentient, of course; that’s a level probably above AGI. It’s important to remember that in biology we use “intelligence” to describe behaviors which don’t require complex moral judgements or self-awareness - spiders have intelligence, but do they have moral principles or complex reasoning?

      Hopefully people will develop an understanding of how to use these tools effectively and stop expecting them to be magic, but there’s still a large population who believe playing cards are magic, or that the stars’ positions from our reference point on a spinning earth dictate their lives and loves, so honestly I don’t hold out hope - it’ll be cool when online shopping search functions use it to show you what you’re actually looking for, tho.