• arymandias@feddit.de · 1 year ago

      With large language models it will basically be a technocracy of prompt hackers, who are at least human and thus have a stake in humanity.

      • jarfil@lemmy.world · 1 year ago

        LLMs spit out language that can be used as a prompt… no need for the middleman.

  • JohnDClay@sh.itjust.works · 1 year ago

    The whole point of Asimov’s laws of robotics was that things can go wrong even if a system adheres to them perfectly. And current AI attempts don’t even have that.

  • Blapoo@lemmy.ml · 1 year ago

    I honestly wonder whether an LLM, trained once a month on every human on Earth’s opinions about the world and what should be done to fix it, would show a “normalized trend” in that regard.

    LLMBOT 9000 2024!

    • FMT99@lemmy.world · 1 year ago

      Haven’t read ol’ Bob since the 2000s. Gotta say his stuff didn’t age as poorly as most others from that era.