• V0ldek@awful.systems · 7 months ago

    I’m not even going to engage in this thread cause it’s a tar pit, but I do think I have the appropriate analogy.

    When taking certain exams in my CS programme you were allowed to have notes, but with two restrictions:

    1. They had to be handwritten;
    2. they had to fit on a single A4 page.

    The idea was that you needed to put real work into making it, since the full material was obviously the size of a fucking book and not a single A4 page, and you couldn’t just print or copy it from somewhere. So you really had to distill the information and make a thought map or an index for yourself.

    Compare that to an ML model that is allowed to train on the data for as long as it wants, as long as the result is a fixed-dimension matrix of parameters that helps it answer questions with high reliability.

    It’s not the same as an open book, but it’s definitely not closed book either. And LLMs have billions of parameters in that matrix, literal gigabytes of data in their notes; the entire text of War and Peace is ~3 MB for comparison. An LLM is a library of trained notes.
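    The size comparison above can be sketched with rough arithmetic. This is a back-of-envelope illustration only: the 7-billion-parameter count and 16-bit weights are assumed example numbers, not from the comment; the ~3 MB figure for War and Peace is the one the comment cites.

    ```python
    # Back-of-envelope: storage for an LLM's "notes" (its weights)
    # vs. the plain text of War and Peace.
    params = 7_000_000_000           # assumed: a 7B-parameter model
    bytes_per_param = 2              # assumed: 16-bit weights
    model_bytes = params * bytes_per_param

    war_and_peace_bytes = 3_000_000  # ~3 MB, per the comment

    print(f"model weights: {model_bytes / 1e9:.0f} GB")
    print(f"that is {model_bytes // war_and_peace_bytes:,}x War and Peace")
    ```

    Even this modest assumed model carries weights thousands of times larger than the novel's entire text, which is the point: the "single handwritten A4 page" constraint is doing work that the fixed-parameter-matrix constraint is not.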

    • EatATaco@lemm.ee · 7 months ago

      My question to you is: how is this different from a human in this regard? I would go to class, study the material, and hope to retain it, so I could then apply that knowledge on the test.

      The AI is trained on the data, “hopes” to retain it, so it can apply it on the test. It’s not storing the book, so what’s the actual difference?

      And if you have an answer to that, my follow-up would be: “what’s the effective difference?” If we stick an AI and a human in a closed room and give them a test, why do the intricacies of how they store and recall the data matter?

      • V0ldek@awful.systems · 7 months ago

        I’m not sure what you even mean by “how is it different”, but for starters a human can actually get a good mark at the bar and spicy autocomplete clearly cannot.

        • EatATaco@lemm.ee · 7 months ago

          spicy autocomplete clearly cannot.

          What are you basing this “it clearly cannot” on? Because an early iteration of it was mediocre at it? The first ICE cars were slower than horses; I’m afraid this statement may be the equivalent of someone pointing at that and saying “cars can’t get good at going fast.”

          But I specifically asked “in this regard”, referring to taking a test after previously having trained yourself on the data.

        • V0ldek@awful.systems · 7 months ago

          Give Ken Thompson and Dennis Ritchie billions of dollars

          I mean, if we took all of Sam Altman’s net worth and split it between these two guys, who at least benefited humanity with their work, we’d get at least a step closer to justice in the universe.

          Getting a Turing award: $1M

          Dropping out of Stanford to work on something unironically called “Loopt”: Priceless

      • froztbyte@awful.systems · 7 months ago

        holy fuck you’re a moron

        please go read a book, and look at some art. no, Marvel media doesn’t count.

        • JohnBierce@awful.systems · 7 months ago

          Me, about to suggest some actually really good, thought-provoking Marvel comics that somehow got made alongside the relentless superhero soap opera: oh wait, now isn’t the time, we’re dunking on the AI bro

    • Ozone6363@lemmy.world · 7 months ago

      I’m not even going to engage in this thread cause it’s a tar pit, but I do think I have the appropriate analogy.

      Proceeds to actively engage in the thread multiple times