• FatCrab@lemmy.one
    21 hours ago

    Not that I agree they’re conscious, but this is an incorrect and overly simplistic definition of an LLM. They are probabilistic in nature, yes, and they work on tokens, or fragments, of words. But it’s about as much of an oversimplification to say humans are just Markov chains that make plausible sentences that can come after [the context] as it is to say modern GPTs are.
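
    For what it’s worth, the distinction the comment leans on can be made concrete: a k-th order Markov chain conditions only on the last k tokens of fixed-size state, while a transformer conditions every next-token distribution on the entire preceding context window. A toy bigram sketch (corpus and function names are made up for illustration):

    ```python
    import random
    from collections import defaultdict

    # Toy bigram Markov chain: the next word depends ONLY on the current word,
    # a single-token state, unlike a GPT attending over the whole context.
    corpus = "the cat sat on the mat and the cat ran".split()
    transitions = defaultdict(list)
    for cur, nxt in zip(corpus, corpus[1:]):
        transitions[cur].append(nxt)

    def markov_next(word):
        """Sample a successor word given just one word of context."""
        return random.choice(transitions[word])

    print(markov_next("the"))  # one of: "cat", "mat"
    ```

    A GPT’s sampling step looks superficially similar (pick a token from a distribution), but that distribution is computed from attention over thousands of prior tokens, not a fixed-order lookup table, which is why the “just a Markov chain” framing undersells what is going on.
    
    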