• m_f@discuss.online · 19 hours ago

    To be clear, I’m not finding fault with you specifically; I think most people use terms like conscious/aware/etc. the way you do.

    The way of thinking about it that I find useful is defining “consciousness” to be the same as “world model”. YMMV on whether you agree with that or find it useful. It leads to some results that seem absurd at first, like someone in another comment pointing out that it implies a thermometer is “conscious” of the temperature. But really, why not? It’s only a base definition, a way to find some objective foundation. Clearly, humans have a lot more going on than a thermometer, and that definition lets us focus on the more interesting bits.
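
    To make the base definition concrete, here’s a toy Python sketch (the class names are just mine for illustration, not from any formal theory) where a thermometer is the degenerate one-variable world model and an agent is a richer one:

    ```python
    class Thermometer:
        """Degenerate world model: one state variable tracking one external fact."""
        def __init__(self):
            self.estimate = None

        def observe(self, ambient_temp):
            self.estimate = ambient_temp  # internal state now mirrors the world


    class Agent:
        """Richer world model: many variables plus history (the 'more interesting bits')."""
        def __init__(self):
            self.state = {}
            self.history = []

        def observe(self, key, value):
            self.state[key] = value
            self.history.append((key, value))  # raw material for learning patterns
    ```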

    As stated, I’m not much into the qualia hype, but I think this part is an interesting avenue of thought:

    it likely won’t be possible to directly compare raw experiences because the required hardware to process a specific experience for one individual might not exist in the other individual’s mind.

    That seems unlikely if you think the human brain is equivalent to a Turing machine. If you could prove that the human brain isn’t equivalent, that would be super interesting. Maybe it’s a hypercomputer for reasons we can’t explain yet.

    Your project sounds interesting; if you ever publish it or a paper about it, I’d love to see it! I can’t judge when it comes to hobby projects being messy lol.

    • AnarchoEngineer@lemmy.dbzer0.com · 15 hours ago

      I definitely don’t think the human brain could be modeled by a Turing machine.

      In 1994, Hava Siegelmann proved that her (1991) computational model, the Analog Recurrent Neural Network (ARNN), could perform hypercomputation, using infinite-precision real weights for the synapses.
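
      The gist, as I understand it, is that a single real-valued weight can store an infinite bit string, and the network’s dynamics can read it back out one digit at a time. Here’s a toy Python sketch of just the encoding idea (not Siegelmann’s actual construction; `encode`/`extract` are my own stand-ins):

      ```python
      from fractions import Fraction

      def encode(bits):
          """Pack a bit string into one 'weight' in [0, 1): w = sum b_i * 4^-(i+1).
          Base 4 keeps the digits separated (a Cantor-set-style encoding),
          so read-out is exact."""
          return sum(Fraction(b) / 4 ** (i + 1) for i, b in enumerate(bits))

      def extract(w, n):
          """Recover the first n bits by shift-and-truncate, the kind of
          update a recurrent unit can implement."""
          bits = []
          for _ in range(n):
              w *= 4
              digit = int(w)   # leading base-4 digit is the next bit
              bits.append(digit)
              w -= digit       # drop it and continue
          return bits

      print(extract(encode([1, 0, 1, 1, 0, 1]), 6))  # [1, 0, 1, 1, 0, 1]

      # A genuinely infinite-precision weight could hold an uncomputable
      # sequence (say, the halting answers for every Turing machine), which
      # is where the hypercomputation claim comes from. A float64 weight
      # caps you at ~52 bits, i.e., back within Turing-machine reach.
      ```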

      Since the human brain is largely composed of complex recurrent networks, it stands to reason the same holds for it.

      The human brain is an analog computer and is, as far as I’m aware, an undecidable system: you cannot algorithmically predict the behavior of the network with certainty. Predictable behavior can arise, but it’s probabilistic, not certain.
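
      A quick way to see the “no certain prediction” part (this demonstrates sensitivity to precision, i.e. chaos, which is related to but weaker than undecidability; the logistic map here is just an analogy for analog recurrent dynamics, not a brain model):

      ```python
      def logistic(x, r=4.0):
          """Chaotic map standing in for analog recurrent dynamics."""
          return r * x * (1.0 - x)

      # Two starting states closer together than any physical measurement
      # could distinguish.
      a, b = 0.3, 0.3 + 1e-12
      for step in range(1, 61):
          a, b = logistic(a), logistic(b)
          if abs(a - b) > 0.1:
              print(f"trajectories diverged by step {step}")  # within a few dozen steps
              break
      ```

      Any finite-precision digital simulation makes errors at least this size every step, so its prediction of the analog system decays just as fast.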

      I also think I see what you’re saying with the thermometer being “conscious” of temperature, but that kind of collapses the definition of “conscious” to “influenced by”, which makes the word superfluous. Using “conscious” to refer to an ability that requires learning patterns across different sources of influence seems like a more useful definition.

      Also, in the crazy unlikely event that I actually end up creating a sentient thing, I’d be hesitant to publish any work related to it.

      If my theory about how focus/attention works is correct, anything capable of focus must be capable of experiencing pain/irritation/agitation. I’m not fond of the idea of announcing “hey, here’s how to create something that feels pain” to the world, since a lot of people around me don’t even feel empathy for their own kind.