LLMs are fucking stupid. They regularly ignore directions and restrictions, hallucinate fake information, and spread misinformation because of unreliable training data (like hoovering up everything on the internet en masse).
I mean, how is that meaningfully different from average human intelligence?
Average human intelligence is not bound by strict machine logic quantifying language into mathematical algorithms, and is also sapient on top of sentient.
Machine learning LLMs are neither sentient nor sapient.
Those are distinct points from the one I made, which was about the characteristics listed. Sentience and sapience do not preclude a propensity to those same behaviors.
How do you know that we are not bound by strict logic?