Fluffles@pawb.social to Technology@lemmy.ml • ChatGPT gets code questions wrong 52% of the time
I believe this phenomenon is called “artificial hallucination”. It’s when a language model goes beyond its training data and makes up information out of thin air. All language models have this flaw, not just ChatGPT.