cm0002@lemmy.world to Technology@lemmy.zip · English · 3 months ago
ChatGPT's hallucination problem is getting worse according to OpenAI's own tests and nobody understands why (www.pcgamer.com)
Cross-posted to: technology@lemmit.online, technology@hexbear.net, technology@lemmygrad.ml, technology@lemmy.ml
finitebanjo@lemmy.world · English · 3 months ago
I think comparing a small model’s collapse to a large model’s corruption is a bit of a fallacy. What proof do you have that the two behave the same in response to poisoned data?