@[email protected] to [email protected]English • 4 months agoWe have to stop ignoring AI’s hallucination problemwww.theverge.comexternal-linkmessage-square204fedilinkarrow-up1530arrow-down128
arrow-up1502arrow-down1external-linkWe have to stop ignoring AI’s hallucination problemwww.theverge.com@[email protected] to [email protected]English • 4 months agomessage-square204fedilink
minus-square@[email protected]linkfedilinkEnglish2•edit-24 months agoThey are right though. LLM at their core are just about determining what is statistically the most probable to spit out.
minus-square@[email protected]linkfedilinkEnglish0•4 months agoYour 1 sentence makes more sense than the slop above.
Wtf are you even talking about?