alphacyberranger@lemmy.world to Programmer Humor (lemmy.world) · 1 year ago
It's not wrong though
GBU_28@lemm.ee · 1 year ago
Again, you aren't seeing this because these models are being developed for private enterprise purposes. Regarding deep machine code analysis, sure, that's gonna take work, but the whole hallucination thing is an off-the-shelf, rookie problem these days.

RikudouSage@lemmings.world · 1 year ago
It's not, though. Hallucinations are inherent to the technology; it's not a matter of training. Good training can greatly reduce the likelihood, but cannot solve it.

GBU_28@lemm.ee · 1 year ago
Training doesn't solve hallucination. I didn't say that.