return2ozma@lemmy.world to Technology@lemmy.world · English · 1 month ago
Huge Study of Chats Between Delusional Users and AI Finds Alarming Patterns (futurism.com)
cross-posted to: [email protected]
ageedizzle@piefed.ca · edited 10 days ago
deleted by creator
affenlehrer@feddit.org · 1 month ago
Also, the LLM is just predicting the next token, not selecting it. And it's not limited to the assistant role: if you (mis)configure the inference engine accordingly, it will happily predict user tokens or any other tokens (tool calls, etc.).
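A minimal sketch of what the comment above means: to a completion-style model, "roles" are just special text that the chat template wraps around each message. The `render_chatml` helper and the ChatML-style markers here are illustrative assumptions, not any specific engine's API — but they show how the trailing assistant header is the only thing steering the model toward predicting an assistant turn rather than a user turn or a tool call.

```python
# Hypothetical sketch: a chat "template" just flattens messages into one
# token stream. The model sees plain text, not roles.

def render_chatml(messages, add_generation_prompt=True):
    """Flatten chat messages into a ChatML-style prompt string."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
             for m in messages]
    if add_generation_prompt:
        # This trailing header is what nudges the model to continue
        # as the assistant. Without it, predicting "<|im_start|>user"
        # next is just as valid a continuation.
        parts.append("<|im_start|>assistant\n")
    return "".join(parts)

msgs = [{"role": "user", "content": "Hi!"}]
print(render_chatml(msgs, add_generation_prompt=True))
print(render_chatml(msgs, add_generation_prompt=False))
```

If an inference engine omits (or mangles) that final header, the model happily continues the transcript with whichever role the token statistics favor, which is the misconfiguration the comment describes.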