codeinabox to Programming · 2 months ago
Turn off Cursor, turn on your mind (allvpv.org)
cross-posted to: [email protected]
Michal · 2 months ago
You still need a software engineer to review the code. It’s naive to think that randomly generated code will work, and by “work” I mean not just do what it’s supposed to, but also handle edge cases and be secure.
onlinepersona · 2 months ago
If you think it’s random, you don’t understand LLMs.
Michal · 2 months ago
So, in your learned opinion, it’s deterministic?
thinkercharmercoderfarmer@slrpnk.net · 2 months ago
You sent me down a bit of a rabbit hole, but it turned up an interesting answer. Turns out they are nondeterministic, and why they aren’t deterministic is still an open question: https://thinkingmachines.ai/blog/defeating-nondeterminism-in-llm-inference/
Michal · 2 months ago
Interesting, I had assumed that turning the temperature down to 0, or hardcoding a seed, would make LLM inference deterministic. Especially after watching this video: https://youtu.be/J9ZKxsPpRFk
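One commonly cited reason that temperature 0 and a fixed seed still aren’t enough (a rough sketch, not taken from the linked post): floating-point addition isn’t associative, so if the GPU kernels reduce partial sums in a different order from run to run, the logits can come out slightly different, and a single flipped argmax then sends greedy decoding down a different path. A minimal Python illustration of the underlying numerical effect:

```python
# Illustration only: floating-point addition is not associative, so the same
# numbers summed in a different order can give a (slightly) different result.
import random

random.seed(0)
# Values spanning many orders of magnitude, like the terms of a large dot product.
values = [random.uniform(-1.0, 1.0) * 10 ** random.randint(-8, 8) for _ in range(10_000)]

forward = sum(values)              # accumulate left to right
backward = sum(reversed(values))   # same numbers, opposite order

print(forward == backward)         # frequently False
print(abs(forward - backward))     # small, but nonzero, difference
```

Whether this is the whole story for LLM inference is exactly what the Thinking Machines post digs into; the point here is only that bitwise reproducibility is not automatic even with “deterministic” sampling settings.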
thinkercharmercoderfarmer@slrpnk.net · 2 months ago
I thought I had seen both as well.
thinkercharmercoderfarmer@slrpnk.net · 2 months ago
I skipped over the opening graphic on first read, but I’ve just read it. Could they have picked a creepier sample sentence?
onlinepersona · 2 months ago
And you are perfectly deterministic? Because if you aren’t, by your own dichotomous logic, you’re random too.
Michal · 2 months ago
So you say it’s not random, and now you do a 180 and say that randomness is a good thing? I should have known you are a troll.
onlinepersona · 2 months ago
🤣 Is English your second or third language? Your reading comprehension is pretty bad if you thought I was doing a 180.