Tea to Artificial Intelligence @lemmy.sdf.org · English · 1 day ago
Hallucinations in code are the least dangerous form of LLM mistakes. (simonwillison.net)