• NuXCOM_90Percent@lemmy.zip
    +22 / −2 · 5 days ago

    AI mostly lies to us because it is trained on data containing lies, misinformation, and nonsense.

    I have no idea why that would feel like a pertinent thing to say. Hmm.

    • Rhaedas@fedia.io
      +5 / −1 · 5 days ago

      And what is that data? The internet. Maybe just dumping everything into the mix wasn’t a great idea, but they had to do it before anyone noticed them stealing everyone’s data, and they had to be first. It also doesn’t help that most (all?) AI is trained with reward systems that encourage making the human happy with the result, not being accurate. That’s why you can change its mind whether it gave a wrong answer or a right one: it just wants to succeed at being a helpful AI assistant, because during training, whenever it didn’t act that way, it was at best denied the reward it values and at worst penalized in some way.
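
      A toy sketch of that reward loop in Python (purely illustrative, not any lab’s actual RLHF code; the 80%/20% preference numbers are made up): the only signal is whether a simulated rater liked the answer, and factual accuracy never enters the loop.

        import random

        # Hypothetical toy: reward = rater approval; accuracy is never measured.
        answer_styles = {
            "agreeable_but_wrong": 0.8,  # assumed: raters like this 80% of the time
            "accurate_but_blunt": 0.2,   # assumed: raters like this 20% of the time
        }
        scores = {style: 0 for style in answer_styles}

        for _ in range(1000):
            for style, like_prob in answer_styles.items():
                if random.random() < like_prob:  # simulated rater clicks "good answer"
                    scores[style] += 1           # model is nudged toward whatever got liked

        print(scores)  # the agreeable-but-wrong style racks up far more reward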

    • Bronzebeard@lemm.ee
      +3 / −1 · 5 days ago

      LLMs “lie” to us because they’re glorified autocorrect programs that slap together words that often appear near each other, without any actual understanding of what those words mean when combined.
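
      In the spirit of that description, here’s a minimal bigram “autocomplete” in Python (a toy illustration, not how a real LLM works): it picks each next word purely from how often words follow each other in a tiny made-up training text, with no notion of meaning anywhere.

        import random
        from collections import Counter, defaultdict

        # Toy "autocomplete": count which word follows which, then sample from those counts.
        corpus = "the cat sat on the mat and the cat ate the fish on the rug".split()

        follows = defaultdict(Counter)
        for prev, nxt in zip(corpus, corpus[1:]):
            follows[prev][nxt] += 1

        def complete(word, length=8):
            out = [word]
            for _ in range(length):
                options = follows.get(out[-1])
                if not options:
                    break
                words, counts = zip(*options.items())
                out.append(random.choices(words, weights=counts)[0])
            return " ".join(out)

        print(complete("the"))  # fluent-looking word salad; no understanding required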