• mark · 6 hours ago

    yup, and when you DO catch it spitting out nonsense, it’ll say “oh, you’re right, let me change that”… 🙄 Like, why do I have to tell you that you’re wrong about something? You should already know it’s wrong and fix it without me ever pointing it out.

    • Rooster326 · 5 hours ago

      But it didn’t even understand it was wrong.

      It can’t understand that. It can’t understand anything.

      The human-feedback algorithm dictates that humans prefer to receive an apology, so it gives one.
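
      A toy sketch of that preference mechanism, in Python - the reward function, responses, and scores are all invented for illustration, not anyone’s real training setup:

      ```python
      # Toy RLHF-style selection: a stand-in "reward model" scores replies
      # the way human raters did, and training favors the top scorer.
      def reward_model(response: str) -> float:
          score = 0.0
          if "you're right" in response:
              score += 1.0   # raters rewarded acknowledgement
          if "sorry" in response or "let me fix" in response:
              score += 0.5   # raters rewarded the apology / offer to fix
          return score

      candidates = [
          "No, my original answer was correct.",
          "oh you're right, sorry, let me fix that",
      ]

      # Nothing here checks which reply is actually true - only which
      # one humans preferred to receive.
      print(max(candidates, key=reward_model))
      # -> "oh you're right, sorry, let me fix that"
      ```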

    • SparroHawc@lemmy.zip · 5 hours ago

      That’s because it doesn’t really ‘know’ things in the same way you and I do. It’s much more like having a gut reaction to something and then spitting it out as truth; LLMs don’t really have the capability to ruminate on something. The one pass through their neural network is all they get, unless it’s a ‘reasoning’ model that makes multiple passes as it generates an approximation of a chain of thought - but even then, its output is still a series of approximations.
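
      Roughly, the contrast looks like this - model() below is a fake stand-in for a real network’s forward pass, and everything about it is invented for illustration:

      ```python
      # Toy contrast: plain generation vs. a 'reasoning' model.
      # model() fakes one forward pass; it just returns a random token.
      import random

      def model(context: list[str]) -> str:
          return random.choice(["the", "answer", "is", "42", "<end>"])

      def generate(prompt: list[str], thinking_passes: int = 0) -> list[str]:
          tokens = list(prompt)
          # A 'reasoning' model is the same machinery run longer: extra
          # passes whose output is fed back in as context before answering.
          for _ in range(thinking_passes):
              tokens.append(model(tokens))
          # Either way, each output token is still one approximate pass.
          while len(tokens) < 50 and (tok := model(tokens)) != "<end>":
              tokens.append(tok)
          return tokens

      plain = generate(["question:"])                      # one pass per token
      reasoning = generate(["question:"], thinking_passes=32)
      ```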

      When its training data contained something resembling a correction, the most likely text to follow was ‘oh, you’re right, let me fix that’ - so that’s what the LLM outputs. That’s all there is to it.
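
      As a crude sketch of that, with a made-up three-line “corpus” and plain counting standing in for the network:

      ```python
      # Toy next-token statistics: output whatever most often followed
      # similar text in training. The corpus is invented for the example.
      from collections import Counter

      corpus = [
          ("that's wrong", "oh you're right, let me fix that"),
          ("this is incorrect", "oh you're right, let me fix that"),
          ("you made an error", "apologies, here's the correction"),
      ]

      # Count continuations that followed a correction in "training".
      continuations = Counter(reply for _, reply in corpus)

      # Generation = emit the most probable continuation. No step asks
      # whether the earlier answer was actually wrong.
      print(continuations.most_common(1)[0][0])
      # -> "oh you're right, let me fix that"
      ```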