• ilmagico@lemmy.world · 4 months ago

      Non-AI options can also have “hallucinations,” i.e. false positives and false negatives, so if the AI one has a lower false positive/false negative rate, I’m all for it.
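
      A minimal sketch of the comparison being suggested, assuming you can count each detector’s errors on labeled data (all counts below are hypothetical, purely for illustration):

      ```python
      # Sketch: comparing two detectors by false positive / false negative
      # rate. All confusion counts are hypothetical, for illustration only.

      def error_rates(fp: int, fn: int, tn: int, tp: int) -> tuple[float, float]:
          """Return (false positive rate, false negative rate)."""
          fpr = fp / (fp + tn)  # fraction of actual negatives wrongly flagged
          fnr = fn / (fn + tp)  # fraction of actual positives missed
          return fpr, fnr

      # Hypothetical counts for a non-AI (rule-based) detector vs. an AI one.
      rule_fpr, rule_fnr = error_rates(fp=40, fn=25, tn=960, tp=75)
      ai_fpr, ai_fnr = error_rates(fp=15, fn=10, tn=985, tp=90)

      print(f"rule-based: FPR={rule_fpr:.1%}  FNR={rule_fnr:.1%}")
      print(f"AI-based:   FPR={ai_fpr:.1%}  FNR={ai_fnr:.1%}")
      ```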

      • xmunk@sh.itjust.works · 4 months ago

        Non-AI options are comprehensible, so we can understand why they’re failing. When it comes to AI systems, we usually can’t reason about why or when they’ll fail.

        • unreliable@discuss.tchncs.de · 4 months ago

          But people are getting dumb and would prefer a magic box they don’t understand to a method where they can know when it is wrong.

    • mearce · 4 months ago

      Are hallucinations a problem outside of LLMs?