• ABC123itsEASY@lemmy.world · 24 hours ago (edited)

    Your point has no bearing whatsoever on my statement. You could also misread a ruler, but that doesn’t mean there’s anything wrong with the ruler. Given that I can reliably read a ruler, I can ‘blindly trust’ it, assuming it’s a well-manufactured ruler. If you can’t, that’s definitively a you problem.

    • ifItWasUpToMe@lemmy.ca · 23 hours ago

      I mean, it kinda does. If all you do is type numbers into a calculator and copy the results, there’s a chance the result is wrong.

      That’s the same way some people use AI, and it’s just as wrong.

      • ABC123itsEASY@lemmy.world · 23 hours ago

        My point wasn’t that people don’t make mistakes; they obviously do. My point is that calculators are deterministic machines; to clarify, that means given the same input they will always produce the same output. LLMs are not, and do not. So no, it’s not the same thing.
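
        A minimal Python sketch of the difference (toy_llm here is a made-up stand-in, not any real model API):

        ```python
        import random

        def calculator_add(a, b):
            # Deterministic: the same inputs always produce the same output.
            return a + b

        def toy_llm(prompt):
            # Stochastic sketch: the "answer" is sampled from a distribution,
            # so repeated calls with the same prompt can differ.
            candidates = ["4", "4.0", "five"]  # hypothetical outputs
            return random.choices(candidates, weights=[0.6, 0.3, 0.1])[0]

        print({calculator_add(2, 2) for _ in range(5)})   # always {4}
        print({toy_llm("2 + 2 = ?") for _ in range(5)})   # often more than one answer
        ```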

        • ifItWasUpToMe@lemmy.ca · 23 hours ago

          I never said it was the same. I just said you have to be careful with the tools you use. That applies to every tool.

          • ABC123itsEASY@lemmy.world · 23 hours ago

            You are implying that one must verify the output of a calculator in the same way one must verify the output of an LLM, and I’m saying no, that’s strictly not true. If it were, then the only way you could use an LLM incorrectly would be to type your query incorrectly. With a calculator, that metaphor holds up. With an LLM, you could make no mistakes and still get incorrect output.

            • ifItWasUpToMe@lemmy.ca · 23 hours ago

              I’m implying that you should be careful when you use tools, and not blindly trust the output.