• daniskarma@lemmy.dbzer0.com · 2 hours ago

      Could you be a little more constructive and point out which parts are wrong or useless?

      Thank you.

  • ben_dover@lemmy.world · 2 hours ago

        I think the point is that the answer is not reliable. It might be completely correct, borderline wrong, or anything in between, and there’s no way to tell without verifying everything it says - at which point you could have just looked it up yourself in the first place.

    • person420@lemmynsfw.com · 2 hours ago

          If you’re taking AI answers verbatim without looking anything up and verifying the results, then that’s on you.

          When you Google something, do you take the first result and just assume it’s fact? You shouldn’t for AI either.

      • Grazed@lemmy.world · 7 minutes ago

            The main problem I see is that Google just shouldn’t include AI results, and they definitely shouldn’t put their unreliable LLM front and center on the results page. When you google something, you want accurate information, which the LLM might have - but only if that data was readily available to begin with. So the stuff it can help with is the stuff the search results would surface first anyway.

            For anything requiring critical thought or research, the LLM will often hallucinate or misrepresent its sources. The danger is that people don’t always apply critical thinking. Defaulting to showing an LLM response is extremely dangerous, and it’s basically pointless.

        • gamermanh@lemmy.dbzer0.com · 9 minutes ago

              Honestly? It’s a great place to start, especially with every search engine being worse than anything pre-2018.

              I used to have to post my error codes to a forum if googling them didn’t immediately get me anywhere, and pray that someone would someday reply with something actually useful.

              Now I can ask ChatGPT to point me at something and go from there. If it assumes wrongly about anything, I can correct it rather simply. It’s really good at turning documentation written by somebody who hasn’t spoken to another human in 15 years into something my stupid ass can better understand, too.

              AI is a powerful-as-shit tool; people who slag it off as having no utility are about as ignorant as the people saying it’s the second coming of Jesus himself.

              I’m personally planning to host a local model to avoid supporting commercial shit, but that’s a project for further down the line.

  • MrMcGasion@lemmy.world · 1 hour ago

        While I’m not the person you replied to and don’t know what their argument would be, I’ll take a shot at giving my own answer. In many cases when someone posts an example of AI giving unhelpful or bad information, somebody else runs off to their favorite LLM, gets a better result, and treats the original post as user error for picking the wrong LLM or not wording the prompt properly. But in other examples, that same favorite LLM is the one hallucinating or mixing unrelated concepts, and other people are in the comments promoting whichever LLM happened to give them a good reply that time.

        None of the LLMs are consistently trustworthy enough to be relied on alone, and you won’t know which answer to trust unless you ask several LLMs and then go research the topic on your own anyway to figure out which one is most correct. It’s a valid point that ChatGPT got the answer more right than Gemini this time, but that’s somewhat useless to know, because other times ChatGPT is the one hallucinating wildly and Gemini has the right answer - and since they’ve all been wrong before, who do you trust?

        LLMs are like an arrogant person who thinks they know everything: rather than admitting what they don’t know, they’ll pull an answer out of their butt. It might sound logical, but it isn’t grounded in reality and may still be wildly wrong. If you already mostly know the answer, asking the arrogant person can work, because you know enough to tell whether they’re speaking from actual knowledge or making something up. But if you don’t already have knowledge of the topic, you won’t know whether the arrogant person is giving you useful information or not.