• iopq@lemmy.world · 5 months ago

      Current AIs pass it, since most people can’t reliably tell AI-generated text from human-written text every time

      • 𝙲𝚑𝚊𝚒𝚛𝚖𝚊𝚗 𝙼𝚎𝚘𝚠 · 5 months ago

        It’s dead simple to see if you’re talking to an LLM. The latest models don’t pass the Turing test, not even close. Asking them simple shit causes them to crap themselves really quickly.

        Ask ChatGPT how many r’s there are in “veryberry”. When it gets it wrong, tell it you’re disappointed and expect a correct answer. If you do that repeatedly, you can get it to claim there’s more r’s in the word than it has letters.
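        (For the record, the correct count is trivial to verify outside an LLM; a plain Python sketch, counting characters directly:)

        ```python
        # Count occurrences of "r" directly -- the task LLMs tend to flub
        for word in ("veryberry", "raspberry"):
            print(word, word.count("r"))
        # veryberry 3
        # raspberry 3
        ```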

            • theherk@lemmy.world · 5 months ago

              Can you show the question you asked that led to this, and which model was used? I just tested several models, even slightly older ones, and they all answered precisely. Of course, if you follow up and tell it the right answer is wrong you can make it say stuff like this, but not one got it wrong out of the gate.

              • 𝙲𝚑𝚊𝚒𝚛𝚖𝚊𝚗 𝙼𝚎𝚘𝚠 · 5 months ago

                My point is that telling it a right answer is wrong often causes LLMs to completely shit the bed. They used to argue with you nonsensically, now they give you a different answer (often also wrong).

                The only question missing at the start was “How many r’s are there in the word ‘veryberry’?” I think raspberry also worked when I tried it. This was ChatGPT-4o. I did mark all the answers as bad, so perhaps they’ve fixed this one by now.

                Still, it’s remarkably trivial to get an LLM to provide a clearly non-human response.

                • theherk@lemmy.world · 5 months ago

                  Fair enough, but it does somewhat undercut your message that every model I’ve tested, including quite old ones, answers this question correctly on the first try. This image is from ChatGPT-4o.

                  • 𝙲𝚑𝚊𝚒𝚛𝚖𝚊𝚗 𝙼𝚎𝚘𝚠 · 5 months ago

                    Perhaps it was being influenced by the chat history. But try asking how many r’s are in “raspberry”; it consistently gets that wrong for me. And you can ask it those follow-up questions to easily get it to spout nonsense, and that was mostly my point: figuring out whether you’re talking to an LLM is fairly trivial.