• @[email protected]
      link
      fedilink
      English
      517 days ago

      Which makes the gap between the AIs and humans smaller, likely increasing the significance of the result.

      • @[email protected]
        link
        fedilink
        English
        217 days ago

        Aye, I’d wager Claude would be closer to 58–60. And with the model-probing research Anthropic has been publishing, we could get to ~63% on average in the next couple of years. Those last few percent will be difficult for an indeterminate amount of time, I imagine. But who knows; we’ve already blown past a ton of “limitations” that I thought I might not live long enough to see.

        • @[email protected]
          link
          fedilink
          English
          216 days ago

          The problem with that is that you can change the percentage of people who correctly identify other humans as humans simply by changing the way you set up the test. If you tell people they will, for certain, be talking to x amount of bots, they will make their answers conform to that expectation, and the correctness of their answers drops to 50%. Humans are really bad at determining whether a chat is with a human or a bot, and AI is no better. These kinds of tests mean nothing.
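          A quick Monte Carlo sketch of that point (hypothetical numbers, not from any actual study): a judge with zero ability to tell bots from humans, who has been told exactly half the chats are bots, still scores about 50% just by handing out labels in the expected proportion.

```python
import random

def simulate(n_chats=100, n_trials=2000):
    # Hypothetical setup: half the chats are actually bots, half humans.
    truth = [True] * (n_chats // 2) + [False] * (n_chats // 2)
    correct = 0
    for _ in range(n_trials):
        # A judge with no real discriminative ability who was told
        # "exactly half are bots" just labels a random half as bots.
        guesses = truth[:]          # same label counts as the truth...
        random.shuffle(guesses)     # ...but assigned at random
        correct += sum(g == t for g, t in zip(guesses, truth))
    return correct / (n_trials * n_chats)

print(simulate())  # hovers around 0.5
```

          So a ~50% "human detection" rate on its own says nothing about judges' skill; it can be produced entirely by the expectations the test setup gives them.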

          • @[email protected]
            link
            fedilink
            English
            116 days ago

            Humans are really bad at determining whether a chat is with a human or a bot

            Eliza is not indistinguishable from a human at 22%.

            Passing the Turing test remained largely out of reach for 70 years precisely because humans are pretty good at spotting counterfeit humans.

            This is a monumental achievement.

            • @[email protected]
              link
              fedilink
              English
              0
              edit-2
              16 days ago

              First, that is not how that statistic works; you are reading it entirely wrong.

              Second, this test is intentionally designed to be misleading. Comparing ChatGPT to Eliza is the equivalent of claiming the Chevy Bolt is the fastest car ever to enter a highway by comparing it to a 1908 Ford Model T: it ignores a huge history of technological development in between. There were chatbots just as successful before ChatGPT; they simply weren’t LLMs, and they were evaluated with other methods and systematic trials. The Turing test is not actually a scientific test of anything, so it isn’t standardized in any way. Anyone is free to claim they ran a “Turing test” whenever and however they like, with little to no control. It is meaningless and proves nothing.