Researchers create 30 fake student accounts to submit model-generated responses to real exams. Professors grade the 200- or 1,500-word responses from the AI undergrads and give them better grades than real students 84% of the time. 6% of the bot respondents do get caught, though… for being too good. Meanwhile, AI detection tools? Total bunk.

Will AI be the new calculator… or the death of us all (obviously the only alternative).

Note: the software was NOT as good on the advanced exams, even though it handled the easier stuff.

  • @[email protected]
    75 points · 3 months ago

    Not at all surprising. ChatGPT ‘knows’ a course’s content insofar as it’s memorized the textbook and all the exam questions. Once you start asking it questions it’s never seen before (more likely for advanced topics, which don’t have a billion study guides and tutorials), it falls short, even on basic questions that’d just require a bit of additional logic.

    Mind you, memorizing everything is impressive and can get you a degree, but when tasked with a new problem it has never seen before, ChatGPT is completely inadequate.

    • @[email protected]
      26 points · 3 months ago

      Right? Can students use the internet on this test? Because the LLMs have the entire internet to search for the answers, and I guarantee you those textbooks and exam questions are online and searchable.

      • vortic
        17 points · 3 months ago

        I wonder how undergrads would do on the same exams given unlimited time and internet access but with LLMs blocked. That’s essentially what the LLMs have.

    • @[email protected]
      19 points · 3 months ago

      Memorizing everything is impressive for a human.

      It’s less impressive for a computer.

    • @[email protected]
      6 points · 3 months ago

      This is incorrect, as was shown last year by the Skill-Mix research:

      Furthermore, simple probability calculations indicate that GPT-4’s reasonable performance on k=5 is suggestive of going beyond “stochastic parrot” behavior (Bender et al., 2021), i.e., it combines skills in ways that it had not seen during training.
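The “simple probability calculations” being referenced are combinatorial: the number of distinct k-skill combinations explodes so quickly that most of them cannot have appeared verbatim in any training corpus. A toy sketch of that counting argument (the skill-catalogue size and corpus bound here are illustrative assumptions, not the paper’s exact figures):

```python
from math import comb

# Illustrative numbers, not Skill-Mix's exact figures.
n_skills = 100         # assumed size of the skill catalogue
k = 5                  # skills combined per query, as in the k=5 result
corpus_texts = 10**10  # generous assumed bound on distinct training texts

print(comb(n_skills, k))  # 75287520 distinct 5-skill combinations

# Scale the catalogue up and the combinations dwarf any plausible corpus,
# so reasonable performance on random combinations suggests the model is
# composing skills rather than retrieving memorized examples:
print(comb(1000, k) > corpus_texts)  # True
```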

  • @lowleveldata
    33 points · 3 months ago

    I don’t care. Maid robot when

  • @[email protected]
    23 points · 3 months ago

    I take it that this was social sciences, because based on what I have seen so far I don’t think it can even outperform a college kid in maths.

  • @[email protected]
    17 points · 3 months ago

    All this moral panic is garbage.

    Easily solved by using essays with an unseen question written in exam conditions as assessment instruments.

    Literally a pencil and paper solves this problem.

    • AwesomeLowlander
      8 points · 3 months ago

      A lot of students do not perform well under exam conditions due to stress and pressure. Also, unless you’re entirely eliminating coursework, it doesn’t remove the issue.

      • @[email protected]
        -2 points · 3 months ago

        No assessment method is perfectly suited to every student.

        Coursework can be similarly adapted.

          • @[email protected]
            -7 points · 3 months ago

            It’s not my job to educate you on how the education industry works. Go and read what qualified people have already written about it in academic journals.

  • AutoTL;DR (bot)
    7 points · 3 months ago

    This is the best summary I could come up with:

    Since the rise of large language models like ChatGPT, there have been lots of anecdotal reports about students submitting AI-generated work as their exam assignments and getting good grades.

    His team created over 30 fake psychology student accounts and used them to submit ChatGPT-4-produced answers to examination questions.

    The anecdotal reports were true—the AI use went largely undetected, and, on average, ChatGPT scored better than human students.

    Scarfe’s team submitted AI-generated work in five undergraduate modules, covering classes needed during all three years of study for a bachelor’s degree in psychology.

    Shorter submissions were prepared simply by copy-pasting the examination questions into ChatGPT-4 along with a prompt to keep the answer under 160 words.
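That copy-paste workflow is essentially a one-line prompt template. A minimal sketch, with wording of my own (the study’s actual prompt text isn’t given in the summary):

```python
def exam_prompt(question: str, word_limit: int = 160) -> str:
    """Wrap an exam question with a word-limit instruction
    (phrasing is assumed, not the study's actual prompt)."""
    return (
        f"Answer the following exam question in under {word_limit} words.\n\n"
        f"{question}"
    )

print(exam_prompt("Describe two limitations of classical conditioning."))
```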

    Turnitin’s system, on the other hand, was advertised as detecting 97 percent of ChatGPT- and GPT-3-authored writing in lab settings, with only one false positive per hundred attempts.
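Taken at face value, those advertised lab rates predict very few misses; the finding that AI work went largely undetected suggests they didn’t transfer to real grading. A back-of-envelope check of what the advertised numbers imply (the 50/50 essay mix is my assumption):

```python
# Advertised Turnitin lab figures quoted above:
detection_rate = 0.97       # fraction of AI-written essays flagged
false_positive_rate = 0.01  # fraction of human essays wrongly flagged

ai_essays = human_essays = 100  # assumed 50/50 mix, for illustration

flagged_ai = detection_rate * ai_essays             # 97 expected detections
flagged_human = false_positive_rate * human_essays  # 1 expected false alarm
missed_ai = ai_essays - flagged_ai                  # only 3 expected misses

print(missed_ai)                                  # 3.0
print(flagged_ai / (flagged_ai + flagged_human))  # ~0.99 precision of a flag
```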


    The original article contains 519 words, the summary contains 144 words. Saved 72%. I’m a bot and I’m open source!

  • @[email protected]
    4 points · 3 months ago

    falls short later

    So far… Next model will be even better, and it won’t stop getting better.