ChatGPT Out-scores Medical Students on Complex Clinical Care Exam Questions
A new study shows AI’s capabilities at analyzing medical text and offering diagnoses — and forces a rethink of medical education.

  • morsebipbip@lemm.ee · 19 points · 1 year ago

    That’s interesting, but never forget the difference between exams and real life is huge. Exam cases are always somewhat typical clinical presentations, with every small element pointing towards the general picture.

    In real life, there are almost always discrepancies, elements that don’t make sense at all for the given case, and the whole point of residency experience is learning what to make of those contradictory elements: when to question nonsensical lab values, what to do when a situation doesn’t fit any category of problems you learned to solve.

    These are things I don’t think generative AI, which by nature predicts whichever word is most likely to come next based on its training data, would be able to do.

    • jsveiga@sh.itjust.works · 9 points · 1 year ago

      How would the students fare if, during the test, they had access to all the information available on the internet that was used to train the AI?

      • Tilted · 5 points · 1 year ago

        We probably should be training the students to use the AI as a tool

        • RaincoatsGeorge@lemmy.zip · 2 up / 2 down · 1 year ago

          I’ve used ChatGPT a bit to see what it spits out in terms of medical education. I don’t trust it to be completely accurate, but for the things I’m able to verify, it does surprisingly well. There are a number of databases of specifically verified content, current and reliable, that doctors already use. If you could isolate the AI to only that information, you could reduce the risk of it spitting out false information, and doctors could use it to spitball ideas or get help pulling protocols, guidelines, and whatnot.

          I definitely could see language-model AI like this being used to assist clinical providers in the future. I could also see it used to further automate patient monitoring, which we already do quite a bit but still struggle to master. Current AI models can identify high-risk patients hours before a human can, and they improve outcomes. This will only continue, but it will certainly not be replacing humans in this equation anytime soon.
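          To illustrate what “isolate the AI to only that information” could look like, here’s a rough sketch of retrieval-grounded prompting. The corpus, snippet text, and function names below are invented for the example; they’re not a real clinical database or any particular vendor’s API.

```python
# Rough sketch: ground the model's answer in a small vetted corpus.
# Everything here (topics, snippet text, names) is made up for illustration.
VETTED_SOURCES = {
    "sepsis": "Example vetted guideline text about early recognition.",
    "dka": "Example vetted guideline text about fluids and insulin.",
}

def retrieve(question: str) -> list[str]:
    """Naive keyword lookup; a real system would search a curated
    clinical database, not do substring matching."""
    q = question.lower()
    return [text for topic, text in VETTED_SOURCES.items() if topic in q]

def grounded_prompt(question: str) -> str:
    """Build a prompt telling the model to answer only from the
    retrieved snippets, or admit it cannot answer."""
    snippets = retrieve(question) or ["(no vetted source found)"]
    context = "\n".join(f"- {s}" for s in snippets)
    return (
        "Answer using ONLY these vetted sources. If they do not "
        f"cover the question, say so.\n{context}\n\nQuestion: {question}"
    )

# The composed prompt would then be sent to whatever language model you use.
print(grounded_prompt("How should DKA be managed initially?"))
```

          The point isn’t the toy retrieval, it’s the constraint: the model is only handed vetted text and is told to refuse when that text doesn’t cover the question.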

      • FuglyDuck@lemmy.world · 2 up / 4 down · 1 year ago

        For a human, that’s probably too much information to be useful. That’s why ChatGPT is so powerful: it can sort through all that cruft and find “relevant” information.

        Under the hood it’s an incredibly complicated statistical model; ultimately it responds to a prompt with whatever most commonly follows similar prompts in the data it was trained on.

        It fails at knowing whether the information is useful, or even correct, however. And it inherits biases both from the people who built and tuned it and from the data it was fed. Further, the narrow AIs we have today have no agency, no creativity or intuition. They fake all of these things to make us believe they’re ‘real’; that’s what they’re programmed to do.

    • Bilbo Baggins@hobbit.world · 3 up / 4 down · 1 year ago

      I hope you’re wrong. If there’s one job I want AI to do, it’s to improve health care.

      There are many excellent doctors, but also many very average doctors. And even the best doctors seem to be biased towards the most common illnesses.

      And I’ve read that many people with persistent pain, especially people of color, cannot get medication because doctors suspect everyone of being an abuser. But giving it out like candy isn’t great either.

      We need AI doctors.

      • morsebipbip@lemm.ee · 2 points · 1 year ago

        Don’t get me wrong, human doctors (humans in general, actually) have a lot of problems, and it would be great to have some kind of AI assistance for diagnosis or management. But I don’t think generative AI like ChatGPT is actual AI: it’s a probabilistic algorithm that spits out whichever word is most likely to follow the text so far, based on the material it was trained on. I don’t think we need a doctor like that.
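        To make the “most likely next word” point concrete, here is a toy next-word predictor built from bigram counts over a tiny invented corpus. Real systems like ChatGPT condition on the whole context with a large neural network, so this is only a cartoon of the mechanism, not how ChatGPT is actually implemented.

```python
# Toy illustration of "predict the most likely next word from training data".
from collections import Counter, defaultdict

corpus = (
    "the patient has a fever . the patient has a cough . "
    "the doctor orders a test ."
).split()

# Count which word follows which in the training text.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def next_word(word: str) -> str:
    """Return the word that most often followed `word` in the corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "."

# Generate a few words greedily from a seed word.
text = ["the"]
for _ in range(5):
    text.append(next_word(text[-1]))
print(" ".join(text))  # e.g. "the patient has a fever ."
```

        A model like this never checks whether what it produces is true; it only reproduces what tended to follow in its training text.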