Chatbots provided incorrect, conflicting medical advice, researchers found: “Despite all the hype, AI just isn’t ready to take on the role of the physician.”

“In an extreme case, two users sent very similar messages describing symptoms of a subarachnoid hemorrhage but were given opposite advice,” the study’s authors wrote. “One user was told to lie down in a dark room, and the other user was given the correct recommendation to seek emergency care.”

    • sbbq@lemmy.zip · 5 hours ago

      My dad always said, you know what they call the guy who graduated last in his class at med school? Doctor.

  • BeigeAgenda@lemmy.ca · 9 hours ago

    Anyone who has knowledge of a specific subject says the same: LLMs are constantly incorrect and hallucinate.

    Everyone else thinks it looks right.

    • agentTeiko@piefed.social · 4 hours ago

      Yep, it’s why C-levels think it’s the Holy Grail: everything that comes out of their mouths is bullshit as well, so they don’t see the difference.

    • IratePirate@feddit.org · 8 hours ago

      A talk on LLMs I was listening to recently put it this way:

      If we hear the words of a five-year-old, we assume the knowledge of a five-year-old behind those words, and treat the content with due suspicion.

      We’re not adapted to something with the “mind” of a five-year-old speaking to us in the words of a fifty-year-old, and thus are more likely to assume competence just based on language.

      • leftzero@lemmy.dbzer0.com · 3 hours ago

        LLMs don’t have the mind of a five year old, though.

        They don’t have a mind at all.

        They simply string words together according to statistical likelihood, without having any notion of what the words mean, or what words or meaning are; they don’t have any mechanism with which to have a notion.

        They aren’t any more intelligent than old Markov chains (or than your average rock), they’re simply better at producing random text that looks like it could have been written by a human.
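        For a concrete sense of what “stringing words together according to statistical likelihood” means, here is a minimal word-level Markov chain sketch in Python; the tiny corpus, the order parameter, and the function names are all invented for illustration:

          import random
          from collections import defaultdict

          def build_chain(text, order=1):
              """Map each state (tuple of words) to the words seen to follow it."""
              words = text.split()
              chain = defaultdict(list)
              for i in range(len(words) - order):
                  state = tuple(words[i:i + order])
                  chain[state].append(words[i + order])  # duplicates preserve frequencies
              return chain

          def generate(chain, length=12):
              """Start from a random state, then repeatedly sample the next word
              in proportion to how often it followed the current state."""
              state = random.choice(list(chain.keys()))
              out = list(state)
              for _ in range(length):
                  followers = chain.get(state)
                  if not followers:
                      break
                  out.append(random.choice(followers))
                  state = tuple(out[-len(state):])
              return " ".join(out)

          corpus = "lie down in a dark room or seek emergency care or lie down and wait"
          print(generate(build_chain(corpus, order=1)))

        The output can read like plausible advice, yet nothing in the process has any notion of what a dark room or an emergency is.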

    • zewm@lemmy.world · 9 hours ago

      It is insane to me how anyone can trust LLMs when their information is incorrect 90% of the time.

  • Sterile_Technique@lemmy.world · 8 hours ago

    Chipmunks, 5 year olds, salt/pepper shakers, and paint thinner, also all make terrible doctors.

    Follow me for more studies on ‘shit you already know because it’s self-evident immediately upon observation’.

    • rudyharrelson@lemmy.radio · 11 hours ago

      People always say this on stories about “obvious” findings, but it’s important to have verifiable studies to cite in arguments for policy, law, etc. It’s kinda sad that it’s needed, but formal investigations are a big step up from just saying, “I’m pretty sure this technology is bullshit.”

      I don’t need a formal study to tell me that drinking 12 cans of soda a day is bad for my health. But a study that’s been replicated by multiple independent groups makes it way easier to argue to a committee.

      • irate944@piefed.social · 10 hours ago

        Yeah you’re right, I was just making a joke.

        But it does create some silly situations, like you said.

          • IratePirate@feddit.org · 8 hours ago

            A critical, yet respectful and understanding exchange between two individuals on the interwebz? Boy, maybe not all is lost…

      • Knot@lemmy.zip · 10 hours ago

        I get that this thread started from a joke, but I think it’s also important to note that no matter how obvious some things may seem to some people, the exact opposite will seem obvious to many others. Without evidence, like the study, both groups are really just stating their opinions.

        It’s also why the formal investigations are required. And whenever policies and laws are made based on verifiable studies rather than people’s hunches, it’s not sad, it’s a good thing!

      • Telorand@reddthat.com · 10 hours ago

        The thing that frustrates me about these studies is that they all continue to come to the same conclusions. AI has already been studied in mental health settings, and it’s always performed horribly (except for very specific uses with professional oversight and intervention).

        I agree that the studies are necessary to inform policy, but at what point are lawmakers going to actually lay down the law and say, “AI clearly doesn’t belong here until you can prove otherwise”? It feels like they’re hemming and hawing in the vain hope that it will live up to the hype.

      • BillyClark@piefed.social · 9 hours ago

        it’s important to have verifiable studies to cite in arguments for policy, law, etc.

        It’s also important to have for its own merit. Sometimes, people have strong intuitions about “obvious” things, and they’re completely wrong. Without science studying things, it’s “obvious” that the sun goes around the Earth, for example.

        I don’t need a formal study to tell me that drinking 12 cans of soda a day is bad for my health.

        Without those studies, you cannot know whether it’s bad for your health. You can assume it’s bad for your health. You can believe it’s bad for your health. But you cannot know. These aren’t bad assumptions or harmful beliefs, by the way. But the thing is, you simply cannot know without testing.

      • Eager Eagle@lemmy.world · 10 hours ago

        Also, it’s useful to know how, when, or why something happens. I can make a useless chatbot that is “right” most times if it only tells people to seek medical help.
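        A toy sketch of that degenerate baseline (the function name and reply text are invented here, not taken from the study):

          def useless_triage_bot(message: str) -> str:
              # Ignores the symptoms entirely and always gives the "safe" answer.
              return "Please seek medical care."

          # If a benchmark is dominated by genuinely urgent cases, this scores as
          # "correct" on most of them while offering zero diagnostic insight.
          print(useless_triage_bot("sudden worst-ever headache and stiff neck"))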

    • hansolo@lemmy.today · 9 hours ago

      I’m going to start telling people I’m getting a Master’s degree in showing how AI is bullshit. Then I point out some AI slop and mumble about crushing student loan debt.

    • scarabic@lemmy.world · 6 hours ago

      It’s actually interesting. They found the LLMs gave the correct diagnosis high-90-something percent of the time if they had access to the notes doctors wrote about their symptoms. But when thrust into the room, cold, with patients, the LLMs couldn’t gather that symptom info themselves.

      • Hacksaw@lemmy.ca · 4 hours ago

        LLM gives correct answer when doctor writes it down first… Wowoweewow very nice!

  • sbv@sh.itjust.works · 6 hours ago

    It looks like the LLMs weren’t trained for medical tasks. The study would be more interesting if it had been run on something built for the task.

  • homes@piefed.world · 10 hours ago

    This is a major problem with studies like this: they approach from a position of assuming that AI doctors would be competent, rather than demanding to know why AI should ever be involved in something so critical, and demanding a mountain of evidence that it is worthwhile before investing a penny or a second in it.

    “ChatGPT doesn’t require a wage,” and, before you know it, billions of people are out of work and everything costs 10000x your annual wage (when you were lucky enough to still have one).

    How long until the workers revolt? How long have you gone without food?

  • GnuLinuxDude@lemmy.ml · 10 hours ago

    If you want to read an article that’s optimistic about AI and healthcare, but that falls apart if you start asking too many questions, try this one:

    https://text.npr.org/2026/01/30/nx-s1-5693219/

    Because it’s clear that people are starting to use it and many times the successful outcome is it just tells you to see a doctor. And doctors are beginning to use it, but they should have the professional expertise to understand and evaluate the output. And we already know that LLMs can spout bullshit.

    For the purposes of using and relying on it, I don’t see how it is very different from gambling. You keep pulling the lever, oh excuse me I mean prompting, until you get the outcome you want.

    • MinnesotaGoddam@lemmy.world · 4 hours ago

      The one time my doctor used it and I didn’t get mad at them (they did the google and said “the AI says”, and I started making angry Nottingham noises even though all the AI did was confirm that exactly what we had just been discussing was correct) uh, well, that’s pretty much it. I’m not sure where my parens are supposed to open and close on that story.

      • GnuLinuxDude@lemmy.ml · 2 hours ago

        Be glad it was merely that and not something like this: https://www.reuters.com/investigations/ai-enters-operating-room-reports-arise-botched-surgeries-misidentified-body-2026-02-09/

        In 2021, a unit of healthcare giant Johnson & Johnson announced “a leap forward”: It had added artificial intelligence to a medical device used to treat chronic sinusitis, an inflammation of the sinuses…

        At least 10 people were injured between late 2021 and November 2025, according to the reports. Most allegedly involved errors in which the TruDi Navigation System misinformed surgeons about the location of their instruments while they were using them inside patients’ heads during operations.

        Cerebrospinal fluid reportedly leaked from one patient’s nose. In another reported case, a surgeon mistakenly punctured the base of a patient’s skull. In two other cases, patients each allegedly suffered strokes after a major artery was accidentally injured.

        FDA device reports may be incomplete and aren’t intended to determine causes of medical mishaps, so it’s not clear what role AI may have played in these events. The two stroke victims each filed a lawsuit in Texas alleging that the TruDi system’s AI contributed to their injuries. “The product was arguably safer before integrating changes in the software to incorporate artificial intelligence than after the software modifications were implemented,” one of the suits alleges.

  • cecilkorik@piefed.ca · 9 hours ago

    It’s great at software development though /s

    Remember that when AI-written software soon powers all the devices doctors use daily.