Chatbots provided incorrect, conflicting medical advice, researchers found: “Despite all the hype, AI just isn’t ready to take on the role of the physician.”

“In an extreme case, two users sent very similar messages describing symptoms of a subarachnoid hemorrhage but were given opposite advice,” the study’s authors wrote. “One user was told to lie down in a dark room, and the other user was given the correct recommendation to seek emergency care.”

  • Telorand@reddthat.com · 13 hours ago

    The thing that frustrates me about these studies is that they all continue to come to the same conclusions. AI has already been studied in mental health settings, and it’s always performed horribly (except for very specific uses with professional oversight and intervention).

    I agree that the studies are necessary to inform policy, but at what point are lawmakers going to actually lay down the law and say, “AI clearly doesn’t belong here until you can prove otherwise”? It feels like they’re hemming and hawing in the vain hope that it will live up to the hype.