The next time you’re due for a medical exam you may get a call from someone like Ana: a friendly voice that can help you prepare for your appointment and answer any pressing questions you might have.

With her calm, warm demeanor, Ana has been trained to put patients at ease — like many nurses across the U.S. But unlike them, she is also available to chat 24-7, in multiple languages, from Hindi to Haitian Creole.

That’s because Ana isn’t human, but an artificial intelligence program created by Hippocratic AI, one of a number of new companies offering ways to automate time-consuming tasks usually performed by nurses and medical assistants.

It’s the most visible sign of AI’s inroads into health care, where hundreds of hospitals are using increasingly sophisticated computer programs to monitor patients’ vital signs, flag emergency situations and trigger step-by-step action plans for care — jobs that were all previously handled by nurses and other health professionals.

  • Otter@lemmy.ca · 6 hours ago (edited)

    The description sounds more like an AI receptionist than an AI nurse. It would be helpful if patients could ask follow-up questions to the automated phone call before an appointment. Some clinics don’t have the manpower for that, and especially not in all the languages that the local population might speak.

    I’d be interested in seeing how good the model actually is, and how it determines when to hand a call off to a human.

    The concern is making sure the AI model is only used where it makes sense. Those looking to cut costs will try to use it everywhere, and that needs to be kept in check.

  • RonnyZittledong@lemmy.world · 10 hours ago

    I am sure this “nurse” will do everything in its power to keep you from talking to an expensive real human. It will have a very soothing voice while extracting as much value from you as possible after it examines the insurance you have.

  • Chozo@fedia.io · 9 hours ago

    I assume that this is using a highly-curated, custom model, and not some off-the-shelf GPT that just anybody can use, so it probably won’t be suggesting that patients eat glue or anything crazy.

    From what I can tell, it sounds like this is actually a fairly valid use for a chatbot, handling a lot of the tedious tasks that nurses are charged with. Most of what it seems to be doing, any untrained receptionist could also do (like scheduling appointments or reading dosage instructions), so this would free up nurses for actually important tasks like administering medications and triaging patients. It doesn’t seem like it’s going to be issuing prescriptions or anything where real judgement would be necessary.

    As long as hospital staff are realistic about what tasks the chatbot should handle, this actually seems like a pretty decent place to implement a (properly-tuned) LLM.

    • enumerator4829@sh.itjust.works · 8 hours ago

      LLM training is expensive, and so are prompt “engineers”. This will be the cheapest off-the-shelf LLM they can find, prompted by someone’s nephew. People will be eating glue.

      • Nurse_Robot@lemmy.world · 7 hours ago

        The healthcare industry has the money to be innovative, and the massive lawsuit risk to do it safely. I agree with the person you’re replying to, and feel that your sarcastic, dismissive response is probably a knee-jerk reaction to anything AI you come across.

        • enumerator4829@sh.itjust.works · 6 hours ago

          But why use money to innovate when there is profit to be made and laws are just made up?

          AI is the new kid on the block, trying to make a dent in our society. So far, we don’t really have that many useful or productive deployments. It’s on AI to prove its worth, and it’s kinda worthless until proven otherwise. (Name one interaction with a commercially deployed AI model you didn’t hate?)

          So far, Apple is failing with consumer products, Microsoft is backing off on GPU orders, research shows commercial GenAI isn’t increasing productivity, NVDA seems to be cooling off, and you expect the benevolent commercial health care industry to come to the rescue?

          Yeah, I’ll keep my knee-jerk reaction and keep living with my current socialised health care.

          • Nurse_Robot@lemmy.world · 6 hours ago

            AI helped me write papers in college, helped write letters to relatives, helped me create a very successful GoFundMe when my grandfather was hospitalized, helped me self-diagnose a skin condition, and helped my mental health when I couldn’t see a therapist. There are five interactions with commercially deployed AI models I didn’t hate. There are a lot more.

            • enumerator4829@sh.itjust.works · 6 hours ago

              I’m using “commercially deployed” in the sense of “the company you interacted with had an AI represent them in that communication”. You don’t use AI for that to increase customer satisfaction. (I wonder why I haven’t seen any AI products targeted at automated B2B sales?)

              I won’t argue that GenAI isn’t useful for end consumers using it properly. It is.

              (As an aside, I hope you and your grandfather get better!)