• @[email protected]
    6 points · 4 months ago

    They should provide that instantly if the patient wants it (once the scan is developed). Add whatever disclaimers and waivers you want, but I wouldn’t mind an instant answer.

    • @[email protected]
      4 points · 4 months ago

      Or, just have it as part of the x-ray software.

      Analysis determines this could be X, here’s a link to more info on this rare condition. Please confirm diagnosis and report.

      We don’t need AI to make a diagnosis. It’s a tool. The health professional can be trained in its use, just like they are for any other test.
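
      A rough sketch of what that flag-and-confirm flow could look like (the Finding fields, flag_study, and the example.org link are made-up placeholders, not any real x-ray or PACS vendor API):

      ```python
      from dataclasses import dataclass

      @dataclass
      class Finding:
          """One machine-generated suggestion attached to an x-ray study."""
          condition: str           # e.g. "possible scaphoid fracture"
          confidence: float        # model score, 0.0-1.0
          reference_url: str       # link to more info on the condition
          confirmed: bool = False  # stays False until a clinician signs off

      def flag_study(model_scores: dict[str, float], threshold: float = 0.5) -> list[Finding]:
          """Turn raw model scores into findings for the radiologist to review."""
          return [
              Finding(condition=name, confidence=score,
                      reference_url=f"https://example.org/conditions/{name}")
              for name, score in model_scores.items()
              if score >= threshold
          ]

      def confirm(finding: Finding) -> Finding:
          """The report only goes out after a human signs off on each finding."""
          finding.confirmed = True
          return finding
      ```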

      • @[email protected]
        2 points · 4 months ago

        If you tell a professional that the answer is “B” while the professional had “A” in mind, you will have to convince them why “B” is the correct answer, or they will ignore your suggestion. I think a good LLM should be able to tell you which features it weighted most in its reasoning. It would be much easier to get used to as a tool that way.
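
        A minimal sketch of the kind of “here is what I weighted most” output that would help, using a plain linear model as a stand-in (the feature names and numbers are invented for illustration; a real image model would need something like SHAP or gradient saliency, but the idea is the same):

        ```python
        import numpy as np

        # Invented per-feature weights from a simple linear classifier
        # and one patient's standardised feature values.
        feature_names = ["opacity_area", "edge_sharpness", "symmetry", "bone_density"]
        weights = np.array([1.8, -0.4, 0.9, -1.2])
        patient = np.array([2.1, 0.3, -0.5, 1.0])

        # Contribution of each feature to this prediction = weight * value;
        # listing the largest ones is the "which features it valued most" explanation.
        contributions = weights * patient
        for name, c in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
            print(f"{name:>15}: {c:+.2f}")
        ```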

        • @[email protected]
          1 point · 4 months ago

          I agree, though they will be sceptical at first. However, research data over time should show the sensitivity and specificity, just like for any other test.
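
          For reference, sensitivity and specificity fall straight out of the confusion matrix; a tiny sketch with made-up counts:

          ```python
          def sensitivity(tp: int, fn: int) -> float:
              """Of the patients who really have the condition, the fraction the test catches."""
              return tp / (tp + fn)

          def specificity(tn: int, fp: int) -> float:
              """Of the patients who don't have it, the fraction the test correctly clears."""
              return tn / (tn + fp)

          # Made-up counts for illustration only.
          print(sensitivity(tp=90, fn=10))    # 0.90
          print(specificity(tn=850, fp=150))  # 0.85
          ```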