ChatGPT generates cancer treatment plans that are full of errors — Study finds that ChatGPT provided false information when asked to design cancer treatment plans

Researchers at Brigham and Women’s Hospital found that cancer treatment plans generated by OpenAI’s chatbot were full of errors.

  • @[email protected] · 1 year ago

    It’s not merely a preconception. It’s a rather obvious and well-known limitation of these systems. What I’m decrying is that some people, out of apparent ignorance, think things like “ChatGPT can give a reliable cancer treatment plan!” or “here, I’ll have it write a legal brief and not even check it for accuracy.” But sure, I agree with you, minus the needless sarcasm. It’s useful to prove or disprove even absurd hypotheses. And clearly people need to be told explicitly that ChatGPT is not always factual, so hopefully this helps.

    • @[email protected]
      link
      fedilink
      English
      81 year ago

      I’d say that a measurement always trumps argument: at least then you know how accurate these systems actually are. A statement like the following couldn’t have been arrived at by reasoning alone:

      The JAMA study found that 12.5% of ChatGPT’s responses were “hallucinated,” and that the chatbot was most likely to present incorrect information when asked about localized treatment for advanced diseases or immunotherapy.
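
      To make the “measurement” point concrete, here is a minimal Python sketch of how a hallucination rate like the 12.5% above could be computed from expert-annotated responses. The 13-out-of-104 counts are hypothetical, chosen only to reproduce that percentage, and the Wilson score interval is included to make the measurement’s own uncertainty explicit:

      ```python
      import math

      def hallucination_rate(labels: list[bool], z: float = 1.96) -> tuple[float, float, float]:
          """Fraction of responses flagged as hallucinated, plus a Wilson
          score interval showing how precise that measurement is."""
          n = len(labels)
          p = sum(labels) / n
          denom = 1 + z**2 / n
          center = (p + z**2 / (2 * n)) / denom
          half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
          return p, max(0.0, center - half), min(1.0, center + half)

      # Hypothetical annotations: True means an expert flagged the response as hallucinated.
      labels = [True] * 13 + [False] * 91  # 13/104 = 12.5%, matching the figure quoted above
      rate, low, high = hallucination_rate(labels)
      print(f"hallucinated: {rate:.1%} (95% CI {low:.1%} to {high:.1%})")
      ```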

      • @[email protected]
        link
        fedilink
        English
        51 year ago

        That’s useful. It’s also worth noting that the information the model can relay depends heavily on the data it was trained on, so these results could change as models are retrained.