The lawyer said that he had “no idea” ChatGPT could fabricate information and that he “deeply” regretted his decision.

  • Kara@kbin.social · 1 year ago

    I feel like a good lawyer should do a tiny bit of research before letting AI write a court brief. At least enough to read the 2 warnings on the ChatGPT site about how it can generate inaccurate information.

      • SJ_Zero@lemmy.fbxl.net · 1 year ago

        It’s impressive how well ChatGPT hallucinates citations.

        I was asking it about a field of law I happen to be quite aware of (as a layman), and it came up with entire sections of laws that didn’t exist to support its conclusions.

        Large Language Models like ChatGPT are in my view verisimilitude engines. Verisimilitude is the appearance of being true or real. You’ll note, however, that it is not being true or real, simply appearing so.

        It’s trying to make an answer that looks right. If it happens to know the actual answer, that’s what it’ll go with; if it doesn’t, it’ll go with what a correct answer might statistically look like. For fields with actual right and wrong answers, like law and science and technology, its tendency to make things up is really harmful if the person using the tool doesn’t know it will lie.
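
        You can see the same effect in miniature with a toy language model. This sketch is not how ChatGPT works (it's a tiny bigram Markov chain, and the citations in the training list are made up for illustration), but it shows the core failure mode: a model trained only on what citations *look like* will happily emit novel, nonexistent citations that follow the right pattern.

        ```python
        import random

        # Toy training data: citation-shaped strings (invented for this example).
        corpus = [
            "Smith v. Jones , 410 U.S. 113 ( 1973 )",
            "Brown v. Green , 347 U.S. 483 ( 1954 )",
            "Doe v. Roe , 384 U.S. 436 ( 1966 )",
        ]

        # Build a bigram table: each word maps to the words that followed it.
        follows = {}
        for line in corpus:
            words = line.split()
            for a, b in zip(words, words[1:]):
                follows.setdefault(a, []).append(b)

        def generate(start, rng, max_len=12):
            # Walk the chain, always picking a statistically plausible next word.
            out = [start]
            while out[-1] in follows and len(out) < max_len:
                out.append(rng.choice(follows[out[-1]]))
            return " ".join(out)

        # Every output is citation-shaped, but the parties, reporter numbers,
        # and years can be recombined into a "case" that never existed.
        print(generate("Smith", random.Random(0)))
        ```

        The model has no notion of which citations are real; it only knows which words tend to follow which. Scale that idea up by many orders of magnitude and you get fluent, confident, entirely fabricated case law.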