• zygo_histo_morpheus
    15 hours ago

    What are the odds that you’re actually going to get a bounty out of it? Seems unlikely that an AI would hallucinate an actually correct bug.

Maybe the people doing this are much more optimistic about how useful LLMs are for this than I am, but it’s possible there’s some more malicious idea behind it.

    • CandleTiger
      34 minutes ago

      Maybe the people doing this are much more optimistic about how useful LLMs are for this than I am

      Yes. That is the problem being reported in this article. There are many, many people with complete and unblemished optimism about how useful LLMs are, to the point where they don’t recognize it as optimism and don’t understand why other people won’t take them seriously.

      Some of them are professionals in related fields.