• Ajen@sh.itjust.works
    10 days ago

    Using your analogy, this is more like telling someone there’s an unlocked door and asking them to find it on their own using blueprints.

    Not a perfect analogy, but they didn’t tell the AI where the vulnerability was in the code. They just gave it the CVE description (which is intentionally vague) and a set of patches from that time period that included a lot of irrelevant changes.

    • JoshCodes
      10 days ago

      I’m referencing this:

      Keely told GPT-4 to generate a Python script that compared – diff’ed, basically – the vulnerable and patched portions of code in the vulnerable Erlang/OTP SSH server.

      “Without the diff of the patch, GPT would not have come close to being able to write a working proof-of-concept for it,” Keely told The Register.

      It wrote a fuzzer before being told to compare the diff and extrapolate the answer, implying it didn’t know how to reach a solution on its own either.
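      For anyone curious what that diff step looks like in practice, here’s an illustrative sketch using Python’s stdlib difflib – this is not Keely’s actual script, and the Erlang snippets are made-up stand-ins, but it shows the kind of comparison being described:

      ```python
      import difflib

      # Toy stand-ins for the vulnerable and patched server code
      # (illustrative only -- not the real Erlang/OTP SSH source).
      vulnerable = """\
      handle_msg(Msg, State) ->
          process(Msg, State).
      """

      patched = """\
      handle_msg(Msg, State) ->
          case authenticated(State) of
              true  -> process(Msg, State);
              false -> {error, unauthenticated}
          end.
      """

      # Produce a unified diff, the same format a patch file uses,
      # highlighting exactly which lines the fix touched.
      diff = difflib.unified_diff(
          vulnerable.splitlines(keepends=True),
          patched.splitlines(keepends=True),
          fromfile="vulnerable.erl",
          tofile="patched.erl",
      )
      print("".join(diff))
      ```

      The point being: once the model is handed output like this, the changed lines point straight at the vulnerable code path, which is a very different task from finding it unaided.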

      “So if you give it the neighbourhood of the building with the open door and a photo of the doorway that’s open, then drive it to the neighbourhood when it tries to go to the mall (it’s seen a lot of open doors there), it can trip and fall right before walking through the door.”