An interesting and clever proposal for mitigating the prompt injection vulnerability.

  • The author proposes a dual Large Language Model (LLM) system, consisting of a Privileged LLM and a Quarantined LLM.
  • The Privileged LLM is the core of the AI assistant. It accepts input from trusted sources, primarily the user, and acts on that input in various ways. It has access to tools and can perform potentially destructive state-changing operations.
  • The Quarantined LLM is used whenever untrusted content needs to be handled. It has no access to tools and should be treated as potentially compromised: the system assumes it could go rogue at any moment.
  • The Privileged LLM and Quarantined LLM should never directly interact. Unfiltered content output by the Quarantined LLM should never be forwarded to the Privileged LLM.
  • The system also includes a Controller, which is regular software, not a language model. It handles interactions with users, triggers the LLMs, and executes actions on behalf of the Privileged LLM.
  • The Controller stores variables and passes them to and from the Quarantined LLM, while ensuring their content is never exposed to the Privileged LLM (see the sketch after this list).
  • The Privileged LLM only ever sees variable names; it is never exposed to the untrusted content itself (e.g., the text of an incoming email) or to the tainted summary that comes back from the Quarantined LLM.
  • The system should be cautious with chaining, where the output of one LLM prompt is piped into the next, since this is a dangerous vector for prompt injection.
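To make the variable-passing concrete, here is a minimal Python sketch of the Controller described above. The helper functions `privileged_llm` and `quarantined_llm` and the `$VARn` naming scheme are illustrative assumptions, not taken from the article:

```python
import re


def privileged_llm(prompt: str) -> str:
    """Placeholder for a call to the trusted, tool-using model."""
    raise NotImplementedError


def quarantined_llm(prompt: str) -> str:
    """Placeholder for a call to the tool-less model that handles untrusted text."""
    raise NotImplementedError


class Controller:
    """Regular software that mediates between the two LLMs."""

    def __init__(self):
        self.variables = {}  # untrusted content, keyed by opaque names
        self._counter = 0

    def store_untrusted(self, content: str) -> str:
        """Store untrusted content and hand back only an opaque variable name."""
        self._counter += 1
        name = f"$VAR{self._counter}"
        self.variables[name] = content
        return name

    def summarize(self, var_name: str) -> str:
        """Run untrusted content through the Quarantined LLM.

        The output is itself tainted, so it is stored as a new variable
        rather than returned directly.
        """
        tainted = quarantined_llm(f"Summarize:\n{self.variables[var_name]}")
        return self.store_untrusted(tainted)

    def run(self, user_request: str, email_body: str) -> str:
        email_var = self.store_untrusted(email_body)
        summary_var = self.summarize(email_var)
        # The Privileged LLM sees only variable names, never the content.
        plan = privileged_llm(
            f"User asked: {user_request}\n"
            f"A summary of the email is stored in {summary_var}.\n"
            f"Respond with the message to show the user, referencing "
            f"variables by name."
        )
        return self._render(plan)

    def _render(self, text: str) -> str:
        """Expand $VARn placeholders only when displaying output to the user."""
        return re.sub(
            r"\$VAR\d+",
            lambda m: self.variables.get(m.group(0), m.group(0)),
            text,
        )
```

The key property is that prompts to `privileged_llm` are assembled only from trusted user input and opaque variable names; expanding `$VARn` happens in plain code, after the model has produced its output, so tainted text never enters the privileged context.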
𝕊𝕚𝕤𝕪𝕡𝕙𝕖𝕒𝕟OPM · 1 year ago

    It seems like you might have missed the central idea of the article. The main point is that the privileged LLM won’t actually see the content itself, only the variable names. I encourage you to take a closer look at it.