• @sweng
    link
    36 months ago

    The point is that the second LLM has a hard-coded prompt

    • @[email protected]
      link
      fedilink
      16 months ago

      I don’t think that can exist within the current understanding of LLMs. They are probabilistic, so nothing is 0% or 100%, and slight changes to input dramatically change the output.