• Rhaedas@kbin.social · 1 year ago

    In the context of LLMs, I think that means giving them access to their own outputs in some way.

    That’s what the AutoGPTs do (as well as others; there are so many now): they break the task apart into smaller pieces and feed the results back in, building up a final result, and that works a lot better than a single one-shot input. The biggest advantage, and the main reason these were developed, is keeping the LLM on course without deviation. The basic loop is sketched below.
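
    Roughly, the loop looks something like this: a minimal sketch in Python, where `call_llm` is just a stand-in for whatever model API you’d plug in, not any particular library.

    ```python
    # Minimal sketch of an AutoGPT-style loop: decompose a goal into sub-tasks,
    # run each one through the model, and feed accumulated results back in.

    def call_llm(prompt: str) -> str:
        # Placeholder: wire this up to your chat-completion API of choice.
        raise NotImplementedError("plug in your LLM API call here")

    def run_agent(goal: str, max_steps: int = 5) -> str:
        results: list[str] = []
        for _ in range(max_steps):
            # The prompt restates the goal plus everything done so far,
            # which is what keeps the model "on course" between steps.
            context = "\n".join(results) or "(none)"
            prompt = (
                f"Goal: {goal}\n"
                f"Progress so far:\n{context}\n"
                "What is the next single sub-task? Reply DONE if the goal is complete."
            )
            next_task = call_llm(prompt)
            if next_task.strip() == "DONE":
                break
            # Execute the sub-task with another call and store its output.
            results.append(call_llm(f"Do this sub-task and report the result: {next_task}"))
        # Final pass: synthesize an answer from the accumulated step outputs.
        return call_llm(
            f"Goal: {goal}\nStep results:\n" + "\n".join(results) + "\nWrite the final answer."
        )
    ```

    The key design point is that every call sees the goal plus the prior outputs, so the model builds on its own results instead of getting one giant prompt and drifting.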

    • jadero · 1 year ago

      Thanks, I didn’t know that. I guess I need to broaden my reading.

      • Rhaedas@kbin.social · 1 year ago

        It changes so much so fast. For a video source to grasp the latest stuff, I’d recommend the YouTube channel “AI Explained”.