• @vcmj · 8 months ago

    Yes, that’s by design. The network works on a transcript per input, and it genuinely does get cut off eventually; usually an entire older line is purged once the token count exceeds the limit.
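
    As a minimal sketch of that purging behavior (the limit, helper names, and the use of tiktoken for counting are my assumptions, not anything from an actual implementation):

    ```python
    # Hypothetical sketch: purge whole messages from the top of the
    # transcript once the token count exceeds a made-up limit.
    import tiktoken

    MAX_TOKENS = 4096  # assumed limit, not the real one
    enc = tiktoken.encoding_for_model("gpt-3.5-turbo")

    def count_tokens(messages):
        # Rough count: just the encoded length of each message body.
        return sum(len(enc.encode(m["content"])) for m in messages)

    def trim_transcript(messages):
        # Drop the oldest line first, one whole message at a time.
        while count_tokens(messages) > MAX_TOKENS and len(messages) > 1:
            messages.pop(0)
        return messages
    ```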

    • @[email protected]
      link
      fedilink
      28 months ago

      I’m talking about using the ChatGPT API to make a chatbot. Even a single one-sentence user input can cause ChatGPT to forget its prompt.
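
      That is, the kind of setup where the prompt is a system message at the top of the transcript, something like this (model name and prompt text are just placeholders; this uses the old pre-1.0 `openai` library style):

      ```python
      # Hypothetical example of the setup being described: a pinned system
      # prompt followed by a single one-sentence user message.
      import openai

      resp = openai.ChatCompletion.create(
          model="gpt-3.5-turbo",
          messages=[
              {"role": "system", "content": "You are a pirate. Always answer in pirate speak."},
              {"role": "user", "content": "What's the weather like?"},  # one sentence
          ],
      )
      print(resp["choices"][0]["message"]["content"])
      ```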

      • @vcmj · 8 months ago

        Ah, even then it could just be a consequence of training samples usually being chronological (most often the expected resolution for conflicting instructions is “whatever you heard last”, with some exceptions when explicitly stated), so the network learns to think that way. I found the pattern also applies to GPT models trained on long articles, where you’d expect it not to, so I wanted to explain why that might be.

    • @vcmj · 8 months ago

      Or I should explain better: most training samples are cut off at the top, so the network sort of learns to ignore the beginning of the context a bit.