ChatGPT is full of sensitive private information and spits out verbatim text from CNN, Goodreads, WordPress blogs, fandom wikis, Terms of Service agreements, Stack Overflow source code, Wikipedia pages, news blogs, random internet comments, and much more.

  • brianorca@lemmy.world · 1 year ago

    Diffusion AI (most image AI) works differently from an LLM. Diffusion models actually start with noise and adjust it iteratively to satisfy the prompt. So they don’t tend to reproduce entire images unless they are overtrained (e.g. the same image was trained on a thousand times instead of once) or the prompt is overly specific (e.g. you ask for “The Mona Lisa by Leonardo”).
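
    A rough sketch of that denoising loop in Python. Everything here (the model callable, the schedule, the update rule) is a placeholder invented for illustration, not any real library’s API.

    ```python
    import numpy as np

    # Toy illustration of the idea above: start from pure noise and nudge it
    # toward the prompt over many small steps. `model` is a hypothetical
    # callable that predicts how much noise is left in the image.
    def generate_image(model, prompt_embedding, shape=(64, 64, 3), steps=50, seed=0):
        rng = np.random.default_rng(seed)
        x = rng.standard_normal(shape)                       # begin with pure noise
        for t in reversed(range(steps)):                     # walk the schedule backwards
            predicted_noise = model(x, t, prompt_embedding)  # guess which part is noise
            x = x - predicted_noise / steps                  # strip a little of it away
        return x                                             # the denoised array is the image
    ```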

    But words don’t work well with diffusion, since “dog” and “God” have very different meanings despite using the same letters. So an LLM instead spits out a specific sequence of word tokens, one at a time.
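
    For contrast, a token-by-token sketch of what an LLM does; `model` and `tokenizer` are again hypothetical stand-ins, not a real API.

    ```python
    import numpy as np

    # An LLM emits one discrete token at a time, each conditioned on all the
    # tokens before it, rather than refining a whole canvas of noise.
    def generate_text(model, tokenizer, prompt, max_tokens=50, seed=0):
        rng = np.random.default_rng(seed)
        tokens = tokenizer.encode(prompt)
        for _ in range(max_tokens):
            logits = model(tokens)                              # one score per vocabulary entry
            probs = np.exp(logits - logits.max())               # softmax (numerically stabilized)
            probs /= probs.sum()
            tokens.append(int(rng.choice(len(probs), p=probs))) # sample the next token
        return tokenizer.decode(tokens)
    ```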

    • Kogasa · 1 year ago

      You could use diffusion to generate text. You would use a semantic embedding in which (representations of) words are grouped according to how semantically related they are. Rather than dog/God, you would more likely switch dog for canine. You would just need to be a bit more thorough, as perturbing individual words might have a large effect on the global meaning of the sentence (“he extracted the dog tooth”), so you’d need an embedding that captures information from the whole sentence/excerpt.
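
      A toy numeric example of that idea: perturb a word’s vector a little, then snap back to the nearest vocabulary entry. With a semantic embedding the result stays near “canine”, not “God”. The vectors below are invented purely for illustration.

      ```python
      import numpy as np

      # Invented toy embeddings: "dog" and "canine" sit close together, while
      # "god" sits far away even though it shares letters with "dog".
      vocab = {
          "dog":    np.array([0.90, 0.10, 0.05]),
          "canine": np.array([0.85, 0.15, 0.10]),
          "god":    np.array([0.05, 0.20, 0.95]),
      }

      def nearest_word(v):
          # cosine similarity: nearby vectors mean semantically related words
          cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
          return max(vocab, key=lambda w: cos(v, vocab[w]))

      # One diffusion-like perturbation of the embedding for "dog":
      noisy = vocab["dog"] + np.random.default_rng(1).normal(0.0, 0.05, size=3)
      print(nearest_word(noisy))   # lands on "dog" or "canine" rather than "god"
      ```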