• onion@feddit.de · 9 months ago

    You missed my point, I think. There is a finite number of possible prompts and settings, resulting in a finite number of possible images. I wasn’t talking about training sets at all.

    • okamiueru@lemmy.world · 9 months ago

      There is a finite number of possible prompts

      Why do you think this is the case? When people say “finite”, it’s not obvious what they mean by it. Everything humans have made is finite; anything a computer can represent is finite, mathematically speaking. But “finite” in the mathematical sense is as unsuited to describing the number of permutations of concepts in prompts as it is to saying there is only a “finite number of images” a computer can generate. It’s not a very interesting observation, because of course it is finite in this sense. Not to mention that the limits of this “finite”-ness are still further bounded by human comprehension.
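
      To put a rough number on what “finite” means for prompts, here is a minimal sketch; the vocabulary size and prompt length are made-up illustrative values, not parameters of any particular model:

      import math

      # Back-of-the-envelope count of ordered token sequences a prompt could be.
      # Both numbers are hypothetical, chosen purely for illustration.
      vocab_size = 50_000   # assumed tokenizer vocabulary size
      max_tokens = 75       # assumed prompt length limit

      # Possible prompts = vocab_size ** max_tokens; print only the order of magnitude.
      exponent = max_tokens * math.log10(vocab_size)
      print(f"on the order of 10^{exponent:.0f} possible prompts")   # roughly 10^352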

      But you are saying that this is missing your point… and it’s difficult to know what that point is. Because if your point is that it is finite, in the mathematical sense, then… yes, you are technically right.

      1. The latent space is discrete
      2. The latent space is ultimately bounded
      3. The latent space can only model an N-dimensional space for N “concepts”

      All three statements are true. The problem is mixing a vernacular use and understanding of “finite” with a mathematical one.

      The first one is similar to saying there is only a finite set of possible images, because (2^color_depth)^(width * height * channels) is a finite number. The second one is also a strange thing to point out, because of course the space is bounded by the concepts that exist and can be identified from training sets. The third one is fine. However, because there is a finite number of concepts, which in turn implies a mathematically finite number of permutations… suggesting that this makes it “finite” in the sense of being “a catalogue” is where you crossed the line between “technically correct” and “practically incorrect”.

      Yes, the permutations are finite, but… putting all three together, there are as many permutations as there is resolution in the N-dimensional discretization of the latent space. That is, it is “finite” in the same way that “all possible images a computer can output for a given color depth, channel count and size” is “finite”. If you mean to be “technically correct”, then it isn’t an interesting statement. If you allude to a vernacular understanding of finite (i.e. “just a catalogue”), then it simply isn’t correct.
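
      To make that comparison concrete, a small sketch; the image format and latent shape are made-up illustrative values, not taken from any particular model:

      import math

      # Order-of-magnitude counts for the two "finite" spaces being compared.
      # All concrete numbers are hypothetical, chosen purely for illustration.

      # Distinct 64x64 RGB images at 8 bits per channel:
      image_exponent = 64 * 64 * 3 * 8 * math.log10(2)
      print(f"images:  on the order of 10^{image_exponent:.0f}")

      # Distinct points in an assumed 4x64x64 latent stored as 32-bit floats:
      latent_exponent = 4 * 64 * 64 * 32 * math.log10(2)
      print(f"latents: on the order of 10^{latent_exponent:.0f}")

      # Both counts are finite, and both are so large that "finite" tells you nothing practical.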

      • onion@feddit.de · 9 months ago

        My original comment was about the timing of when the image is ‘created’. The ‘finiteness’ was supposed to be the supporting argument, but the other commenter put it in better words: “deterministic” (even though they disagree on that). I’m not sure why you are so hung up on the word ‘catalogue’; that was just illustrative wording.

        If you mean to be “technically correct”, then it isn’t an interesting statement.

        My comment started with the very word “technically”. If you find that uninteresting, you’re free to just scroll by.

        • okamiueru@lemmy.world · 9 months ago

          Naah. I call BS. You know what you’re implying.

          TeChNicAlLy it sort of already existed. A diffusion model can only generate an impossibly huge but finite number of images, and the content of these images is determined when the model is created. So when you ‘generate’ images you’re kinda just browsing a catalog

          The “already existed” gives it away. No, it didn’t “sort of already exist”, and there is no technically correct interpretation that can give you the benefit of the doubt either.

          content of these images is determined when the model is created

          Nope. The diffusion model guides a random noise image. Mathematically, any image can be generated out of pure randomness without any effect from the model at all. See how silly and uninteresting “technically correct” makes things?
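
          To illustrate, here is a schematic sketch of a sampling loop; predict_noise is a dummy stand-in for a trained network and the step sizes are arbitrary, so this shows only the shape of the process, not any real implementation:

          import numpy as np

          def predict_noise(x, t):
              # Hypothetical placeholder for the learned noise predictor.
              return 0.1 * x

          rng = np.random.default_rng()           # a different seed gives a different image
          x = rng.standard_normal((64, 64, 3))    # generation starts from pure random noise

          steps = 50
          for t in reversed(range(steps)):
              eps = predict_noise(x, t)           # the model's guess at the noise still in x
              x = x - eps / steps                 # remove a little of that estimated noise
              if t > 0:
                  x = x + 0.01 * rng.standard_normal(x.shape)   # fresh randomness at every step

          # x is the "generated" sample: it was steered out of random noise,
          # not looked up in any pre-existing catalogue.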

          So when you ‘generate’ images you’re kinda just browsing a catalog

          Nope.

          It is OK to admit you were wrong. No need to double down on this. I just wanted to explain why it was wrong.