I have used several different generators. What they all seem to have in common is that they don’t always display what I am asking for. For example, if I ask for a person in jeans and a t-shirt, I will get images of a person wearing totally different clothing, and it isn’t consistent. Another example: if I ask for a full body picture, that instruction seems to be ignored, and I get just waist up or just below the waist. Same goes if I ask for side views or back views. Sometimes they work. Sometimes they don’t. More often they don’t.

I have also noticed that none of the negative requests seem to actually work. If I ask for pictures of people without cell phones or tattoos, like magic they have cell phones, and some have tattoos. I have seen this in every single generator I have used.

Am I asking for things the wrong way, or is the AI doing whatever it wants and not paying attention to my actual request?

Thanks

  • Altima NEO@lemmy.zip · 1 year ago

    DALL-E 3 seems to be the easiest to use and, in my experience, does pretty well with prompts like that.

    The issue is that it throttles you after a while, and it’s heavily censored for seemingly innocuous words.

    Stable Diffusion can be a bit dumb sometimes, occasionally giving you an image of a person wearing denim everything. But if you’re willing to put in the time to learn Stable Diffusion, and you’re able to run it on your PC, it gives you a lot of freedom and unlimited image output, as fast as your GPU can handle. You could use the “regional prompter” extension to mark zones where you want jeans, a specific shirt, etc., or use inpainting to regenerate a masked area. It’s more work, but it’s very flexible and controllable.
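
    If you want to see what those two tricks look like under the hood, here’s a rough sketch in Python with Hugging Face’s diffusers library: a text-to-image pass with a negative prompt, then an inpainting pass over a masked region. It’s just an illustration, not the webui extension itself; the model IDs and the mask file path are placeholders, so swap in whatever checkpoint and mask you actually use.

    ```python
    import torch
    from PIL import Image
    from diffusers import StableDiffusionPipeline, StableDiffusionInpaintPipeline

    device = "cuda" if torch.cuda.is_available() else "cpu"
    dtype = torch.float16 if device == "cuda" else torch.float32

    # 1) Text-to-image with a negative prompt.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # placeholder model ID
        torch_dtype=dtype,
    ).to(device)
    image = pipe(
        prompt="full body photo of a person in blue jeans and a white t-shirt",
        negative_prompt="cell phone, tattoos, cropped, close-up",
        num_inference_steps=30,
        guidance_scale=7.5,
    ).images[0]
    image.save("base.png")

    # 2) Inpainting: regenerate only the white area of the mask.
    inpaint = StableDiffusionInpaintPipeline.from_pretrained(
        "runwayml/stable-diffusion-inpainting",  # placeholder model ID
        torch_dtype=dtype,
    ).to(device)
    mask = Image.open("shirt_mask.png")  # hypothetical hand-drawn mask over the shirt
    fixed = inpaint(
        prompt="plain white t-shirt",
        image=image,
        mask_image=mask,
    ).images[0]
    fixed.save("fixed.png")
    ```

    The negative_prompt argument is why this works better locally: Stable Diffusion feeds it into classifier-free guidance and actively steers the image away from those concepts, whereas typing “no cell phones” into a hosted generator’s prompt box often just puts “cell phones” in front of the model.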