Scientists have built a framework that makes generative AI systems like DALL·E 3 and Stable Diffusion significantly faster by condensing them into smaller models, without compromising output quality.
These kinds of performance improvements have really cool potential for real-time image/texture generation in games. I’ve already seen some games do this, but they usually rely on generating the images online.
ASCII and low-graphics roguelikes have a lot of generation freedom, so they can create very unique monsters/items/etc. However, a lot of that flexibility is lost as you move to more polished games that require models and art assets for everything. This is also one of the many reasons old-style games are still popular: they often offer more variety and randomization than newer titles. I think generated art assets could be a cool way to bridge that gap, though, and let more modern games have crazy unique monsters/items with visuals to match (a rough sketch of what local generation looks like is below).
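For what it's worth, distilled checkpoints can already do this kind of generation locally in a single step. Here's a minimal sketch assuming the Hugging Face diffusers library and the stabilityai/sd-turbo checkpoint; both are illustrative choices on my part, not something named in the article:

```python
# Minimal sketch: local, single-step texture generation with a distilled model.
# Assumes the Hugging Face diffusers library and the stabilityai/sd-turbo
# checkpoint (example choices, not from the article).
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sd-turbo", torch_dtype=torch.float16, variant="fp16"
)
pipe.to("cuda")

# Distilled "turbo" models generate in one denoising step, which is what
# makes near-real-time, in-game asset generation plausible.
image = pipe(
    prompt="seamless mossy stone wall texture, top-down, game asset",
    num_inference_steps=1,
    guidance_scale=0.0,
).images[0]
image.save("mossy_stone_texture.png")
```

On a consumer GPU this runs in well under a second per image, which is the kind of budget where generating assets at load time (or even mid-run) starts to look realistic.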