**Edit:** Thought it more appropriate to update an old post than start a new one! Since I seem to enjoy suffering under Windows, I’ve still been looking for any possible optimisation of Stable Diffusion models with AMD GPUs that doesn’t require Linux. This article seems quite promising: https://community.amd.com/t5/gaming/how-to-running-optimized-automatic1111-stable-diffusion-webui-on/ba-p/625585 It points to an optimised use of DirectML, as @[email protected] mentioned, but if performance is as good as claimed I would hope for more widespread adoption.

A few things have me curious though, and the more knowledgeable of you might answer faster than my trial and error attempts!

  • I understand there’s a general need to convert the model to ONNX so that it isn’t using PyTorch, though the article (under section 2) notes that quantisation converts ‘most layers from FP32 to FP16’. I’m guessing in most cases it might not even be noticeable, but wouldn’t that mean an overall reduction in the model’s quality?
  • Are ONNX versions of models (like SDXL) available, so that the conversion step could be skipped entirely and the model just substituted into section 5 of the article? I assume not: the Hugging Face pages for SD/SDXL mention the ability to convert, but I’ve only seen the .safetensors files listed.
  • Pure speculation now: would it ever be possible for A1111 to incorporate this process? I assume not if we need models in a specific format…
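On the first point, here’s a quick numpy sketch (not from the article, just an illustration) of what the FP32→FP16 note implies: each weight gets rounded to roughly 3 significant decimal digits, and very large values overflow entirely. In practice the per-weight error is tiny, which is why the quality loss is usually hard to notice.

```python
import numpy as np

# Fake "weights" standing in for a model layer (not real SD weights).
rng = np.random.default_rng(0)
weights_fp32 = rng.standard_normal(10_000).astype(np.float32)

# The FP32 -> FP16 cast, then back to FP32 to measure what was lost.
weights_fp16 = weights_fp32.astype(np.float16)
roundtrip = weights_fp16.astype(np.float32)

# Relative error introduced per weight by the cast.
rel_err = np.abs(roundtrip - weights_fp32) / np.maximum(np.abs(weights_fp32), 1e-12)
print(f"mean relative error: {rel_err.mean():.2e}")  # small, on the order of 1e-4

# FP16 also has a much smaller range than FP32 (max ~65504),
# so out-of-range values become infinity.
print(np.float16(70000.0))  # inf
```

So the conversion is lossy, but only at the level of rounding noise for normally-ranged weights; the quality question is really about whether any layers are sensitive to that noise, which is presumably why the article says “most” layers rather than all.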

In any case, I thought it was an interesting article for some of you. Out of interest I may try SD.Next and see if the experience differs greatly from A1111.


Hello world! Forgive an obvious question: I’m just hoping to find out whether any specific support for AMD GPUs on Windows has been confirmed? I’ve only seen it mentioned with regard to Linux specifically.

I’m sure it’s a matter of time, particularly after SDXL 1.0 is released, but I would appreciate any more information now all the same. Holding out hope it’s a little smoother than the SD 1.5 forks. Thank you!

  • jasparagus@lemmy.world · 1 year ago

    This is what I’m aware of for ROCm: AMD: Partial RDNA 3 Video Card Support Coming to Future ROCm Releases. TL;DR is that there’s still no clear commitment or date, and consumer GPU support is pretty weak.

    There’s DirectML, which is what SD.Next (Vlad Diffusion) and some others use on Windows. I think it works OK, but it can be slow, and from perusing the issues lists it seemed to have a lot of bugs and limited support (though I could be wrong there). I haven’t tried it, so others may know better. For perspective, I analysed the public Vladmandic SD benchmark data and saw zero 7900 XT(X) results on Windows. It seems like almost nobody uses Windows + AMD.

    • xenlaOP · 1 year ago

      Thank you 👍, hopefully community solutions will crop up soon enough. Personally I’m by no means an expert user; I only recently started to play with SD 1.5 locally. I have a 6800 XT, and while it does work with a fork, as you say the experience is quite flaky (inpainting did not work at all without playing with the startup commands, which in turn seems to have caused issues with clearing the VRAM), and it’s not nearly as fast as what NVIDIA cards seem to deliver. Still, I was hopeful there would be more adoption of AMD cards, as a recent driver update mentioned a “significant” improvement in DirectML performance for the 7900 XT. Just have to wait and see, I suppose!

      • jasparagus@lemmy.world · 1 year ago

        Thanks for reporting on that! It’s honestly rare to hear of anyone using one, so real-world info is sparse haha. I was seriously considering an RX 7900-series card, but skipped it after reading a few scattered experiences like yours. Maybe someday I’ll switch to Linux haha.

    • xenlaOP · 1 year ago

      Interesting, thanks for the link!

    • xenlaOP · 1 year ago

      😀 I’m also on that fork; I just noticed v1.5.0 states SDXL support. Thanks for the update!