semi [he/him]@lemmy.ml · 2 months ago

For inference (running previously trained models that need lots of RAM), the desktop could be useful, but I would be surprised if training anything bigger than toy examples on this hardware made sense, since I expect compute performance to be the limiting factor.

Does anyone here have recent practical experience with ROCm and how it compares with the far more dominant CUDA? I would imagine compatibility is much better now that most models use PyTorch, which ROCm supports, but how does performance compare with a dedicated Nvidia GPU?
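
For what it's worth, on the compatibility side: ROCm builds of PyTorch reuse the `torch.cuda` API, so most CUDA-targeted code should run unmodified. A minimal sketch to check which backend is active (assuming a ROCm build of PyTorch is installed):

```python
# Minimal check: ROCm builds of PyTorch expose the torch.cuda namespace,
# so the same code path works on both backends; torch.version.hip vs
# torch.version.cuda tells you which one you actually have.
import torch

print("PyTorch:", torch.__version__)
print("GPU available:", torch.cuda.is_available())
print("HIP version:", torch.version.hip)    # set on ROCm builds, None on CUDA builds
print("CUDA version:", torch.version.cuda)  # set on CUDA builds, None on ROCm builds

if torch.cuda.is_available():
    device = torch.device("cuda")  # same device string on both backends
    x = torch.randn(1024, 1024, device=device)
    y = x @ x  # matmul dispatches to rocBLAS on ROCm, cuBLAS on CUDA
    print("checksum:", y.sum().item())
```

That only answers the "does it run" half of the question, though; timing something like the matmul above on both a ROCm and an Nvidia card would be needed for the performance half.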