• TimeSquirrel@kbin.social · 1 year ago

    Aren’t there diminishing returns on this, where at some point it makes more sense to offload work to a GPU or something instead of piling on ever more CPU cores? There have to be a lot of inefficiencies in that many interconnects.

    • AggressivelyPassive@feddit.de · 1 year ago

      GPUs aren’t really suitable for many workloads. These CPUs are typically used in servers; you can’t really offload a Docker container onto a GPU.

    • hamsterkill@lemmy.sdf.org · 1 year ago

      This is the type of processor companies want in things like VM servers that host large numbers of VMs.

      GPUs are only really good at specific kinds of computation. These are still general-purpose processors.

    • _s10e@feddit.de · 1 year ago

      The alternative to multiple cores is a single core that runs faster. We tried that and hit a limit. So it’s many cores now.

    • namingthingsiseasy · 1 year ago

      GPUs are still pretty bad at handling conditional logic; they’re optimized for mathematical operations instead.

      But you are right in the sense that people are exploring different kinds of hardware for workloads that are getting increasingly specific. We’re not in a CPU vs GPU world anymore, but more like a “what kind of CPU do I need?” situation.

    • Blackmist@feddit.uk · 1 year ago

      One of their benchmark graphs is for Stable Diffusion, showing how much faster their CPU runs it than a 96-core AMD Epyc CPU. I’m like 99% sure a GPU would run that at least 10 times faster.