I’m in the process of wiring a home before moving in, and getting excited about running 10g from my server to my computer. Then I see 25g gear isn’t that much more expensive, so I might as well run at least one fiber line. But what kind of three-node Ceph monster will it take to make use of any of this bandwidth (plus run all my Proxmox VMs and LXCs in HA), and how much heat will I have to deal with? What’s your experience with high-speed homelab NAS builds and the electric bill shock that comes later? Epyc 7002 series looks perfect but seems to idle high.
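
Here’s the rough idle-power math I’m doing on the “electric bill shock” part. The wattages and the $0.15/kWh rate are just assumptions for the sketch, not measurements:

```python
# Back-of-envelope idle electricity cost per year.
# Wattages and the $/kWh rate are assumptions, not measured figures.
HOURS_PER_YEAR = 24 * 365

def annual_cost(idle_watts: float, usd_per_kwh: float = 0.15) -> float:
    """Yearly cost of a box that idles at idle_watts, running 24/7."""
    kwh_per_year = idle_watts / 1000 * HOURS_PER_YEAR
    return kwh_per_year * usd_per_kwh

for label, watts in [
    ("single Epyc 7002 node (assumed ~100 W idle)", 100),
    ("three-node cluster of them", 300),
    ("low-power mini PC (assumed ~20 W idle)", 20),
]:
    print(f"{label}: ~${annual_cost(watts):.0f}/year")
```

At those assumed numbers, a three-node cluster idling 24/7 runs a few hundred dollars a year before it does any useful work.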

    • wreckedcarzz@lemmy.world
      6 months ago

      I just moved my Home Assistant Docker container to a new-to-me Xeon system. It also runs a couple of basically idle tasks/containers, so I threw BOINC at it to put it to good use. All wrapped up with Debian 12 on Proxmox…

      (I needed USB support for Zigbee in HA, and Synology yanked driver support from DSM with the latest major version, so “let’s just use the new machine”…)

    • johnnixon@lemmy.worldOP
      6 months ago

      I looked at Epyc because I wanted the bandwidth to run U.2 drives at full speed, and it wasn’t until Epyc or Threadripper that you could get much more than 40 lanes in a single socket. I’ve got to find another way to saturate 10g and give up on 25g. My home automation runs on a Home Assistant Yellow and works perfectly, for what it does.
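
      For reference, the rough lane math I was doing. The per-lane throughput, x4 per drive, and the NIC being x8 are typical/assumed values, not something I verified against a specific board:

      ```python
      # Rough PCIe lane/bandwidth budget for U.2 drives behind a 10/25GbE NIC.
      # PCIe 3.0 usable bandwidth is roughly ~0.985 GB/s per lane (assumed).
      PCIE3_GBPS_PER_LANE = 0.985

      def drive_bandwidth(lanes: int = 4) -> float:
          """Sequential ceiling for one U.2 NVMe drive (x4 assumed)."""
          return lanes * PCIE3_GBPS_PER_LANE

      def lanes_needed(num_u2_drives: int, nic_lanes: int = 8) -> int:
          """U.2 drives are x4 each; a 25GbE NIC is commonly x8 (assumed)."""
          return num_u2_drives * 4 + nic_lanes

      ten_gbe_gbs = 10 / 8          # ~1.25 GB/s line rate
      twenty_five_gbe_gbs = 25 / 8  # ~3.1 GB/s line rate

      print(f"one U.2 drive (PCIe 3.0 x4): ~{drive_bandwidth():.1f} GB/s")
      print(f"10GbE line rate: ~{ten_gbe_gbs:.2f} GB/s, 25GbE: ~{twenty_five_gbe_gbs:.2f} GB/s")
      print(f"lanes for 6 U.2 drives + one x8 NIC: {lanes_needed(6)}")
      ```

      So on paper even one decent U.2 drive can fill a 25g link by itself, which is why the lane count adds up fast once you want several of them at full speed.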

      • just_another_person@lemmy.world
        6 months ago

        Some unsolicited advice then: don’t go LOOKING for reasons to use the absolute max of what your hardware is capable of just because you can. You just end up spending more money 🤑

        For real though, just get an N100 or something that does what you need. You don’t need to waste money and power on an Epyc if it just sits idle 99% of the time.

        • johnnixon@lemmy.worldOP
          6 months ago

          What I need is 10g storage for my Adobe suite that I can access from my MacBook. I need redundant, fault-tolerant storage for my precious data. I need my self-hosted services to be highly available. What’s the minimum spec to reach that? I started down the U.2 path when I saw enterprise U.2 drives at a similar cost per GB to SATA SSDs, but faster and with crazy endurance. And when my kid wants to run a Minecraft server with mods for him and his friends, I’d better have some spare CPU cycles and RAM to keep up.

          • MangoPenguin@lemmy.blahaj.zone
            6 months ago

            You could technically do that with something like two ~$150 used business desktop PCs off eBay, 10th-gen Intel or thereabouts with Core i3/i5 CPUs.

            Throw some M.2 SSDs in each one in a mirror array for storage, add a bit of additional RAM if needed, plus a 10G NIC. The pair would probably use about 30-40W total.

            Minecraft servers are easy to run; they don’t need much, especially on a fairly modern CPU with high single-thread performance, and even a modded one only uses maybe 6GB of RAM.

            You’re not asking for a whole lot out of the hardware, so you could do it cheap if you wanted to.
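
            Quick RAM sanity check for one of those nodes, purely as a sketch: every per-service number here is a guess except the ~6GB modded Minecraft figure above, and 32GB total assumes you add that extra RAM.

            ```python
            # Rough RAM budget for one small node; all per-service figures are guesses
            # except the ~6 GB for a modded Minecraft server mentioned above.
            node_ram_gb = 32  # assumed total after adding "a bit of additional RAM"

            services_gb = {
                "Proxmox host + overhead": 2,
                "NAS / file-sharing VM": 4,
                "modded Minecraft server": 6,
                "misc self-hosted containers": 4,
            }

            used = sum(services_gb.values())
            print(f"planned: {used} GB of {node_ram_gb} GB -> {node_ram_gb - used} GB headroom")
            ```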

          • just_another_person@lemmy.world
            6 months ago

            Get a Drobo if you’re that worried about that kind of access, then. Make it simple.

            Otherwise anything with two NICs is the same thing.