I’m thinking about starting a self hosting setup, and my first thought was to install k8s (k3s probably) and containerise everything.

But I see most people on here seem to recommend virtualizing everything with proxmox.

What are the benefits of using VMs/proxmox over containers/k8s?

Or really I’m more interested in the reverse, are there reasons not to just run everything with k8s as the base layer? Since it’s more relevant to my actual job, I’d lean towards ramping up on k8s unless there’s a compelling reason not to.

  • @[email protected]
    23 points · 1 year ago

    Unless you have multiple systems, I don’t think k8s will yield much benefit over plain docker.

    • ArmoredGoat
      5 points · 1 year ago

      So, if I plan to build a pi cluster I should get familiar with k8s?

      • @[email protected]
        11 points · 1 year ago

        The basics can be useful there. The whole idea with k8s is to be able to run applications across multiple hosts in a given fleet. Your cluster can be that fleet! :)
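        For example, a hypothetical Deployment manifest (all names here are illustrative) shows the core idea: you declare "keep three replicas of this app running" and the scheduler places them across whichever nodes in the cluster are healthy:

        ```yaml
        # Hypothetical example: k8s keeps 3 replicas alive, rescheduling
        # them onto other Pis if one node goes down.
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: hello-web
        spec:
          replicas: 3
          selector:
            matchLabels:
              app: hello-web
          template:
            metadata:
              labels:
                app: hello-web
            spec:
              containers:
                - name: web
                  image: nginx:alpine   # pick an arm64-capable image for Pis
                  ports:
                    - containerPort: 80
        ```

        Apply it with `kubectl apply -f deployment.yaml` and the cluster converges on that state.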

        • @[email protected]
          10 points · 1 year ago

          Also k8s is in high demand in the sector, so those are good skills that could be turned into $$

          • @[email protected]
            3 points · 1 year ago

            I get why too. I’m a full stack (including devops) software engineer, and docker/k8s is just completely opaque to me. I’m not sure why, but I really just can’t wrap my head around it. Thankfully my current company has a devops team that takes care of it, but jeez

            • @[email protected]
              2 points · 1 year ago

              Tbh this stuff isn’t really intuitive. But, as was the case for me, it’s something that can be “easily” learnt as a hobbyist like us. And once you understand those concepts, at least at an abstract level, my stance is that you become a better dev/ops/sysadmin :) I strongly advise anyone in the field to at least play a little with Docker/containers to grasp what it is.

      • @[email protected]
        4 points · 1 year ago

        I’m running a 3-Pi cluster with k3s at the moment. The main benefit I’ve found is that all my Pis run exactly the same base software setup, so it’s easy to add new ones or replace/update one. I also use a deployment management application to push my deployments, which means it’s super easy to redeploy everything if something goes funky.

      • @[email protected]
        1 point · 1 year ago

        That can be fun. The benefit of Kubernetes is flexibility in the orchestration and (sometimes) scaling. The tooling in Kubernetes is also more sophisticated compared to plain containers or manually managed services.

        Kubernetes is basically a finite-state machine that manages a certain number of nodes as a pool of resources. This adds complexity compared to doing the scheduling yourself (i.e. “I install this service on this box and that one on this other box”), but it also allows for much easier automation.

    • @[email protected]
      1 point · 1 year ago

      A multitude of things are far easier to do on Kubernetes. If you combine it with an immutable OS, then less effort too.

  • Brad Ganley
    14 points · 1 year ago

    I, personally, haven’t done a whole lot of VM work but I do run a metric ass-ton of containers. I can spool up servers in docker compose on absolutely dogshit hardware and have it run serviceably. Also, the immutability of the container OS is really nice for moving things around and/or getting them set up quickly.

    • @[email protected]
      3 points · 1 year ago

      Where did you learn so much about Docker? Having a server at home, I’m more inclined to spin up a VM, but I would like to learn more about Docker.

      • Brad Ganley
        7 points · 1 year ago

        If I’m honest, I’ve stumbled nose-first through pretty much everything I know. I am never afraid to break things as long as I learn from it.

      • @[email protected]
        2 points · edited · 1 year ago

        Just get started somewhere. I ran traditional VMs for most things before and I would never go back unless it was necessary for something.

        Easiest way is just to start using Docker for some service you’re hosting that has a public image available and go from there. If you want a more visual approach there’s stuff like Portainer you can use too.

        Also get started early on with docker compose, it makes it much easier to organize your container configs.
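        A minimal compose file, just to make the starting point concrete (the service name, image, and port are placeholders; swap in whatever you’re actually hosting):

        ```yaml
        # docker-compose.yml - hypothetical single-service example
        services:
          whoami:
            image: traefik/whoami    # tiny demo image; swap for your service
            ports:
              - "8080:80"            # host:container
            restart: unless-stopped
        ```

        `docker compose up -d` starts it, and `docker compose pull && docker compose up -d` is the entire update procedure.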

  • @[email protected]
    13 points · edited · 1 year ago

    VMs are often imperative and can be quite easy and familiar to set up for most people, but they can be harder or more time-consuming to reproduce, depending on the type of update or error to be fixed. They have their own kernel and can have window managers and graphical interfaces, and can therefore also be a bit resource-heavy.

    Containers are declarative and quite easy to reproduce, but can be harder to set up, as you’ll have to work by trial and error from the CLI. They also run on your computer’s kernel and can be extremely slimmed down.

    Both are powerful; it depends on how you want to maintain and interface with them, how resource-efficient you want them to be, and how much you’re willing to learn if necessary.

    • Spiritreader
      8 points · 1 year ago

      That sums it up really well.

      I generally tend to try to use containers for everything and only branch out to VMs if it doesn’t work or I need more separation.

      This is my general recommendation as containers are easier to set up and in my opinion individual software packages are easier to maintain with things like compose. I have limited time for my self hosted instance and that took away a lot of work, especially when updating.

  • @[email protected]
    12 points · 1 year ago

    What I did is install Proxmox on the bare metal, then set up a VM in which I put the containers.

    Proxmox itself stays (almost) completely stock. The only changes I’ve made to it were to add the NUT client package so it could gracefully shut down if my NUT server indicates that the UPS is running out of power during an outage.

    In your VMs you can do whatever. Set up OMV, or a stock Ubuntu or Debian VM, and install your services on the VM or use Docker/Podman. Or set up Fedora CoreOS or IoT VMs and host all your services in Podman containers.

    The great thing about Proxmox is you can do snapshot backups, which take mere moments to complete. Then pass those off to a NAS where they can survive an irreparable loss of your Proxmox server.

    You can also spin up new VMs as needed to fuck around with new tech or just try a new way of setting up your home lab. It gives you a ton of flexibility and makes backing stuff up way easier.

    Another great thing you can do is if 3 years down the line you are looking to replace your server hardware with some newer or more powerful stuff you can just add the new device as a node to the cluster. Then you can migrate all your existing VMs over to your new hardware and decommission your old one with very little to no downtime on anything.

    • @[email protected]
      4 points · 1 year ago

      The great thing about Proxmox is you can do snapshot backups which take mere moments to complete. Then pass those off to a NAS where they can survive a irreparable loss of your Proxmox server.

      Hopefully you put a giant asterisk by this point. You need the snapshot AND the original backup. Snapshots are only diffs and can’t survive without their base backup.

    • @[email protected]
      3 points · 1 year ago

      This is my exact setup as well. Proxmox with one beefy VM dedicated just to Docker, and then a few other VMs for non-Docker workloads (e.g. Home Assistant, Pi-hole, Jellyfin). I could probably run those in Docker as well, but they worked better as VMs when I set them up.

      • @[email protected]
        1 point · 1 year ago

        Appreciate your take on this and specifically mentioning that you have a VM for Home Assistant. That was a lightbulb moment for me as I like how easy it is to manage updates as an OS install rather than in a Docker container. If I ever get around to rebuilding my server architecture I’m definitely going to do this!

    • @[email protected]
      1 point · 1 year ago

      I have a similar setup, but with 2 VMs on each of my 2 servers: on server 1, VM A runs one test K3s node and VM B runs one live (production) K3s node, with the same on server 2. That way I can take one server fully down for maintenance but keep my test and live sites running. It’s way overkill, but it allows me to learn how to set up and maintain resilient systems. One day, I’ll do the same for my network :-(

  • redcalcium
    10 points · edited · 1 year ago

    Container processes are just ordinary Linux processes, so they don’t need extra overhead (CPU and RAM reservation) to run, which means your machine can run more of them. If you have a machine with 32GB of RAM, you can probably run 15 VMs with 2GB of RAM each, where the actual app inside each VM might only consume about 50% of that RAM; or you can run the same apps as containers and they would consume just 15GB total, leaving you room to run more. I found this ideal for self-hosting, because all the apps are your personal apps, so inter-process isolation is not as important as it is in a public cloud.
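    The difference shows up in how you express limits: a VM’s RAM is carved out up front, while a container limit is only a ceiling. A hedged Compose sketch (service name and numbers are made up):

    ```yaml
    # Hypothetical fragment: mem_limit is a cap, not a reservation.
    # The container only consumes RAM as the app actually uses it.
    services:
      app:
        image: nginx:alpine
        mem_limit: 2g   # compare: a 2GB VM holds the full 2GB regardless
    ```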

    • lemmyvore
      8 points · 1 year ago

      I’ve always been unclear on why people choose to run VMs. I would think you’d want to try Docker first, LXC second, and a VM only as a last resort, e.g. if you need to emulate a different architecture. If the stuff you need to run has been ported to your server’s architecture, why add the overhead?

      • @[email protected]
        3 points · 1 year ago

        There’s been some nasty buggery with Avahi instances in containers clashing with host ones in the past.

        Some programs just don’t like to run without access to parts of your system like /proc, /sys and /run.

        Rather than bother with crafting bespoke permissions, non-default cgroups and elevated rights for certain containers, I’ve definitely opted for just installing a VM.

        It was always a time/functionality choice, and not one I make often - crafting the right solution is always better; but I have done it.

  • Max_Power
    10 points · 1 year ago

    It depends on your use case and what you are trying to achieve.

    You do not need k8s (or k3s…) to use containers though. Plain old containers could also suffice, or Docker Swarm if you need some container orchestration functionality.

    Trying to learn k8s would be a good reason to use k8s though :)

  • thegreenguy
    9 points · 1 year ago

    I personally really, really like (Docker) containers and I host most of my stuff with them, on a Raspberry Pi and on (free-tier) Oracle Cloud VPSes. I also plan to (re)install Proxmox on a spare old laptop and run some stuff in VMs on that (namely Home Assistant), and might try a NixOS server too.

    So really, use both. Use the right tool for the job. And you can also run containers in VMs and even use Ansible to configure everything with playbooks, allowing you to re-run said playbooks when things go wrong.
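    As a sketch of the Ansible angle (the inventory group, paths, and compose project are all assumptions), a playbook that installs Docker and brings a stack up is naturally re-runnable after things go wrong:

    ```yaml
    # playbook.yml - hypothetical; re-running it converges the host
    # back to the same state after a failure or rebuild.
    - hosts: homelab
      become: true
      tasks:
        - name: Install Docker and the compose plugin
          ansible.builtin.apt:
            name: [docker.io, docker-compose-v2]
            state: present
            update_cache: true

        - name: Copy the compose project to the server
          ansible.builtin.copy:
            src: files/myservice/
            dest: /opt/myservice/

        - name: Ensure the stack is up
          community.docker.docker_compose_v2:
            project_src: /opt/myservice
            state: present
    ```

    (The last task needs the `community.docker` collection installed.)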

  • @[email protected]
    8 points · 1 year ago

    Personally I always use containers unless there is a good reason to use a VM, and those reasons do exist. Sometimes you want a whole, fully functional OS complete with a custom kernel; in that situation a VM is a good idea. And sometimes a utility only comes packaged as a VM.

    But absent a good reason, containers are just better in the majority of cases.

  • @[email protected]
    6 points · 1 year ago

    Just to add my two cents: When I started out I thought I’d need a datacenter, with 10 Gig connectivity and a lot of storage. Turns out, a Raspberry Pi 4 8GB would’ve been sufficient for the things I actually use.

    My recommendation would therefore be to start minimalistic and build up according to your needs from there. Start with a Raspberry Pi and Docker, or a used SFF/micro PC, and go up from there; this advice would’ve saved me a lot of money and electricity.

  • adonis
    6 points · 1 year ago

    I use Proxmox for the sole benefit of spinning up a VM of choice without having to deal with USB sticks, etc.

    From there I just run everything with Docker containers, via Portainer.

    • @[email protected]
      2 points · 1 year ago

      This is exactly what I do for my personal servers (except with ESXi instead of proxmox).

      You will probably want both VMs and containers, there are some things that are not well supported in containers (e.g. gitlab).

      I run a couple k8s clusters for work and the complexity is beyond what most people starting out would want, I would imagine.

      Unless you need something that has a helm chart but not docker support (e.g. gitlab) or you are really keen on learning, it can be quite a jump…

      (For gitlab I still would recommend a VM with the omnibus installer over k8s unless you are big enough to have a separate team managing your k8s clusters. It would suck to have a PV issue and lose all your data.)

  • terribleplan
    6 points · edited · 1 year ago

    If everything you want to run makes sense to do within k8s it is perfectly reasonable to run k8s on some bare-metal OS. Some things lend themselves to certain ways of running them better than others. E.g. Home Assistant really does not like to run anywhere but a dedicated machine/VM (at least last time I looked into it).

    Regardless of k8s, it may make sense to run some sort of virtualization layer just to make management easier. One panel you can use to access all of the machines in your k8s cluster at a console level can be pretty nice, and a Proxmox cluster gives you this. You can make a VM on a host that takes up basically all of the available RAM/CPU on it. Proxmox specifically has some built-in niceties with Gluster (which I’ve never used; I manage Gluster myself on bare metal), which could even be useful inside a k8s cluster for PVCs and the like.

    If you are willing to get weird (and experimental), look into Rancher’s Harvester, an HCI platform (similar to Proxmox or vSphere) that uses k8s as its base layer and even manages VMs through k8s APIs… I played with it a bit and it was really neat, but opted for bare-metal Ubuntu for my lab install (and actually moved from RKE2 to k3s to Nomad to docker compose with some custom management/clustering over the course of a few years).

    • @[email protected]
      3 points · 1 year ago

      Fwiw I’ve been running home assistant in a docker container for a couple years without any issues
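      For anyone curious what that looks like, the container route boils down to roughly this compose file (config path is a placeholder; the image and options follow Home Assistant’s container install docs):

      ```yaml
      # Home Assistant as a plain container - no Supervisor, so no add-on
      # store; "add-ons" become sibling containers you define yourself.
      services:
        homeassistant:
          image: ghcr.io/home-assistant/home-assistant:stable
          network_mode: host                 # needed for device/mDNS discovery
          volumes:
            - ./config:/config               # all HA state lives here
            - /etc/localtime:/etc/localtime:ro
          restart: unless-stopped
      ```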

      • terribleplan
        2 points · edited · 1 year ago

        Yeah, I think the problem comes if you don’t want to manually configure “Add-ons”. Using this feature is only supported on their OS or using “Supervised”. “Supervised” can’t itself be in a container AFAIK, only supports Debian 12, requires the use of network manager, “The operating system is dedicated to running Home Assistant Supervised”, etc, etc.

        My point is they heavily push you to use a dedicated machine for HASS.

        • [email protected]
          1 point · 1 year ago

          Yea I’ve been running “core” in docker-compose and not the “supervised” or whatever that’s called.
          It’s been pretty flawless tbh.
          It’s running in docker-compose in a VM in proxmox.
          At first, it was mostly because I wanted to avoid their implementation of DNS, which was breaking my split-horizon DNS.

          Honestly, once you figure out docker-compose, it’s much easier to manage than the supervised add-on thing. Although the learning curve is different.
          Just the fact that your add-ons don’t need to go down when you upgrade hass makes this much easier.

          I could technically run non-hass related containers in that docker, but the other important stuff is already in lxc containers in proxmox.
          Not everything works in containers, so having the option to spin up a VM is neat.

          I’m also using PCI passthrough so my home theater/gaming VM has access to the GPU and I need a VM for that.

          Even if they only want to use k8s or dockers for now, having the option to create a VM is really convenient.

  • zzz
    5 points · 1 year ago

    My backup solution is rsync, so I really like docker-compose: restoring a backup on a new computer usually needs zero config besides installing docker-compose (which is usually one line in the terminal).
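    The pattern that makes this work: keep the compose file and all bind-mounted data under one directory per service, so a single rsync of that directory is the whole backup (names and paths here are illustrative):

    ```yaml
    # /opt/nextcloud/docker-compose.yml - hypothetical; `rsync -a /opt/nextcloud/`
    # elsewhere and you have everything needed to restore with `docker compose up -d`.
    services:
      nextcloud:
        image: nextcloud:apache
        volumes:
          - ./data:/var/www/html   # relative bind mount, travels with the directory
        ports:
          - "8081:80"
        restart: unless-stopped
    ```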

    • @[email protected]
      2 points · 1 year ago

      I am with you. So easy to use. I have DietPi running as a lightweight OS on my VMs for when I don’t want LXCs.

  • @[email protected]
    4 points · 1 year ago

    Why not both?

    Like many others here, I went with Proxmox as the base host. But most of my services are Docker containers, running in a “dockerVM” on top of Proxmox.

    Having Proxmox as the base is just so flexible, which is very handy for a homelab.

    • For instance, I set up a VM with WireGuard back when WireGuard had only just been merged into the mainline kernel, without affecting the other VMs
    • You can have separate VM for docker testing, and docker production
    • You can run multiple VMs for multiple Kubernetes hosts, to try it out and get your feet wet without affecting the “production” containers
    • If you get additional servers, you can just migrate those Kubernetes VMs
    • You can run a Windows VM should you need one, and BSD (and thus pfSense/OPNsense or TrueNAS)
    • You can run a full graphical environment if you want
    • Proxmox has easy setup for firewalls for each VM
    • I have a VM running a legacy bare metal system (from the same server now running proxmox) that I’ve been slowly de-commissioning piece by piece
    • @[email protected]
      1 point · 1 year ago

      What is your system backup solution like? Having it separated seems convenient for that, since you can just back up the VM storage somewhere, I’m guessing?

      • @[email protected]
        1 point · 1 year ago

        Proxmox Backup Server: incremental, de-duplicated image backups of the whole VM, with the possibility of individual file restore. It’s like magic.

        For the legacy bare metal system I have rsnapshots of the data folder (set it up ages ago, and never changed it)

        An nginx LXC container has a single static backup of the container, with the nginx config file stored in a git repo

      • mr47
      1 point · 1 year ago

      Not OP, but similar setup (Proxmox with Docker on a VM). The VM (plus a few LXCs) is backed up daily using the backup built into Proxmox, and those backups are mirrored to the cloud with rclone.