It is faster, leaner, and translates well into Kubernetes. I also like podman Quadlets.
I’ve been using podman forever; I’ve only used Docker a couple of times.
But tell me about Quadlets! I’ve never heard of them before.
What if I told you that you could define containers with systemd units?
https://www.redhat.com/en/blog/quadlet-podman
Quadlets are systemd files that define containers, networks and storage. It is the same idea as docker compose, but with a daemonless model. Systemd starts your podman container workload just as it does for any service. You can use systemctl commands and everything.
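For a rootless setup that means the usual lifecycle commands just work; roughly like this (myapp is a placeholder for whatever .container file you’ve dropped in ~/.config/containers/systemd/):

$ systemctl --user daemon-reload          # regenerate units from the quadlet files
$ systemctl --user start myapp.service    # quadlet names the unit after the .container file
$ systemctl --user status myapp.service
$ journalctl --user -u myapp.service      # logs, same as any other service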
I love quadlets, here’s an example:
$ cat .config/containers/systemd/kavita.container
[Unit]
Description=Kavita manga server
After=mnt-files.mount

[Container]
ContainerName=kavita
Image=docker.io/jvmilazz0/kavita:latest
AutoUpdate=registry
Network=kavita.network
PublishPort=5000:5000
Environment=TZ=Etc/UTC
Volume=/mnt/files/books/comics:/comics:ro
Volume=/mnt/files/books/gnovels:/gnovels:ro
Volume=/mnt/files/books/manga:/manga:ro
Volume=${HOME}/kavita:/kavita/config:Z
HealthCmd=curl -fsS http://localhost:5000/api/health || exit 1

[Service]
Restart=always

[Install]
WantedBy=default.target
$ cat .config/containers/systemd/kavita.network
[Network]
NetworkName=kavita
Options=isolate=true
# library add uses Kavita site
If you’ve dealt with systemd service files this will look familiar, with the addition of the [Container] section.
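(Side note: if you’re hand-writing these, quadlet has a dry-run mode that prints the units it would generate, handy for catching typos - the generator’s path can vary by distro:)

$ /usr/libexec/podman/quadlet -dryrun -user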
AutoUpdate=registry
gives you automatic updates to ‘latest’ (or whatever tag you’ve set), and there are rollbacks too, so you just have to worry about the less-critical bugs in newer versions. Personally, I feel more secure with this setup, as this box is a VPS.
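Under the hood that’s podman auto-update doing the work; you can poke at it by hand too, roughly:

$ podman auto-update --dry-run    # show which containers would be updated
$ podman auto-update              # pull newer images and restart the units; rolls back by default if the new container fails
$ systemctl --user enable --now podman-auto-update.timer   # the stock timer podman ships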
Network=kavita.network

I put all my containers in different networks (with minimal privs, so many don’t have outgoing internet access), and my reverse proxy is also in all of those networks so it can do its thing.

I’ve been managing my containers with service files forever; I’ve just always built them by hand. The ones podman used to create were screwed up in so many ways.
These look better. I think the autoupdate is something I wouldn’t use; if I do something and something stops working, I know what happened. I really hate things that mysteriously stop working in the middle of the night.
But the network setting… Now that’s exciting. I’ve been working myself up to tighten stuff down like this, and this looks way easier.
autoupdate is something I wouldn’t use
Yup, I expect lots of people feel like that, maybe most (I’d be interested to see some stats). I value security over availability, but you can choose per-container, of course.
network
You can set
Internal=true
, which I use whenever possible, which means access is only to anything on the same network (for me that’s the service itself and Caddy) - no outgoing connections at all. Podman uses PASTA by default for rootless.
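As a sketch, such a network file stays tiny (the names here are placeholders):

$ cat .config/containers/systemd/myapp.network
[Network]
NetworkName=myapp
# internal network: no outbound access; only peers on this network (e.g. Caddy) can reach it
Internal=true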
I value security over availability

So many updates are not security related, though. The rare security update isn’t worth the frequent outage IMHO.
But you’re right: giving people that option is a good thing - as long as it’s an option.
You can set Internal=true, which I use whenever possible, which means access is only to anything on the same network (for me that’s the service itself and Caddy) - no outgoing connections at all. Podman uses PASTA by default for rootless.
This is very timely. I have a few VPSes which I’ve locked down to the best of my non-Ops-background ability: one gateway exposed to the internet, and the rest completely firewalled off and only accessible over private VPN. What I’ve recently been trying to figure out is how to lock my containers down so they only have access to the hosts and ports they need. E.g., caddy is mainly a reverse proxy, except for serving static content from an RO-mounted directory, but I’m at my networking knowledge limit on how to keep it from accessing local host ports. Same with the SMTP and IMAP services - SMTP is particularly challenging because I do want it to access the internet, but not local host ports.
It’s been driving me a little nutty. It looks like this would make all that a lot easier.
True, most updates I don’t actually care about. I haven’t had any updates cause problems yet, but I like that I could choose to not enable updates on anything with a bad history (or critical stuff where I don’t want to run the risk).
Any chance you could go into more depth on your reverse proxy config? By the sounds of it you’re doing exactly what I would like to do with my services. Which reverse proxy are you using? What does your config look like? I’ve been trying to get both nginx and caddy working in the last 2 weeks and I’m REALLY struggling to get subnets working. My ideal setup would be using Tailscale and being able to follow the scheme
service.Device.tailXXXX.ts.net
. I’m struggling to find the reverse proxy config and DNS entries on my local network to get that working. I’ve seen comments saying people have done this, but none of them have shared their configs.

I use Caddy (with the Cloudflare module to handle the ACME stuff) as just another container. My setup is more classic internet server stuff - it’s a VPS and all the services are internet-facing, so the DNS is via standard DNS records. Every service is on its own subdomain.
My Caddy config is pretty minimal:
$ cat caddy/Caddyfile
{
    # Global configuration
    acme_dns cloudflare myapikey
    email mycloudflareaccount
    debug
    servers {
        metrics
    }
}

manga.example.com {
    reverse_proxy kavita:5000
}

...more containers

# healthcheck target
:8080 {
    respond 200
}
$ cat .config/containers/systemd/caddy.container
[Unit]
Description=Caddy reverse proxy
After=local-fs.target

[Container]
ContainerName=caddy
Image=caddycustom
Network=kavita.network
...more networks
PublishPort=1080:80
PublishPort=1443:443
PublishPort=1443:443/udp
PublishPort=2019:2019
Volume=${HOME}/caddy/Caddyfile:/etc/caddy/Caddyfile:Z
Volume=${HOME}/caddy/data:/data:Z
Volume=${HOME}/caddy/config:/config:Z
Volume=${HOME}/caddy/httpdocs:/var/www/httpdocs:Z
HealthCmd=wget -q -t1 --spider --proxy off localhost:8080 || exit 1

[Service]
Restart=always
ExecReload=podman exec caddy /usr/bin/caddy reload -c /etc/caddy/Caddyfile

[Install]
WantedBy=multi-user.target default.target
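One nice consequence of that ExecReload= line: a Caddyfile change is just a normal reload (assuming the generated unit is named caddy.service):

$ systemctl --user reload caddy.service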
I have a dedicated podman user (fairly restricted, no sudo, etc) that just hosts podman (i.e. the service containers and Caddy). As it’s all rootless, I use firewalld to make caddy show up on ports <1024:
firewall-cmd --add-forward-port=port=80:proto=tcp:toport=8080
. I prefer the tiny performance hit to mucking around with the privileged ports, but for completeness you can do that with sysctl -w net.ipv4.ip_unprivileged_port_start=80.
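One caveat: --add-forward-port like that is a runtime rule and won’t survive a reboot; persisting it is the same command with --permanent plus a reload, e.g.:

firewall-cmd --permanent --add-forward-port=port=80:proto=tcp:toport=8080
firewall-cmd --reload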
I don’t specify subnets at all; I specify podman networks (one per service) and let podman handle the details.
Thanks so much! I’m only just about to make the switch to Podman, sounds like it’s going to make life a good bit simpler.
I’ve just discovered Distrobox, and it has immediately replaced my .devcontainers. The fact that it integrates into your system so well is awesome, especially since I am doing Vulkan stuff at the moment.
Haven’t really looked into shareability, though. If it’s as easy to define and share a distrobox setup as it is a docker .devcontainer, then it’s perfect.
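(For what it’s worth, distrobox does have a declarative path: distrobox assemble reads an ini manifest you can check into a repo. A rough sketch - the box name and package list here are made up:)

$ cat distrobox.ini
[vulkan-dev]
image=registry.fedoraproject.org/fedora:latest
additional_packages="vulkan-tools mesa-vulkan-drivers"

$ distrobox assemble create --file distrobox.ini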
I used to use distrobox more, but I got annoyed by software dumping stuff all over my home. Now I usually build containers and then use a directory mount.