Hey,

I am planning to implement authenticated boot, inspired by Pid Eins’ blog. I’ll be using pam_mount for /home/user, and I need to verify the integrity of all partitions.

I have been using luks+ext4 till now. I am hesitant to switch to zfs/btrfs, afraid I might fuck up. A while back I accidentally purged ‘/’ while trying out Timeshift, which was my fault.

Should I use zfs/btrfs for /home/user? As for root, I’m considering luks+(zfs/btrfs) so that it can be restored to a blank state.
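
For /home/user, here is a rough sketch of what I have in mind, assuming a LUKS2 volume created with cryptsetup and then unlocked/mounted by pam_mount at login (the device and UUID below are placeholders):

    # One-time setup of the encrypted home volume (device is a placeholder)
    cryptsetup luksFormat --type luks2 /dev/sdXN
    cryptsetup open /dev/sdXN home_user
    mkfs.ext4 /dev/mapper/home_user    # or zfs/btrfs, which is the question here
    cryptsetup close home_user

    # /etc/security/pam_mount.conf.xml -- volume entry so pam_mount unlocks and
    # mounts the home with the login password (UUID is a placeholder):
    # <volume user="user" fstype="crypt"
    #         path="/dev/disk/by-uuid/XXXX-XXXX"
    #         mountpoint="/home/user" />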

  • @[email protected]
    6 · 5 months ago

    hesitant to switch to zfs/btrfs, afraid I might fuck up.

    Is your backup strategy not working for you, or is it that repeating the setup would be too much work?

  • @[email protected]
    5 · 5 months ago

    I’ve been using luks on btrfs for a couple years now with little issue. I’m not using the RAID features of BTRFS though. I’m using it for subvolumes and snapshots.

    I personally like Timeshift as my snapshot utility simply because I kinda grok both its GUI and CLI interfaces. It’s saved my bacon a few times over. I like rolling release-type distros, so it handles the occasional bad update gracefully. I’ve heard folks say good things about Snapper, though.
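
    If it helps, the subvolume/snapshot side is only a couple of commands (the mount point and names here are just examples):

        # Create subvolumes that can be snapshotted independently
        btrfs subvolume create /mnt/@
        btrfs subvolume create /mnt/@home
        btrfs subvolume create /mnt/@snapshots

        # Take a read-only snapshot of the root subvolume
        btrfs subvolume snapshot -r /mnt/@ /mnt/@snapshots/root-2023-12-16

        # List what exists
        btrfs subvolume list /mnt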

    • projectmoon
      1 · 5 months ago

      Do you use timeshift to back up data, or only system configuration?

      • @[email protected]
        2 · 5 months ago

        System config and system data are in my root subvolume; my home directory, dotfiles, and some data that I want accessible at SSD speed are in my home subvolume. All of this gets Timeshift backups/snapshots. The rest of my data is on spinning-platter SATA drives and is backed up regularly by a different method (a weekly rsync job that copies to a cold backup drive).
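
        The rsync job itself is nothing fancy, roughly this as a weekly cron entry (the paths and mount points here are examples, not my real ones):

            # /etc/cron.d/cold-backup -- weekly copy to the cold backup drive
            # m h dom mon dow  user  command
            0 3 * * 0  root  rsync -aHAX --delete /mnt/data/ /mnt/coldbackup/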

    • unhinge (OP)
      1 · 5 months ago

      I won’t be using RAID features as of now, and Timeshift isn’t an issue for me. Just an example of my fuckup 😅

      I’ve been using luks on btrfs for a couple years now with little issue

      What was the issue?

      • @[email protected]
        2 · 5 months ago

        The only issues I’ve had are a) the learning curve of BTRFS and its associated utilities, and b) difficulty telling snapshots apart. I learned REAL DAMN QUICK to give those guys descriptive comments like ‘snapshot before 2023-12-16 update’.
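
        From the CLI that is just the comments flag, something like:

            # On-demand snapshot with a descriptive comment before an update
            sudo timeshift --create --comments "snapshot before 2023-12-16 update"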

  • I have been using btrfs for years, and love it. I chose it over zfs mainly because I found the tooling easier and more straightforward, and the concepts less complex. It’s been a long time, but I also believe btrfs was in mainline and zfs wasn’t, and having reliable access to rescue tools was important - any complexity like having to build my own rescue disks was highly undesirable. I also vaguely remember zfs needing regular maintenance back in the day, which would have influenced my decision. I’ve also liked that btrfs tools have smart defaults, such as detecting SSDs and setting sensible options automatically. I’m not a sysadmin, and have no interest in being one, so I value features like these.

    Anyway, I’ve had ext3 and ext4 corruption issues several times over the years, but have had no issues with any btrfs filesystems. I’ve used it on platters, SSDs, SD cards, USB sticks – except for vfat and iso9660 on specialty devices, I can’t say I’ve chosen anything other than btrfs for over a decade, and I’ve converted a few ext4 partitions to btrfs in fury after corruptions, repairs, and restores.

    I’m sorry I can’t compare btrfs to zfs; by now, zfs has probably fixed the tooling warts and licensing issues that I remember from years ago. Lots of people like it, so both are good choices.

  • @[email protected]
    2 · 5 months ago (edited)

    My experience with btrfs is quite old now, but I remember being plagued with ENOSPC errors that required a lot of balancing to correct.
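
    For reference, the balancing was along these lines (the usage thresholds are just what I would typically reach for):

        # Repack mostly-empty data/metadata chunks to free allocatable space
        btrfs balance start -dusage=50 -musage=50 /

        # Check how much space is allocated vs actually used
        btrfs filesystem usage /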

    I have been running zfs for a decade or so now on a 6-disk array, and the only issue I have had was the pool sometimes not being imported on boot, but that seems fixed now. I recently replaced 2 disks in that array and the whole replace/rebuild process went quite well. I felt confident there would be no uncorrectable read errors during the rebuild because the monthly scrub had recently run. Overall, I’m quite impressed with zfs.
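
    The replace/rebuild workflow was roughly this (the pool and device names are examples):

        # Scrub the pool (mine runs monthly from a timer)
        zpool scrub tank

        # Swap out a failing disk and watch the resilver progress
        zpool replace tank /dev/disk/by-id/ata-OLD_DISK /dev/disk/by-id/ata-NEW_DISK
        zpool status -v tank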

    All that said, I would never run a root filesystem on an out-of-tree kernel module, so I’m still using xfs on /.

  • @[email protected]
    1 · 5 months ago

    Not quite what you’re asking for, but I’ve been using bcachefs in production for nearly 18 months now on a ~120 TB pool. The tooling is great and incredibly simple to use.
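
    To give a sense of the tooling, creating and mounting a multi-device pool is about this simple (the device names and replica count are examples):

        # Format several devices into one filesystem with two data replicas
        bcachefs format --replicas=2 /dev/sdb /dev/sdc /dev/sdd

        # Mount the pool (member devices joined with ':')
        mount -t bcachefs /dev/sdb:/dev/sdc:/dev/sdd /mnt/pool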

  • @[email protected]
    1 · 5 months ago

    I have had no hiccups at all with an openSUSE MicroOS server where I host game servers and a couple of other self-hosted applications. Btrfs is nice for snapshots and is the default for that distribution. I get daily updates and have not once had to think about my server. I use it to share some of my files and reverse-proxy some web hosting, run everything in containers, and let Watchtower update each container. It’s been working a treat for years.
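
    The Watchtower part is basically the stock one-liner (this assumes the Docker-compatible socket; adjust if you are on plain podman):

        # Run watchtower so running containers are updated automatically
        docker run -d --name watchtower \
          -v /var/run/docker.sock:/var/run/docker.sock \
          containrrr/watchtower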