Hello,

I am going to upgrade my server. Since I will be able to fit more hard disks in it, I want to take the opportunity to give my data a little more protection against loss.

Currently I have 2 hard drives in ext4 with data on them, and I wanted to buy a third (all three the same capacity) and put them in RAID5, so that in the future I can add more hard drives and increase the capacity.

Due to economic constraints, right now I can only buy what would be the third disk, so it is impossible for me to back up the data I currently have.

The data itself is not valuable; if any file gets corrupted, I could download it again. However, there are enough terabytes (20) to make downloading everything madness.

My plan is to put DietPi (a trimmed-down Debian) on this server (a PC) and maybe build the RAID with mdadm. I have seen tutorials on how to do it (this one, for example: https://ruan.dev/blog/2022/06/29/create-a-raid5-array-with-mdadm-on-linux ).
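For context, tutorials like the one linked above boil down to something like this (a sketch only — `/dev/sdb`, `/dev/sdc`, `/dev/sdd` are placeholder device names, and note that creating the array destroys whatever is on the member disks):

```shell
# Create a 3-disk RAID5 array (placeholder device names; adjust for your system).
# WARNING: this wipes the member disks.
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd

# Watch the initial sync progress.
cat /proc/mdstat

# Put a filesystem on the array and persist its configuration.
sudo mkfs.ext4 /dev/md0
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
```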

The question is: is there any way to do this without having to format the hard drives that already contain data?

Thank you, and sorry for any mistakes I may make; English is not my mother tongue.

EDIT:

Thanks for your answers!! I have several paths to investigate.

  • neidu2@feddit.nl · 6 months ago

    Seconding this. For starters, when tempted to go for RAID5, go for RAID6 instead. I’ve had drives fail in RAID5, and then seen a second failure during the increased I/O associated with rebuilding onto the replacement drive.

    And yes, setting up RAID wipes the drives. Is the data private? If not, a friendly datahoarder might help you out with temporary storage.
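    For reference, the mdadm invocation for RAID6 is nearly identical to the RAID5 one (placeholder device names again; RAID6 needs at least four members, and two disks’ worth of capacity goes to parity):

    ```shell
    # RAID6 variant: minimum 4 member disks, survives any 2 simultaneous failures.
    # Placeholder device names — this also wipes the members.
    sudo mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
    ```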

    • BearOfaTime@lemm.ee · 6 months ago

      I run RAID5 on one device… BUT only because it replicates data that’s on 2 other local devices AND that data is backed up to cloud storage.

      And I still want it to be RAID 6.

      • neidu2@feddit.nl · 6 months ago

        Story time!

        In this one production cluster at work (1.2PB across four machines, 36 drives per machine) everything was Raid6, except ONE single volume on one of the machines that was incorrectly set up as Raid5. It wasn’t that worrisome, as the data was also stored with redundancy across the machines in the storage cluster itself (a nice feature of beegfs), but it annoyed the fuck out of me for the longest time.

        There was some other minor deferred maintenance as well which necessitated a complete wipe, but there was no real opportunity to do this and rebuild that particular RAID volume properly until last spring before the system was shipped off to Singapore to be mobilized for a survey. I planned on getting it done before the system was shipped, so I backed up what little remained after almost clearing it all out, nuked the cluster, disassembled the raid5, and then started setting up everything from scratch. Piece of cake, right?

        shit

        That’s when I learned how much time it actually takes to rebuild a volume of 12 disks, 10TB each. I let it run as long as I could before it had to be packed up. After half a year of slow shipping it finally arrived on the other side of the planet, so I booked my plane ticket and showed up a week before anyone else just so I could connect power and continue the reraiding before the rest of the crew showed up. Basically, pushing a few buttons, followed by a week of sitting at various cafes drinking beer. Once the reraid was done, reclustering took less than an hour, and restoring the folder structure backup was a few hours on top of that. Not the worst work trip I’ve had, apart from some unexpected and unrelated hardware failures, but that’s a story for another day.

        Fun fact: While preparing the system for shipment here in Europe, I lost one of my Jabra bluetooth buds. I searched fucking everywhere for hours, but gave up on finding it. I found it half a year later in Singapore, on top of the server rack, surprised it hadn’t even rolled down. It really speaks to how little these huge container ships roll.

        • BearOfaTime@lemm.ee · 6 months ago

          Haha, everything about that story is awesome, right down to the lost and found Jabra ear bud (does Jabra exist any more? At one time their ear pieces were the best).

          Yes, re-silvering takes fucking forever. Even with my little setups (a few TB), it can take a day or two to rebuild one drive in an array. One.

          I can only imagine how long a PB array would take.

          • neidu2@feddit.nl · 6 months ago

            Jabra still exists yes. I’m still using Jabra, although I’m using a pair that I bought after I thought that one earbud was gone forever. I still use the older ones, which was Jabra Elite 4, but only with my PC, as its battery took a hit after those 6 months at sea. I currently main Jabra Active 7 or something like that, and I quite like them. I noticed that the cover doesn’t stay very attached after a few proper cleans, but nothing a drop of glue doesn’t fix. What I really like about the ones I currently use is that they’re supposedly built to withstand sweat while training. I don’t work out, but it would seem that those who do sweat A LOT, as I can wear mine while showering without any issues.

            As for resilvering, the RAIDs are only a small fraction each of the complete storage cluster. I don’t remember their exact sizes, but each raid volume is 12 drives of 10TB each. Each machine has three of these volumes. Four machines total contributes all of its raid volumes to the storage cluster for 1.2PB of redundant storage (although I’m tempted to drop the beegfs redundancy, as we could use the extra space, and it’s usually fairly hassle free to swap in a new server and move the drives over).

            EDIT: I just realized that I have this Jabra conference call speaker attached to the laptop on which I’m currently typing. I mostly use it for discord while playing project zomboid with my friends, though. I run audio output elsewhere, as the jabra is mono only.

      • LoboAureo@lemm.ee (OP) · 6 months ago

        If I go to RAID5 I lose one disk of space; to go to RAID6 I have to lose 2 disks.

        It’s a personal project, and the motherboard has only 6 SATA ports, one of them used by the OS disk, and I want to be able to upgrade it in the future…
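        The capacity trade-off works out like this (a quick sketch; the drive count and size here are just illustrative, assuming equal-size disks):

        ```shell
        #!/bin/sh
        # Usable capacity: RAID5 loses 1 disk to parity, RAID6 loses 2.
        drives=5          # e.g. 5 data disks on a 6-port board (1 port for the OS disk)
        size_tb=10        # capacity per disk, in TB
        raid5=$(( (drives - 1) * size_tb ))
        raid6=$(( (drives - 2) * size_tb ))
        echo "RAID5: ${raid5} TB usable, RAID6: ${raid6} TB usable"
        ```

        With only three disks to start, RAID6 would leave just one disk of usable space, which is why it only starts to pay off as the array grows.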

        • malaknight · 6 months ago

          Not to speak for the person above you, but I believe they are saying they have 1 computer with a RAID5 array that replicates to two other local devices, and then at least 1 of those 3 devices backs up to a cloud provider.

          If that is true, then they are doing it correctly. Following a 3-2-1 backup strategy is highly recommended: three copies of your data, on two different types of storage, with one copy off-site (e.g. in the cloud) for redundancy.

            • BearOfaTime@lemm.ee · 6 months ago

              Lol, sorry, I really tried to make it clear what I was doing, honest, I did! 😄

              Yes, I have 3 local devices that replicate to each other; one is RAID5 (well, 2 are, but… not for long). And one of them also backs up to cloud storage.

              Not ideal, because 3 devices are colocated, but it’s what I can do right now. I’m working on a backup solution to include friends and family locations (looking to replicate what Crashplan used to provide in their “backup to friends” solution).