Re: Parity distribution when adding disks to md-raid6

On 08/01/2023 20:13, boscabeag wrote:
I'm replacing the drives in my home NAS and can't fully populate it all at once. I can start with 4 drives, and create a raid 6 array. When I add drives #5 and #6, will the parity chunks remain on the first four devices or will the resync process distribute them over the entire array, just like it would if I created the array with all devices? Is there anything I should do (with mdadm) to an array after adding devices - that is, just at the md level, not the fs level?

Does that mean your NAS takes six drives in total, and you can only replace a couple at a time?

I think you need to give us rather more detail if you want better advice, but I'm *GUESSING* you're trying to move from a non-raid setup. I'm trying to think of a way that will avoid hammering the disks, but at the end of the day if they're newish then it really doesn't matter.

Okay. I'll assume you have two drives, you want to merge them into a raid, and you've bought four more drives to make a big raid and double your storage.

First things first. Your new four-drive raid-6 will be SMALLER than your two original drives combined - a four-drive raid-6 gives you two drives' worth of capacity, minus the md metadata. If the old drives aren't full, that won't matter, as the difference isn't much, but you need to be aware of it.

And if you're going from two non-raided disks, I'd be inclined to go via raid 5. It'll avoid hammering the disks quite so much, and you're not losing redundancy you didn't have ... just try and make sure you have a backup (which you should have, anyway).

Okay. So we create your 4-disk raid - 5 or 6. Once that's done, what I'd do is take whichever disk is fuller, and simply dd it onto the raid device. It might be a lot of data, but it's a straightforward sequential stream and shouldn't hammer the system too much. Once complete, the raid device will be a copy of the disk partition.
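Off the top of my head, that'd be something like this. The device names are pure guesses (four new drives as sdc-sdf, fuller old disk as sda) - check yours with lsblk first:

  mdadm --create /dev/md0 --level=5 --raid-devices=4 \
        /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
  cat /proc/mdstat                       # watch the initial sync
  dd if=/dev/sda1 of=/dev/md0 bs=1M status=progress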

Now grow the filesystem, and just copy the second disk across. Hopefully, you've got a simple clean setup (unlike mine, with hard links everywhere...)
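Assuming ext4 on the old disk (other filesystems have their own resize tools), the grow-and-copy looks roughly like this - /dev/sdb1 for the second old disk is again a guess, and rsync's -H is worth having if you do have hard links:

  e2fsck -f /dev/md0                     # check before an offline resize
  resize2fs /dev/md0                     # grow the fs to fill the array
  mkdir -p /mnt/raid /mnt/old
  mount /dev/md0 /mnt/raid
  mount -o ro /dev/sdb1 /mnt/old
  rsync -aH /mnt/old/ /mnt/raid/disk2/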

Now I guess you're adding the existing disks into the array. Just add the disks and reshape the array in one hit, upgrading from raid 5 to 6 at the same time if appropriate. The reshape will spread the data and parity across all the drives - that's what raid 5 or 6 is - so that's the answer to that question of yours: the parity does not stay on the first four devices. The reason I suggested starting with raid 5 is that the first parity is a simple XOR and cheap to calculate, while the second is more complicated, so you've deferred that cost until this point.
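Something like this, with the usual caveat that the device names are guesses. Wipe the old disks' filesystem signatures first so nothing gets confused, and put the backup file on a device that isn't part of the array - it's what lets the reshape survive a crash:

  wipefs -a /dev/sda1 /dev/sdb1
  mdadm --add /dev/md0 /dev/sda1 /dev/sdb1
  mdadm --grow /dev/md0 --level=6 --raid-devices=6 \
        --backup-file=/root/md0-reshape.backup
  cat /proc/mdstat                       # the reshape takes a while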

Lastly, once you've grown the array to 4 data disks, the filesystem is still its old size even though the array is now bigger. Check that the array is using all the available space, then grow the filesystem (assuming you want to).
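In mdadm terms that's probably just:

  mdadm --grow /dev/md0 --size=max       # may be a no-op if md already uses the full devices
  resize2fs /dev/md0                     # ext4 can be grown while mounted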

Personally, if I was doing this, I'd have stuck lvm on the raid, and then kept the two disks as separate logical volumes. That would mean I'd have had to go the raid-5 route, as there just wouldn't be enough space on the raid otherwise. And then you've got roughly half your new disk space spare to grow your volumes, take snapshots, whatever - all the stuff lvm makes possible.
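For the record, that lvm setup would look something along these lines - the volume group and lv names are invented, and the sizes depend on your old disks:

  pvcreate /dev/md0
  vgcreate nas /dev/md0
  lvcreate -L 4T -n disk1 nas            # one lv per old disk, sized to taste
  lvcreate -L 4T -n disk2 nas
  mkfs.ext4 /dev/nas/disk1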

Hopefully this has given you a bunch of ideas. And made it clear why we need more detail if you want good advice - pretty much all of this is guesswork as to what you're trying to do ...

If you haven't found this already ...
https://raid.wiki.kernel.org/index.php/Linux_Raid
and especially
https://raid.wiki.kernel.org/index.php/Converting_an_existing_system

Cheers,
Wol


