Re: Feature request: Add flag for assuming a new clean drive completely dirty when adding to a degraded raid5 array in order to increase the speed of the array rebuild

On 09/01/2022 14:21, Jaromír Cápík wrote:
In case of huge arrays (48TB in my case), the array rebuild takes a couple of
days with the current approach, even when the array is idle, and during that
time any of the remaining drives could fail, causing fatal data loss.

Does it make at least a bit of sense, or are my understanding and assumptions
wrong?

It does make sense, but have you read the code to see if it already does it?
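For scale, a quick back-of-the-envelope (the per-drive size and sustained speed below are illustrative assumptions, not numbers from your mail): a rebuild has to read every surviving member end to end while writing the new drive, so with 8TB members at a sustained 150MB/s the floor is roughly

    8e12 bytes / 1.5e8 bytes/s  ~=  53,000 s  ~=  15 hours

and throttling or competing I/O easily stretches that into days. On an otherwise idle array it's also worth checking md's rebuild throttle, which defaults quite low (values are KB/s per device):

    cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max
    echo 200000 > /proc/sys/dev/raid/speed_limit_min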

And if it doesn't, someone's going to have to write it, in which case there's no reason not to make that behaviour the default.

Bear in mind that rebuilding the array onto a new drive is completely different logic from doing an integrity check, so it will have needed its own code from the start; that's why I expect it already works the way you want, reconstructing every stripe wholesale rather than checking first.
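You can see from userspace that the two operations are distinct, because md exposes them as separate sync actions (the md0 device name below is just an example):

    # an integrity check is something you request explicitly:
    echo check > /sys/block/md0/md/sync_action

    # a rebuild onto a fresh drive shows up as "recover" instead,
    # and its progress is visible in /proc/mdstat:
    cat /sys/block/md0/md/sync_action
    cat /proc/mdstat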

I think you've got two things to do here. Firstly, raid or not, you should have backups! Raid is for high availability, not for keeping your data safe! And secondly, go raid-6, which gives you that bit of extra redundancy.
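If you do go raid-6, mdadm can reshape a running raid-5 array in place. A sketch, assuming a 9-device target, with placeholder device names and backup-file path (adjust all three for your setup):

    # add the extra disk, then convert raid5 -> raid6:
    mdadm /dev/md0 --add /dev/sdX1
    mdadm --grow /dev/md0 --level=6 --raid-devices=9 \
          --backup-file=/root/md0-reshape.bak

The reshape itself takes about as long as a rebuild, so do it while the array is still healthy.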

Cheers,
Wol


