Re: Feature request: Add flag for assuming a new clean drive completely dirty when adding to a degraded raid5 array in order to increase the speed of the array rebuild

Hello Roger.

>That is typically 100MB/sec per disk as it is reported, and that is a
>typical speed I have seen for a rebuild and/or grow.
>
>There are almost certainly algorithm sync points that constrain the
>speed less than full streaming speed of all disks.
>
>The algorithm may well be, read the stripe, process the stripe and
>write out the new stripe, and start over (in a linear manner)  I would
>expect that to be the easiest to keep track of, and that would roughly
>get your speed (costs a read to each old disk + a write to the new
>disk + bookkeeping writes + parity calc).     Setting up the code such
>that it overlaps the operations is going to complicate the code, and
>as such was likely not done.

Yeah, I'm pretty sure the current behavior is suboptimal simply because
it was the easier implementation. And ... surprisingly ... this feature
request is my clumsy attempt to convince someone amazing and clever to
change that, because ... we love Linux and wanna see it rock! Right? :D
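For the record, the serial loop you describe can be sketched like this
(illustrative Python only, nothing to do with the actual md driver code;
all names here are made up):

```python
# Illustrative sketch of the serial per-stripe rebuild loop described
# above -- NOT the actual md driver code; all names are made up.

def xor_chunks(chunks):
    """XOR equal-length byte chunks together (the RAID5 parity math)."""
    out = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            out[i] ^= b
    return bytes(out)

def rebuild(surviving_disks, new_disk, chunk_size):
    """Reconstruct new_disk chunk by chunk from the surviving disks.

    Strictly serial: each stripe is fully read, computed and written
    before the next one starts -- no overlap between the read of
    stripe N+1 and the write of stripe N, which is roughly why the
    observed speed tracks a single disk's streaming speed.
    """
    n_chunks = len(surviving_disks[0]) // chunk_size
    for s in range(n_chunks):
        lo, hi = s * chunk_size, (s + 1) * chunk_size
        stripe = [disk[lo:hi] for disk in surviving_disks]  # read the stripe
        new_disk[lo:hi] = xor_chunks(stripe)                # parity calc + write
```

Overlapping those phases (reading stripe N+1 while stripe N is still
being written) is exactly the complication you suspect was skipped.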


>And regardless of the client's only being able to run raid5, there are
>significant risks to running raid5.   If on the rebuild you find a bad
>block on one of the other disks then you have lost data, and that is
>very likely to happen (that exact failure was the first raid failure I
>saw 28+ years ago).

I'm aware of the risks ... but losing a file or two is still much better
than losing the whole array. The low sync speed means operating the array
in degraded mode for 3 days instead of 1, so making the rebuild faster
seems quite important and reasonable to me.
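To put rough numbers on that window (back-of-envelope only; the drive
size, sync speed and URE rate below are assumed vendor-typical figures,
not measurements from my setup):

```python
# Back-of-envelope numbers for the degraded window and the bad-block
# risk. The drive size, sync speed and URE rate below are assumed
# vendor-typical figures, not values from this thread.

def rebuild_hours(capacity_bytes, speed_bytes_per_s):
    """Time for one linear pass over the disk at a fixed sync speed."""
    return capacity_bytes / speed_bytes_per_s / 3600

def p_read_error(bytes_read, ure_per_bit=1e-14):
    """Chance of at least one unrecoverable read error over bytes_read
    bytes, assuming independent bit errors at the given rate."""
    return 1 - (1 - ure_per_bit) ** (bytes_read * 8)

four_tb = 4 * 10**12
print(f"{rebuild_hours(four_tb, 100 * 10**6):.1f} h")  # ~11 h per pass
# Rebuilding a 4-disk raid5 means reading three surviving 4 TB disks:
print(f"{p_read_error(3 * four_tb):.0%}")  # roughly a 60% chance
```

With larger drives, background load throttling the sync, and multiple
passes for a grow, stretching from one day to several is easy -- and
every extra hour degraded is more exposure to exactly the bad-block
failure you describe.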


>How often are you replacing/rebuilding the disks and why?

A few times a year, for different reasons. Usually it's a request for
higher capacity, where I need to replace all drives one by one and then
grow the array. Sometimes reallocated sectors appear in the SMART output,
and I never leave such drives in the array, considering them unreliable.
The --replace feature is nice, but often there's no room for one more
drive in the chassis, and going that way requires an external USB3 rack
and a bit of magic if the operation cannot be done offline.

So, I still hope someone will find enough courage one day to implement
the new optional sync strategy :)

BR, J.



