Re: Strange RAID-5 rebuild problem

On Mar 2, 2008, at 13:29, Robin Hill wrote:

That depends on the behaviour of the RAID system (and I've not dug through the code to check on this). Realistically this situation is no different to writing to the array while it's rebuilding - in either case the safe thing to do is to read from the (n-1) known good disks and write to all (n) disks in the array (i.e. never just do any fast XORing with the parity block). Any data written before the offset will still be okay, and any data written after the offset will get recalculated (which wastes a bit of time) but will still be valid.

Yes, you are absolutely right - it was too early in the morning for me; I should have given it a bit more thought before sending out a "whoa, doesn't this corrupt your data" message. Sorry for that. Basically, a non-syncing degraded array is just a very sloooooooow syncing array. :)
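To make Robin's point concrete, here is a rough sketch of the two parity-update strategies - full reconstruct-write versus the "fast XOR" read-modify-write. The helper names are made up for illustration; this is not the actual md/raid5 code:

/* Minimal sketch of the two parity-update strategies.  The helper
 * names are made up; this is not the actual md/raid5 code. */
#include <stddef.h>
#include <string.h>

#define CHUNK 4096

/* Reconstruct-write: recompute parity from all the data chunks in
 * the stripe.  Safe even when the on-disk parity may be stale
 * (degraded / still-rebuilding array), because the old parity block
 * is never read. */
static void parity_reconstruct_write(unsigned char parity[CHUNK],
                                     unsigned char *data[], size_t ndata)
{
        memset(parity, 0, CHUNK);
        for (size_t d = 0; d < ndata; d++)
                for (size_t i = 0; i < CHUNK; i++)
                        parity[i] ^= data[d][i];
}

/* Read-modify-write: the "fast XOR" update of the existing parity
 * block.  Only valid when the old parity is known to be consistent -
 * exactly what cannot be assumed past the rebuild offset. */
static void parity_rmw_update(unsigned char parity[CHUNK],
                              const unsigned char old_data[CHUNK],
                              const unsigned char new_data[CHUNK])
{
        for (size_t i = 0; i < CHUNK; i++)
                parity[i] ^= old_data[i] ^ new_data[i];
}

Past the rebuild offset the on-disk parity block may be stale, so only the first variant is guaranteed to produce valid parity there.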

This still does not explain why the automatic resync is not triggered on the first write if start_ro is set to 1, though. I had a quick look at the code, but it will take some more time to find my way around.

Fact is, if start_ro = 1 and the RAID is still in (auto-read-only), sending mdadm -w /dev/... makes it writeable and triggers the resync. Just writing to the array also sets it writeable, but does not trigger the resync. Of course, mdadm -w no longer works at that point either, since the array is already busy and in write mode. So switching the array to writeable via -w and via a plain write seem to be handled differently.
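For reference, here is a little test program for poking at the two paths. One assumption up front: I believe mdadm -w boils down to the RESTART_ARRAY_RW ioctl, but I have not traced mdadm to confirm this, and /dev/md0 is just an example device:

/* Sketch for poking at the two paths.  Assumption (not verified in
 * the mdadm source): mdadm -w issues the RESTART_ARRAY_RW ioctl.
 * /dev/md0 is just an example device. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/raid/md_u.h>

int main(int argc, char **argv)
{
        int fd = open("/dev/md0", O_RDWR);
        if (fd < 0) {
                perror("open");
                return 1;
        }

        if (argc > 1 && strcmp(argv[1], "ioctl") == 0) {
                /* Path 1: the explicit switch, like mdadm -w.  As
                 * observed, this clears auto-read-only and does
                 * trigger the resync. */
                if (ioctl(fd, RESTART_ARRAY_RW) < 0)
                        perror("RESTART_ARRAY_RW");
        } else {
                /* Path 2: an ordinary write.  As observed, this also
                 * makes the array writeable, but no resync starts. */
                char buf[4096] = { 0 };
                if (write(fd, buf, sizeof(buf)) < 0)
                        perror("write");
                fsync(fd);      /* make sure the write reaches md */
        }

        close(fd);
        return 0;
}

Watching /proc/mdstat after each variant should show the difference.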


Kind regards,
Michael
