Re: A disk failure during the initial resync after create does not always suspend the resync to start the recovery

On Tue, 26 Jun 2012 21:14:54 +0000 Ralph Berrett <ralph.berrett@xxxxxxxx>
wrote:

> A disk failure during the initial resync after create does not always suspend the resync to start the recovery
>  
> Steps:
> 1. Create multiple RAID6 arrays (in my case 8 arrays; this is a large storage system).
> 2. Create 2 spares (with one spare in md1 and the other in md5).
> 3. Set two different "spare-group" values in /etc/mdadm/mdadm.conf so that md1-md4 and md5-md8 each share one of the two spares (a configuration sketch follows the quoted report below).
> 4. While the resync is still in progress, fail a disk (physically pull it) in one of the arrays that does not hold a spare.
> 5. The spare drive is moved to the affected array by the running mdadm --monitor daemon, but the "recovery" does not always start. Most of the time it waits for the "resync" to complete before starting the "recovery", but not always.
>  
> Which is the expected behavior: should it stop the resync to do the recovery, or not?  If not, since these are fairly large arrays, the "resync" could take a while to finish before the "recovery" even starts, leaving the system in a degraded state.  From my experiments, the "resync" is left running more often than it is stopped.
>  
> The only workaround I have found is to run "echo "idle" > /sys/block/md2/md/sync_action", which suspends the resync so that the recovery can start.  This is intended as an embedded system, so that is not an optimal workaround.
>  
> Emdebian 6.0.4
> Kernel:    2.6.32
> mdadm:   v3.1.4 - 31st August 2010
>  
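For reference, the spare-group layout described in steps 2 and 3 above would
look roughly like this in /etc/mdadm/mdadm.conf (a sketch only: the UUIDs are
placeholders and the group names are illustrative, not from the report):

    # md1-md4 share one spare, md5-md8 share the other
    ARRAY /dev/md1 UUID=<uuid-of-md1> spare-group=groupA
    ARRAY /dev/md2 UUID=<uuid-of-md2> spare-group=groupA
    ARRAY /dev/md3 UUID=<uuid-of-md3> spare-group=groupA
    ARRAY /dev/md4 UUID=<uuid-of-md4> spare-group=groupA
    ARRAY /dev/md5 UUID=<uuid-of-md5> spare-group=groupB
    ARRAY /dev/md6 UUID=<uuid-of-md6> spare-group=groupB
    ARRAY /dev/md7 UUID=<uuid-of-md7> spare-group=groupB
    ARRAY /dev/md8 UUID=<uuid-of-md8> spare-group=groupB

With matching spare-group values, the running "mdadm --monitor --scan
--daemonise" instance is what moves a spare from md1 or md5 to whichever
array in the same group loses a disk, which is the move described in step 5.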

I've never given a lot of thought to this scenario, so the way it works is
simply how the different bits fall together, not anything deliberate.

If a device failed in an array which did not currently have a spare attached,
I would expect the resync to restart; when the spare gets moved over, the
resync would continue, and when it completes a recovery would start.

If a device failed in an array which already had a spare attached, I would
have to check the code to see what would happen, but I can certainly imagine
that a recovery of that spare would start, and it may well resync the other
parity block at the same time.

It should be deterministic though - I can't see much room for any random
element.

As the initial sync of RAID6 isn't really needed anyway, it is clear that it
should be interrupted and the recovery performed instead.
However, if a sync is happening after an unclean restart when a device fails,
it isn't clear to me what the preferred option is.
Allowing the sync to complete means your data will be protected from another
failure sooner.
Allowing the recovery to start immediately means that you will get all your
bandwidth to the array back sooner, and you'll be protected from double
failure sooner.

Maybe if the sync is less than half way, interrupt it. If more than half way,
abort it?
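
For what it's worth, that heuristic is easy enough to prototype from
userspace against sync_completed before deciding whether it belongs in md
itself. A sketch, with the array name as a stand-in:

    #!/bin/sh
    # Sketch of the "less than half way" rule above, driven from userspace.
    # /sys/block/$MD/md/sync_completed reads "done / total" (in sectors),
    # or "none" when no resync/recovery is running.
    MD=md2                      # stand-in array name
    SC=$(cat "/sys/block/$MD/md/sync_completed")
    if [ "$SC" != "none" ]; then
        cur=${SC%% /*}          # sectors completed so far
        max=${SC##*/ }          # total sectors to sync
        if [ "$cur" -lt $((max / 2)) ]; then
            # under half way: interrupt the resync so the recovery can start
            echo idle > "/sys/block/$MD/md/sync_action"
        fi
    fi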

The way I would 'fix' this would be to modify mdadm to write 'idle' to
'sync_action' at an appropriate time (after moving the spare over).
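
Until mdadm does that itself, roughly the same effect can probably be had
from the monitor's event program (PROGRAM in mdadm.conf, or --program on the
command line). A minimal sketch, assuming the MoveSpare event passes the
destination array as its second argument:

    #!/bin/sh
    # Sketch of an event handler for "mdadm --monitor --program ...".
    # mdadm invokes it as: <event> <md device> [<related device>].
    # On MoveSpare, write 'idle' to the destination array's sync_action so
    # the pending resync is interrupted and recovery of the moved spare can
    # start straight away (the same thing as the manual echo above).
    EVENT="$1"
    MDDEV="$2"                  # assumed to be the array the spare moved to

    if [ "$EVENT" = "MoveSpare" ]; then
        echo idle > "/sys/block/$(basename "$MDDEV")/md/sync_action"
    fi

Pointing PROGRAM at a script like that (path of your choosing) gets it run by
the daemon for every event it reports, so it only needs to act on the one it
cares about.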

NeilBrown


