Re: Strange / inconsistent behavior with mdadm -I -R

On Thu, 14 Mar 2013 21:03:28 +0100 Martin Wilck <mwilck@xxxxxxxx> wrote:

> Hello Neil,
> 
> for my DDF/RAID10 work, I have been trying to figure out how mdadm -I -R
> is supposed to behave, and I have found strangeness I'd like to clarify,
> lest I make a mistake in my DDF/RAID10 code.
> 
> My test case is incremental assembly of a clean array running mdadm -I
> -R by hand for each array device in turn.
> 
> 1) native md and containers behave differently for RAID 1
> 
> Both native and container RAID 1 are started in auto-read-only mode when
> the 1st disk is added. When the 2nd disk is added, the native md
> switches to "active" and starts a recovery which finishes immediately.
> Container arrays (tested: DDF), on the other hand, do not switch to
> "active" until a write attempt is made on the array. The problem is in
> the native case: after the switch to "active", no further disks can be
> added ("can only add $DISK as a spare").
> 
> IMO the container behavior makes more sense and matches the man page
> better than the native behavior. Do you agree? Would it be hard to fix that?

Without -R, the array should remain inactive until all expected devices have
been found.  Then it should switch:
  to 'active' if the array is known to this host - i.e. listed
       in /etc/mdadm.conf or has the right hostname in the metadata
  to 'read-auto' if the array is not known to this host.
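
By "listed in /etc/mdadm.conf" I mean an ARRAY line of roughly this shape
(the device name, UUID and name= value here are just placeholders):

  ARRAY /dev/md0 metadata=1.2 name=thishost:0 UUID=aaaaaaaa:bbbbbbbb:cccccccc:dddddddd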

With -R, it only stays inactive until we have the minimum devices needed for
a functioning array.  Then it will switch to 'read-auto' or, if the array is
known and all expected devices are present, it will switch to 'active'.

I think this is correct behaviour.  However, I'm not quite sure from your
description whether you are saying that it doesn't behave like this, or
that it does but should behave differently.

If you could provide a sequence of "mdadm" commands that produces an
outcome different from what you would expect, that would reduce the chance
that I get confused.
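
Something of this shape is what I have in mind (the device names are
placeholders, and the comments just record the behaviour you describe for
a clean 3-device RAID1 that was stopped beforehand):

  mdadm -I -R /dev/sdb1     # array starts degraded, auto-read-only
  cat /proc/mdstat
  mdadm -I -R /dev/sdc1     # you report it switches to "active" and runs a recovery
  mdadm -I -R /dev/sdd1     # you report this is refused: "can only add ... as a spare"
  cat /proc/mdstat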

> 
> 2) RAID1 skips recovery for clean arrays, RAID10 does not
> 
> Native RAID 10 behaves similarly to RAID1 as described above. When the
> array can be started, it does so, in auto-read-only mode. When the next
> disk is added after that, recovery starts, the array switches to
> "active", and further disks can't be added the "simple way" any more.
> There's one important difference: in the RAID 10 case, the recovery
> doesn't finish immediately. Rather, md does a full recovery of the added
> disk although it was clean. This is wrong; I have come up with a patch
> for this which I will send in a follow-up email.
> 

I agree with your assessment here and with your patch, thanks.
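
For reference, a reproduction along these lines (the device names are
placeholders) should show the unnecessary full recovery on the last disk:

  # create a clean 4-disk RAID10, let the initial sync finish, then stop it
  mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]1
  mdadm --wait /dev/md0
  mdadm --stop /dev/md0
  # re-add the members one at a time
  mdadm -I -R /dev/sdb1
  mdadm -I -R /dev/sdc1
  mdadm -I -R /dev/sdd1     # array starts (degraded, auto-read-only) once enough members are present
  mdadm -I -R /dev/sde1     # full recovery of this clean disk starts here
  cat /proc/mdstat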

NeilBrown


