Re: question: no-bitmap RAID1 with off-site drive

On Sun, 18 Nov 2012 19:11:56 -0500 starlight.2012q4@xxxxxxxxxxx wrote:

> Started playing with the third drive,
> and my invented approach may not
> be correct.  It may be that, as
> recommended, using --grow to eliminate
> the off-site "removed" drive and put
> back the rotate-in drive is the right
> approach.
> 
> It looks like MD re-initializes the
> superblock (different UUID) when
> something like
> 
>   mdadm --grow --add --raid-devices=3 /dev/md3 /dev/sde
> 
> is run, even if /dev/sde has a previous
> superblock from the same array.  However,
> I only tested it with a drive image that
> had not been fully synced and that was
> from the same (not a different) array.
> Running
> 
>    mdadm --re-add /dev/md3 /dev/sde
> 
> results in 'mdadm' refusing the
> drive that had been from the
> same array (though not fully synced).
> 
> Appears then that 'mdadm' may not
> check the superblock to see if the
> drive came from a different array
> and should not be overwritten, and
> instead prefers to just zap it.

Correct.  When you ask mdadm to --add a device to an array, that is what it
will do.  If the metadata looks particularly good it might do a re-add for
you, but if not it will just erase it and write new metadata.

> 
> Can anyone confirm or deny the
> possibility?  I can manually run
> an --examine and check the array
> name as a precaution when rotating
> drives if 'mdadm' doesn't perform
> the check.  Want to avoid inserting
> an off-site drive into the wrong array.
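> 
> E.g., something like
> 
>   mdadm --examine /dev/sde | grep -i uuid
> 
> (the exact UUID field name varies by
> metadata version).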
> 

We might be able to make "mdadm -I" do what you want ... find out which array
it "should" be a member of and auto-add it.
But that will currently fail for an out-of-date member with no bitmap.

If you applied
http://git.neil.brown.name/?p=mdadm.git;a=commitdiff;h=75a410f6226d1e3ef441bbb8cd4f198d5de5cf5b

and put
  policy action=force-spare
in mdadm.conf, it might work.
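
E.g. after hot-plugging the rotate-in drive, running

  mdadm --incremental /dev/sde

(the long form of "mdadm -I") should then find the array the drive
belongs to and add it.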

NeilBrown


> 
> 
> 
> At 12:26 PM 11/12/2012 -0500, starlight.2012q4@xxxxxxxxxxx wrote:
>After some pondering, think I've figured it out.
> >
> >Best way to go is to set it up with
> >
> >  --create --level=1 --raid-devices=2
> >
> >for the initial pair of drives, then
> >
> >  --fail --remove
> >
> >the rotate-out drive, then
> >
> >  --add
> >
> >the alternate drive.  Now there will be
> >three drive slots with one "removed" and two
> >"active".
> >
> >To rotate a drive off-site
> >
> >   --fail --remove
> >
> >go to the off-site location, swap the drives, and on return
> >
> >   --re-add
> >
> >the rotate-in drive.
> >
> >This way the 'mdadm' UUID labels will stay on the
> >drives and 'mdadm' will warn against mistakes
> >such as trying to use the wrong drive for
> >a mirror pair.  Will always have one "removed"
> >drive.
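> >
> >I.e. one rotation cycle would be
> >something like (device names illustrative)
> >
> >  mdadm /dev/md3 --fail /dev/sdf --remove /dev/sdf
> >  (swap drives at the off-site location)
> >  mdadm /dev/md3 --re-add /dev/sde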
> >
> >The
> >
> >   --grow --raid-devices=1 --force
> >
> >and
> >
> >   --grow --raid-devices=2 --add
> >
> >would be used only if a drive fails and needs to
> >be replaced.
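> >
> >i.e. something like
> >
> >  mdadm --grow /dev/md3 --raid-devices=1 --force
> >  mdadm --grow /dev/md3 --raid-devices=2 --add /dev/sdX
> >
> >with /dev/sdX standing in for the
> >replacement drive.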
> >
> >If anyone disagrees please advise.
> 
