Re: mdadm: failed devices become spares!

Pierre Vignéras wrote:
> And the next question is: how to activate those 2 spare drives? I was 
> expecting mdadm to use them automagically.
>   

If you want to experiment with different ways of getting the data back,
but without risking writing anything to the drives, you could do this:

1. Use dmsetup to create copy-on-write "virtual drives" which "see
through" to the content of your real drives, but can't write anything
at all to them (see the snapshot sketch after this list).

2. Use mdadm --create --assume-clean ... /dev/mapper/cow_drive_1 ...
(sketched below) to force mdadm to put the array back together the way
you think it was (the output of mdadm --examine will be useful here).
You'll need to specify (at least - from memory):

. chunk (stripe) size
. metadata version (this affects the superblock location on the drives)
. correct device order (using "missing" in place of a failed drive, if any)
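
For the snapshots, something like this should work (untested sketch;
the device names /dev/sdb1, cow_drive_1 etc. are hypothetical - repeat
for each array member):

    # sparse file as the copy-on-write store, attached to a loop device
    dd if=/dev/zero of=/tmp/cow_1 bs=1 count=0 seek=2G
    loopdev=$(losetup -f --show /tmp/cow_1)

    # stack a dm snapshot on top of the real drive; all writes land in
    # /tmp/cow_1, the real drive is only ever read
    size=$(blockdev --getsz /dev/sdb1)    # size in 512-byte sectors
    echo "0 $size snapshot /dev/sdb1 $loopdev N 8" | \
        dmsetup create cow_drive_1

The trailing "N 8" means a non-persistent snapshot with an 8-sector
(4KiB) chunk size.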

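And a sketch of the create command, assuming (hypothetically) a 4-disk
RAID5 with 64k chunks, 0.90 superblocks and one dead drive - take the
real values from your mdadm --examine output:

    mdadm --create /dev/md9 --assume-clean --level=5 --raid-devices=4 \
          --chunk=64 --metadata=0.90 \
          /dev/mapper/cow_drive_1 /dev/mapper/cow_drive_2 \
          /dev/mapper/cow_drive_3 missing

--assume-clean stops mdadm from kicking off a resync, and the new
superblocks it writes land in the COW store rather than on the real
drives.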

... after that you can run a read-only (or read-write) check on the
COW-backed md array to verify that you've got your data back, then
mount it read-only etc. (see the sketch below).  Once you're happy
that your commands will get things running again, you can run them
"for real" on the non-COW devices.
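
Something along these lines (md device and mount point hypothetical):

    # "check" scrubs the array and counts mismatches without repairing
    echo check > /sys/block/md9/md/sync_action
    cat /sys/block/md9/md/mismatch_cnt

    # then try a read-only mount
    mount -o ro /dev/md9 /mnt/recovered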

See the recent list archives for my post on using a similar set of
commands for HW RAID data forensics, along with references....

HTH,

Tim.
