Re: RAID5: *all* of my drives are now spares

NeilBrown <neilb@xxxxxxx> wrote:
> On Tue, October 27, 2009 12:45 pm, John P Poet wrote:
>> md3 : inactive sdk[0](S) sdf[1](S) sdg[2](S) sdj[4](S)
>>       7814056960 blocks super 1.2
> However the --examine shows that 3 of the 4 devices report a role
> of 'spare'... which is odd because "--assemble --force" managed to

Hmmm, it's not that surprising. Yes, it looks odd, but it's not unusual for v1 metadata :)
Please have a look at the thread starting with:

From: John Hughes <john@xxxxxxxxx>
Subject: Dumb questions about mdadm #1 - replacing broken disks - "slot" reuse?
Date: Fri, 18 Sep 2009 14:13:44 +0200
Message-ID: <4AB37978.5030403@xxxxxxxxx>

John and I already reported something that looks quite similar for v1
metadata (and the OP is running v1.2 metadata).
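
For the record, something along these lines should show what each member
thinks its role is (device names taken from the quoted /proc/mdstat output
above, adjust to your own setup; just a sketch, not a recipe):

  # show the role each member records in its v1.x superblock
  mdadm --examine /dev/sdk /dev/sdf /dev/sdg /dev/sdj | grep -E 'Device Role|Array State|Events'

  # if only the recorded roles look wrong, stopping the inactive array
  # and force-assembling from the same members is the usual next step
  mdadm --stop /dev/md3
  mdadm --assemble --force /dev/md3 /dev/sdk /dev/sdf /dev/sdg /dev/sdj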


regards
   Mario
-- 
Ho ho ho! I am Santa Claus of Borg. Nice assimilation all together!

