Re: mdadm 3.3 fails to kick out non fresh disk

Hello Martin,

On Fri, Sep 20, 2013 at 8:07 PM, Martin Wilck <mwilck@xxxxxxxx> wrote:
> On 09/20/2013 10:56 AM, Francis Moreau wrote:
>> Hello Martin,
>>
>> On Mon, Sep 16, 2013 at 7:04 PM, Martin Wilck <mwilck@xxxxxxxx> wrote:
>>> On 09/16/2013 03:56 PM, Francis Moreau wrote:
>>>
>>>> I did give your patch "DDF: compare_super_ddf: fix sequence number
>>>> check" a try, and now mdadm is able to detect a difference between
>>>> the two disks. Therefore it refuses to insert the second disk, which
>>>> is better.
>>>>
>>>> However, it's still not able to detect which version is the fresher
>>>> one, as mdadm does with soft RAID1 (metadata 1.2). Therefore mdadm is
>>>> not able to kick out the first disk if it's the outdated one.
>>>>
>>>> Is that expected?
>>>
>>> At the moment, yes. This needs work.
>>>
>>
>> Actually this is worse than I thought: with your patch applied, mdadm
>> refuses to add a spare disk back into a degraded DDF array.
>>
>> For example on a DDF array:
>>
>> # cat /proc/mdstat
>> Personalities : [raid1]
>> md126 : active raid1 sdb[1] sda[0]
>>       2064384 blocks super external:/md127/0 [2/2] [UU]
>>
>> md127 : inactive sdb[1](S) sda[0](S)
>>       65536 blocks super external:ddf
>>
>> unused devices: <none>
>>
>> # mdadm /dev/md126 --fail sdb
>> [   24.118434] md/raid1:md126: Disk failure on sdb, disabling device.
>> [   24.118437] md/raid1:md126: Operation continuing on 1 devices.
>> mdadm: set sdb faulty in /dev/md126
>>
>> # mdadm /dev/md127 --remove sdb
>> mdadm: hot removed sdb from /dev/md127
>>
>> # mdadm /dev/md127 --add /dev/sdb
>> mdadm: added /dev/sdb
>>
>> # cat /proc/mdstat
>> Personalities : [raid1]
>> md126 : active raid1 sda[0]
>>       2064384 blocks super external:/md127/0 [2/1] [U_]
>>
>> md127 : inactive sdb[1](S) sda[0](S)
>>       65536 blocks super external:ddf
>>
>> unused devices: <none>
>>
>>
>> As you can see the reinserted disk sdb sits as spare and isn't added
>> back to the array.
>
> That's correct. You marked that disk failed.
>
>> Is it possible to make this major feature work again and keep your improvement?
>
> No. A failed disk can't be added again without a rebuild. I am positive
> about that.
>

Hmm, that's not the case with Linux software RAID, AFAICS: doing the same
thing with native metadata (1.2), the reinserted disk is added back to the
array and resynchronised automatically. You can try it easily.
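
For reference, here is roughly the sequence I mean. This is a sketch of
my native-metadata test, so the device names (/dev/md0 with a plain 1.2
superblock, /dev/sdb) come from that setup, not from the DDF one above:

# mdadm /dev/md0 --fail /dev/sdb
# mdadm /dev/md0 --remove /dev/sdb
# mdadm /dev/md0 --add /dev/sdb

After the --add, /proc/mdstat shows sdb recovering and the array goes
back to [UU] on its own.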

Could you show me the mdadm command I should use to insert sdb back into the array?
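
(With native metadata I would also have tried --re-add, i.e. something like

# mdadm /dev/md126 --re-add /dev/sdb

but I don't know whether --re-add is supposed to work through an
external-metadata (DDF) container and mdmon, so that may well be the
wrong approach here.)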

Thanks.
-- 
Francis