Re: raid failure and LVM volume group availability

Neil Brown <neilb@xxxxxxx> writes:

> On Tuesday May 26, goswin-v-b@xxxxxx wrote:
>> hank peng <pengxihan@xxxxxxxxx> writes:
>> 
>> > Only one of the disks in this RAID1 failed, so it should continue to
>> > work in a degraded state.
>> > Why does LVM complain with I/O errors?
>> 
>> That is because the last drive in a raid1 cannot fail:
>> 
>> md9 : active raid1 ram1[1] ram0[2](F)
>>       65472 blocks [2/1] [_U]
>> 
>> # mdadm --fail /dev/md9 /dev/ram1
>> mdadm: set /dev/ram1 faulty in /dev/md9
>> 
>> md9 : active raid1 ram1[1] ram0[2](F)
>>       65472 blocks [2/1] [_U]
>> 
>> See, it is still marked as working.
>> 
>> MfG
>>         Goswin
>> 
>> PS: Why doesn't mdadm or the kernel give a message about refusing to fail the device?
>
> -ENOPATCH :-)
>
> You would want to rate-limit any such message from the kernel, but it
> might make sense to have it.
>
> NeilBrown

There is no rate-limiting concern in having mdadm --fail itself report a
failure to fail the device, since it prints at most one message per
invocation. Rough sketches of both sides follow.
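
For the record, the silence is deliberate: the raid1 error path checks
whether the device being failed is the last one still in sync, and if so
it simply returns. Below is a paraphrased sketch of that logic (simplified
from drivers/md/raid1.c, not verbatim kernel code); the rate-limited
printk is the hypothetical addition discussed above:

static void error(mddev_t *mddev, mdk_rdev_t *rdev)
{
	char b[BDEVNAME_SIZE];
	conf_t *conf = mddev->private;

	if (test_bit(In_sync, &rdev->flags)
	    && (conf->raid_disks - mddev->degraded) == 1) {
		/*
		 * Last working drive: refuse to fail it and act like a
		 * plain single disk.  Today this happens silently; the
		 * message below is the suggested (hypothetical) addition.
		 */
		if (printk_ratelimit())
			printk(KERN_WARNING
			       "raid1: %s: refusing to fail last working device %s\n",
			       mdname(mddev), bdevname(rdev->bdev, b));
		return;
	}
	/* ... otherwise mark the device Faulty and degrade the array ... */
}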

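On the mdadm side, something like the following would do. This is a
hypothetical sketch, not current mdadm code (fail_and_verify is a made-up
helper): after issuing SET_DISK_FAULTY, re-read the disk state with
GET_DISK_INFO and warn if the kernel silently declined.

#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/sysmacros.h>   /* makedev() */
#include <linux/raid/md_u.h> /* SET_DISK_FAULTY, GET_DISK_INFO, MD_DISK_FAULTY */

/* mdfd: open fd on the array, e.g. /dev/md9; devnum: the member's slot
 * number; major/minor: the member device, e.g. /dev/ram1. */
static int fail_and_verify(int mdfd, int devnum,
			   unsigned major, unsigned minor)
{
	mdu_disk_info_t info = { .number = devnum };

	if (ioctl(mdfd, SET_DISK_FAULTY,
		  (unsigned long)makedev(major, minor)) < 0)
		return -1;		/* hard error from the kernel */

	if (ioctl(mdfd, GET_DISK_INFO, &info) < 0)
		return -1;

	if (!(info.state & (1 << MD_DISK_FAULTY))) {
		/* The ioctl "succeeded", but md left the disk in sync:
		 * it was the last working device. */
		fprintf(stderr,
			"mdadm: kernel refused to fail the last working device\n");
		return -1;
	}
	return 0;
}
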
MfG
        Goswin