Re: Software RAID1 Failure Help




On 02/08/2014 12:51 AM, John R Pierce wrote:
> On 2/7/2014 3:47 PM, Chris Stone wrote:
>> Sure, replace the bad drive and rebuild the mirror. Or add 2 drives,
>> using one to replace the bad one and the other as a hot spare. Then if one
>> of the 2 drives in the mirror fails again, the hot spare will take over for
>> it.
>
> One thing that's always bugged me about md raid when it's mirrored per
> partition...     say you have /dev/sda{1,2,3} mirrored with
> /dev/sdb{1,2,3} ... if /dev/sdb2 goes south with too many errors and
> mdraid fails it, will it know that b{1,3} are also on the same physical
> drive and should be failed, or will it wait until it gets errors on them, too?
>

I use RAID per partition and have had several disks fail. mdadm treats 
the partitions separately: only if the whole disk disappears will it 
report all of them as failed. If only one partition fails, it fails 
just that array and tries to rebuild it at once.
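
In case it is useful, here is a rough sketch of the recovery steps I 
use (device names match the example mail below; the /dev/sdc2 hot 
spare is just an illustration, adjust for your own layout):

# mark the failing partition and pull it out of the array
mdadm /dev/md0 --fail /dev/sdb2
mdadm /dev/md0 --remove /dev/sdb2

# if the whole disk is bad, repeat for every md it belongs to, swap
# the drive, then copy the partition table over from the good disk
# (sfdisk shown for MBR; use sgdisk -R for GPT)
sfdisk -d /dev/sda | sfdisk /dev/sdb

# re-add the partition; the resync starts immediately
mdadm /dev/md0 --add /dev/sdb2

# a device added to an already-healthy RAID1 becomes a hot spare
mdadm /dev/md0 --add /dev/sdc2

# watch the rebuild progress
cat /proc/mdstat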



Here is the mail sent on an error:

This is an automatically generated mail message from mdadm
running on vmaster.xxx

A Fail event had been detected on md device /dev/md0.

It could be related to component device /dev/sdb2.

Faithfully yours, etc.

P.S. The /proc/mdstat file currently contains the following:

Personalities : [raid1]
md0 : active raid1 sdb2[2](F) sda2[1]
       488279488 blocks [2/1] [_U]

unused devices: <none>
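
For the archives: in that output "[2/1] [_U]" means the array wants 2 
members but only 1 is active, and "(F)" marks the failed one. To see 
the per-member state in more detail, use the standard query command:

mdadm --detail /dev/md0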


-- 
Ljubomir Ljubojevic
(Love is in the Air)
PL Computers
Serbia, Europe

StarOS, Mikrotik and CentOS/RHEL/Linux consultant
_______________________________________________
CentOS mailing list
CentOS@xxxxxxxxxx
http://lists.centos.org/mailman/listinfo/centos



