question regarding multipath & Linux 2.6

Hello,

Recently my department had a SAN installed, and I am in the process of 
setting up one of the first Linux machines connected to it.  The 
machine is running Red Hat Enterprise Linux AS 4 (x86_64) with kernel 
2.6.9-11.ELsmp.

The SAN LUN shows up twice in the kernel, as /dev/sdb and /dev/sdc.  
/dev/sdb is inaccessible (I get a bunch of "Buffer I/O error on device 
sdb" messages in the kernel log), but /dev/sdc works fine.  According 
to the administrator of the SAN, the LUN shows up twice because there 
are two paths to it, each going through a different storage processor 
(SP).  Only one SP is active at a time, which is why /dev/sdb is 
inaccessible while /dev/sdc works fine.  The administrator also warned 
me that the paths can switch (i.e. /dev/sdc becomes inactive and 
/dev/sdb becomes active), so I need some kind of multipathing software 
installed.  He told me to use EMC's PowerPath, but I'd rather not have 
to reinstall or rebuild out-of-tree kernel modules every time there is 
a kernel upgrade, so I'm looking into Linux's built-in md multipath 
support.
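
A sanity check that /dev/sdb and /dev/sdc really are two paths to the 
same LUN would be to compare their SCSI WWIDs (a sketch; this assumes 
the scsi_id utility from udev, with the sysfs-path syntax it shipped 
around this era):

    # identical WWIDs would confirm the same LUN behind both paths
    scsi_id -g -u -s /block/sdb
    scsi_id -g -u -s /block/sdc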

I followed the very straightforward instructions available here:
http://www.centos.org/docs/4/html/rhel-ig-s390-multi-en-4/s1-s390info-raid.html#S2-S390INFO-MULTIPATH

I created /etc/mdadm.conf, then ran:
mdadm -C /dev/md0 --level=multipath --raid-devices=2 /dev/sdb /dev/sdc
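
The /etc/mdadm.conf from those instructions boils down to something 
like this (a sketch; the device names below are just my two paths):

    # paths that mdadm should scan
    DEVICE /dev/sdb /dev/sdc
    # the multipath "array" built from the two paths
    ARRAY /dev/md0 level=multipath num-devices=2 devices=/dev/sdb,/dev/sdc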

Immediately after creation, "mdadm --detail /dev/md0" reports that both 
paths are up and active:
    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
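
For quick checks between --detail runs, watching /proc/mdstat shows the 
same state (a path gets marked (F) once it faults):

    watch -n 5 cat /proc/mdstat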

However, once I actually start using the device, /dev/sdb is marked 
faulty and removed from service (which makes sense, since /dev/sdb is 
inaccessible):
    Number   Major   Minor   RaidDevice State
       0       0        0       -1      removed
       1       8       32        1      active sync   /dev/sdc
       2       8       16       -1      faulty   /dev/sdb
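
If I'm reading the mdadm man page right, a failed path can at least be 
cycled back in by hand once it is reachable again:

    # drop the faulted path from the array, then re-add it
    mdadm /dev/md0 --remove /dev/sdb
    mdadm /dev/md0 --add /dev/sdb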

My question is this:  If the SPs fail over while my machine is up (so 
/dev/sdc goes inactive and /dev/sdb becomes the working path), will the 
md multipath driver be smart enough to try /dev/sdb again, even though 
it has already been marked faulty and removed?  Or will the whole 
/dev/md0 device fail?
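
In the meantime I'm considering running mdadm in monitor mode, so at 
least I'd get mail when a path faults (a sketch; the mail address is a 
placeholder):

    # daemonize, poll every 60s, mail events for arrays in mdadm.conf
    mdadm --monitor --scan --daemonise --delay=60 --mail=root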

thanks,
Jim Faulkner


