RE: Multipath not re-activating failed paths?


 



Depending on your version, I was instructed to raise the multipathd process's
priority to get it to respond.
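
If it helps, a rough sketch of what that could look like (the daemon name
multipathd and the -10 value are my own assumptions here, adjust for your
setup):

----------8<----------[cut]
# bump the scheduling priority of the running multipathd daemon
# (-10 is just an example value)
renice -10 -p $(pidof multipathd)
----------8<----------[cut]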
 
Charles Polk
Systems Engineer, ViON Corporation
Voice: 202.467.5500 x236, Cell: 301.518.9266, Fax: 202.342.1404
Email: Charles.Polk@xxxxxxxx, Web: www.vion.com

________________________________

From: dm-devel-bounces@xxxxxxxxxx on behalf of Darryl Dixon
Sent: Thu 9/14/2006 7:53 PM
To: dm-devel@xxxxxxxxxx
Subject:  Multipath not re-activating failed paths?



Hi All,

I have a working dm-multipath setup with a multipath root device. For
some reason, while multipath seems to use both paths correctly and
gracefully handles the failure of a path (IO continues uninterrupted), it
does not seem to detect when the failed path has come back up again. In
other words, in my two-path setup it will load-balance across both paths
and carry on over the surviving path when one fails, but it then stays
'stuck' on that single path until the next reboot, even if the first path
is back up and otherwise working fine.

From what I can understand of the multipath.conf settings, the paths
should be tested every 5 seconds, and should be marked 'active' once they
come back up.

How can I best go about debugging/investigating this?
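
These are the checks I was planning to start with (assuming the standard
RHEL4 init script and syslog location, so treat this as a sketch), in case
they already point at something obvious:

----------8<----------[cut]
# is the path-checker daemon actually running and enabled?
service multipathd status
chkconfig --list multipathd

# re-run path discovery with verbose output
multipath -v3

# watch kernel/daemon messages while failing and restoring a path
tail -f /var/log/messages
----------8<----------[cut]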

My setup details:
Machine:     HP Blade BL25P with QLogic dual-ported HBA
Storage:     Two paths to SUN 3510
OS:          RHEL4 x86_64
DM package:  device-mapper-multipath-0.4.5-16.1.RHEL4
uname -r:    2.6.9-42.0.2.ELsmp

contents of /etc/multipath.conf:
----------8<----------[cut]
devnode_blacklist {
       devnode "^cciss!c[0-9]d[0-9]*"
}

defaults {
    user_friendly_names yes
    no_path_retry fail
    path_grouping_policy multibus
    failback immediate

}

multipaths {
    multipath {
        wwid   3500000e01190e340
        alias  os
    }
}
----------8<----------[cut]
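
For reference, the 5-second checker interval I mentioned above is my reading
of the defaults; making it explicit would look roughly like this (assuming
this multipath-tools version accepts polling_interval in the defaults
section):

----------8<----------[cut]
defaults {
    user_friendly_names  yes
    no_path_retry        fail
    path_grouping_policy multibus
    failback             immediate
    # explicit checker interval in seconds (assumed default is 5)
    polling_interval     5
}
----------8<----------[cut]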

Output of multipath -l:
----------8<----------[cut]
3500000e01190e100
[size=68 GB][features="0"][hwhandler="0"]
\_ round-robin 0 [active]
 \_ 0:0:3:0 sdd 8:48  [active]
 \_ 1:0:3:0 sdh 8:112 [active]

3500000e01190e3f0
[size=68 GB][features="0"][hwhandler="0"]
\_ round-robin 0 [active]
 \_ 0:0:1:0 sdb 8:16  [active]
 \_ 1:0:0:0 sde 8:64  [active]

os (3500000e01190e340)
[size=68 GB][features="0"][hwhandler="0"]
\_ round-robin 0 [active]
 \_ 0:0:0:0 sda 8:0   [active]
 \_ 1:0:2:0 sdg 8:96  [active]

3500000e01190e310
[size=68 GB][features="0"][hwhandler="0"]
\_ round-robin 0 [active]
 \_ 0:0:2:0 sdc 8:32  [active]
 \_ 1:0:1:0 sdf 8:80  [active]
----------8<----------[cut]

Contents of /dev/mapper/:
----------8<----------[cut]
brw-rw----  1 root disk 253,  3 Sep 15  2006 3500000e01190e100
brw-rw----  1 root disk 253,  2 Sep 15  2006 3500000e01190e310
brw-rw----  1 root disk 253,  1 Sep 15  2006 3500000e01190e3f0
crw-------  1 root root  10, 63 Sep 15  2006 control
brw-rw----  1 root disk 253,  0 Sep 15  2006 os
brw-rw----  1 root disk 253,  4 Sep 15  2006 os1
brw-rw----  1 root disk 253,  5 Sep 15  2006 os2
brw-rw----  1 root disk 253,  6 Sep 15  2006 os3
----------8<----------[cut]

Output of df -k:
----------8<----------[cut]
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/os2       50394996  29944792  17890248  63% /
/dev/mapper/os1         101086     23801     72066  25% /boot
none                   5036176         0   5036176   0% /dev/shm
----------8<----------[cut]


Any and all pointers or assistance appreciated.

regards,
Darryl Dixon
http://www.winterhouseconsulting.com


--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel
