Re: Problems with multipathing

Roger Håkansson wrote:
With upstream I guess you mean 0.4.7 or "CVS-HEAD".

Haven't tried that, but just looking at the requirements tells me I'll
have a lot to do just to prepare to test it.

 Dependencies:
 Linux kernel

     * 2.6.10-rc*-udm2 or later
     * 2.6.11-mm* or later
     * 2.6.12-rc1 or later
 udev 050+
CentOS 4.3 has 2.6.9-34 and udev-039-10, and even though some things are
backported I guess I'd have to update both. I can only imagine the amount
of work that would involve, but I'll try to do it if I can find the
time...


0.4.7 seems to work better without updating kernel or udev, but not
entirely...

Unless I've gotten this wrong, with path_grouping_policy set to failover I
should get two pathgroups where only one is active, and if the active one
fails, the other pathgroup becomes active, correct?
Multibus grouping places all paths in the same pathgroup so that they all
share the I/O while active, and if some path fails, the I/O is spread
among the remaining active paths, correct?
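
In multipath.conf terms, that is roughly what I mean; a minimal sketch
using mpath1's wwid as the example (not my full config, and the policy
can of course also be set in the defaults or device sections):

multipaths {
        multipath {
                wwid                    3600d0230000000000b0191489a946602
                # failover: one path per priority group, only one group
                # carries I/O at a time
                path_grouping_policy    failover
                # for the multibus test the line above becomes:
                # path_grouping_policy  multibus
        }
}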

Multibus works just like I expect it to, but failover doesn't fail the
path entirely.

This is what 'multipath -ll' gives me after I have disconnected one HBA
from the fabric.

mpath1 (3600d0230000000000b0191489a946602)
[size=183 GB][features=0][hwhandler=0]
\_ round-robin 0 [prio=0][enabled]
 \_ 1:0:1:1 sdc 8:32  [active][faulty]
\_ round-robin 0 [prio=0][enabled]
 \_ 2:0:0:1 sdf 8:80  [active][ready]
mpath2 (3600d0230000000000b0191489a946600)
[size=97 GB][features=0][hwhandler=0]
\_ round-robin 0 [prio=0][active]
 \_ 1:0:1:0 sdb 8:16  [failed][faulty]
 \_ 2:0:0:0 sde 8:64  [active][ready]

Notice that sdc is faulty but still active, and both pathgroups are
enabled but neither is active...

I've only noticed this problem when I/O is active (I was running a 'dd
if=/dev/zero of=/mount_point count=10000000000' to each mpath) when one
path fails; if there is no activity at all, the failover works.

I don't know your hardware (vendor = IFT, product = A16F-R2221), but it seems asymmetrical. Most hardware in this family needs a hardware handler, and some need the "queue_if_no_path" feature set too.
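
If a handler turns out to be needed, it would be wired in through a devices section along these lines; the vendor/product strings are yours, while the handler and features values below are only placeholders until you know what the array really wants:

devices {
        device {
                vendor                  "IFT"
                product                 "A16F-R2221"
                path_grouping_policy    failover
                # "0" means no handler; an asymmetric array may need a real
                # one here, for example "1 emc" for CLARiiON-class arrays
                hardware_handler        "0"
                # some arrays also want I/O queued while no path is usable
                features                "1 queue_if_no_path"
        }
}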

You'll have to find out how your array works and try to figure out whether some existing hardware handler does the right thing.
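
A quick way to see what your kernel already ships (module names depend on what the distribution has backported; at that kernel vintage dm-emc is usually the only hardware handler around):

# device-mapper modules present for the running kernel
find /lib/modules/$(uname -r) -name 'dm-*'

# confirm the multipath target itself is registered
dmsetup targets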

As a last resort, post as many technical details as you can about what your hardware needs to activate backup paths, and hope that some good soul is willing to code the handler.

Regards,
cvaroqui

--

dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel
