DM-MP, XenServer 5.0.0 and NetApp

Hi All,
 
I'm having trouble getting the multipathing tools to work correctly with our NetApp cluster solution.
 
I have a XenServer pool of 3 hosts with multipathing enabled on all of them, QLogic HBAs, and two FAS3140 controllers in a cluster. I created a LUN to serve as a storage repository (SR); the LUN is visible and was recognized on all 3 hosts, but when we tell XenServer to build an SR on this LUN, all I/O operations arrive at the FAS3140 over the non-optimized path.
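For illustration, the multipath topology on the hosts looks roughly like this (the WWID, device names, sizes and prio values below are representative, not an exact paste from our boxes). With group_by_prio we would expect the higher-priority group to carry all the I/O:
 
360a98000486e2f34645a2f7039767a55 dm-1 NETAPP,LUN
[size=100G][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=8][active]
 \_ 0:0:0:0 sda 8:0   [active][ready]
 \_ 1:0:0:0 sdc 8:32  [active][ready]
\_ round-robin 0 [prio=1][enabled]
 \_ 0:0:1:0 sdb 8:16  [active][ready]
 \_ 1:0:1:0 sdd 8:48  [active][ready]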
 
XenServer multipath.conf
 
defaults {
        user_friendly_names no
}
devices {
        device {
                vendor                  "NETAPP"
                product                 "LUN"
                # group paths by priority so the optimized paths form their own group
                path_grouping_policy    group_by_prio
                getuid_callout          "/sbin/scsi_id -g -u -s /block/%n"
                # NetApp-specific priority callout (see the ALUA variant below)
                prio_callout            "/sbin/mpath_prio_netapp /dev/%n"
                features                "1 queue_if_no_path"
                path_checker            directio
                # return to the optimized path group as soon as it recovers
                failback                immediate
        }
}
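 
Whenever we change multipath.conf we reload the configuration before re-testing; the sequence we use is roughly the following (standard multipath-tools commands; the exact service script name on XenServer may differ):
 
service multipathd restart   # pick up the new multipath.conf
multipath -F                 # flush the existing (unused) multipath maps
multipath -v2                # rebuild the maps with the new settings
multipath -ll                # verify path grouping and priorities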
 
We ran two tests, one with prio_callout set to /sbin/mpath_prio_alua and the other with prio_callout set to /sbin/mpath_prio_netapp; both return the same result.
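 
For completeness, the device stanza for the ALUA test was identical except for the callout line, i.e.:
 
device {
        vendor                  "NETAPP"
        product                 "LUN"
        path_grouping_policy    group_by_prio
        getuid_callout          "/sbin/scsi_id -g -u -s /block/%n"
        prio_callout            "/sbin/mpath_prio_alua /dev/%n"
        features                "1 queue_if_no_path"
        path_checker            directio
        failback                immediate
}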
 
LUN Stats from the Storage Controller
 
 Read Write Other QFull   Read  Write Average   Queue Partner Partner  Lun
  Ops   Ops   Ops           kB     kB Latency  Length     Ops      kB
    0     0     0     0      0      0    0.00    0.00     0      0 /vol/vol_cmprh01/cmprh01_disk01.lun
---
    0     0     0     0      0      0    0.00    0.00     0      0 /vol/vol_cmprh01/cmprh01_disk01.lun
---
    0     0     0     0      0      0    0.00    0.00     0      0 /vol/vol_cmprh01/cmprh01_disk01.lun
---
   10     0    41     0      5      0    0.37    5.04    23      3 /vol/vol_cmprh01/cmprh01_disk01.lun
---
    0     0     0     0      0      0    0.00    0.00     0      0 /vol/vol_cmprh01/cmprh01_disk01.lun
---
    0     0     0     0      0      0    0.00    0.00     0      0 /vol/vol_cmprh01/cmprh01_disk01.lun
---
    0     0     0     0      0      0    0.00    0.00     0      0 /vol/vol_cmprh01/cmprh01_disk01.lun
---
    0     0     0     0      0      0    0.00    0.00     0      0 /vol/vol_cmprh01/cmprh01_disk01.lun
---
    7     0    10     0      3      0    0.52    5.01     5      1 /vol/vol_cmprh01/cmprh01_disk01.lun
---
    7     0    11     0     28      0    0.44    0.08     6     12 /vol/vol_cmprh01/cmprh01_disk01.lun
---
    0     0     0     0      0      0    0.00    0.00     0      0 /vol/vol_cmprh01/cmprh01_disk01.lun

 
When there is I/O activity, more than 50% of it arrives at the storage controller over the Partner interface.
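 
For reference, these counters can be watched live with the interval form of lun stats on the controller (the -o flag adds the Partner columns; the 10-second interval below is just an example value):
 
NETAPP314001> lun stats -o -i 10 /vol/vol_cmprh01/cmprh01_disk01.lun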
 
igroup configuration
 
NETAPP314001> igroup show -v
    xenpool_ntap01 (FCP):
        OS Type: linux
        Member: 50:01:43:80:03:b9:36:f8 (logged in on: vtic, 0c, 0a)
        Member: 50:01:43:80:03:b9:36:fa (logged in on: 0d, 0b, vtic)
        Member: 50:01:43:80:03:b9:36:d8 (logged in on: 0c, 0a, vtic)
        Member: 50:01:43:80:03:b9:36:da (logged in on: 0d, 0b, vtic)
        Member: 50:01:43:80:03:b9:36:d4 (logged in on: 0c, 0a, vtic)
        Member: 50:01:43:80:03:b9:36:d6 (logged in on: 0d, 0b, vtic)
        Member: 50:01:43:80:03:b9:37:54 (logged in on: 0c, 0a, vtic)
        Member: 50:01:43:80:03:b9:37:56 (logged in on: 0d, 0b, vtic)
        Member: 50:01:43:80:03:b9:37:58 (logged in on: 0c, 0a, vtic)
        Member: 50:01:43:80:03:b9:37:5a (logged in on: 0d, 0b, vtic)
        Member: 50:01:43:80:03:b9:37:10 (logged in on: 0c, 0a, vtic)
        Member: 50:01:43:80:03:b9:37:12 (logged in on: 0d, 0b, vtic)
        Member: 50:01:43:80:03:b9:37:20 (logged in on: 0c, 0a, vtic)
        Member: 50:01:43:80:03:b9:37:22 (logged in on: 0d, 0b, vtic)
        Member: 50:01:43:80:03:b9:37:0c (logged in on: vtic, 0c, 0a)
        Member: 50:01:43:80:03:b9:37:0e (logged in on: 0d, 0b, vtic)
        Member: 50:01:43:80:03:b9:79:b8 (logged in on: 0c, 0a, vtic)
        Member: 50:01:43:80:03:b9:79:ba (logged in on: 0d, 0b, vtic)
        Member: 50:01:43:80:03:b9:35:54 (logged in on: vtic, 0c, 0a)
        Member: 50:01:43:80:03:b9:35:56 (logged in on: 0d, 0b, vtic)
        Member: 50:01:43:80:03:ba:0c:c8 (logged in on: vtic, 0c, 0a)
        Member: 50:01:43:80:03:ba:0c:ca (logged in on: 0d, 0b, vtic)
        Member: 50:01:43:80:03:b9:35:50 (logged in on: 0c, 0a, vtic)
        Member: 50:01:43:80:03:b9:35:52 (logged in on: 0d, 0b, vtic)
        Member: 50:01:43:80:03:ba:0c:d0 (logged in on: vtic, 0c, 0a)
        Member: 50:01:43:80:03:ba:0c:d2 (logged in on: 0d, 0b, vtic)
        Member: 50:01:43:80:03:b9:79:5c (logged in on: 0c, 0a, vtic)
        Member: 50:01:43:80:03:b9:79:5e (logged in on: 0d, 0b, vtic)
        Member: 50:01:43:80:03:ba:0c:dc (logged in on: 0c, 0a, vtic)
        Member: 50:01:43:80:03:ba:0c:de (logged in on: 0d, 0b, vtic)
        Member: 50:01:43:80:03:b9:35:28 (logged in on: vtic, 0c, 0a)
        Member: 50:01:43:80:03:b9:35:2a (logged in on: 0d, 0b, vtic)
        ALUA: Yes
 
As shown above, ALUA is enabled on the igroup.
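 
It was turned on with the standard igroup command (quoting from memory, so treat this as a sketch):
 
NETAPP314001> igroup set xenpool_ntap01 alua yes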
 
As an additional test, we connected a Windows machine using the NetApp DSM (multipathing software), and there everything works fine.
 
Can anyone help me?
 
Rgds,
 
--
Rodrigo Nascimento