multipathd & EMC Clariion CX-4

Hi all,

 

I'm having trouble setting up a multipathd configuration against an EMC Clariion CX-4 array.

The CX-4 appears to be active/passive with respect to its storage processors (SPs). My server has QLogic Fibre Channel HBAs, and my multipath-tools version is 0.4.8.

Here is my multipath.conf:

 

##
## This is a template multipath-tools configuration file
## Uncomment the lines relevant to your environment
##
defaults {
        user_friendly_names     yes
}

blacklist {
        #devnode "*"
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^hd[a-z][[0-9]*]"
        devnode "^cciss!c[0-9]d[0-9]*[p[0-9]*]"
}

devices {
        device {
                vendor                  "DGC  "
                product                 "*"
                path_grouping_policy    group_by_prio
                getuid_callout          "/lib/udev/scsi_id -g -u -d /dev/%n"
                prio_callout            "/sbin/mpath_prio_emc /dev/%n"
                hardware_handler        "1 emc"
                features                "1 queue_if_no_path"
#               failback                15
#               rr_weight               priorities
                no_path_retry           300
                path_checker            emc_clarion
                failback                immediate
#               rr_min_io               100
                product_blacklist       LUN_Z
        }
}

 

multipaths {
        multipath {
                wwid    360060160174026008cf8cef961cde011
                alias   NPCD
        }
        multipath {
                wwid    36006016017402600f2ed610b62cde011
                alias   NDO
        }
        multipath {
                wwid    360060160174026000c984cc67acde011
                alias   NAGIOS
        }
}
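As a side note for anyone reading this with a newer multipath-tools (0.4.9 and later): the callout-based keywords above were replaced by built-in prio and checker names, so the equivalent device stanza would look roughly like the sketch below. This is based on the shipped defaults for DGC arrays in later releases and is worth double-checking against your installed version's documentation:

```
devices {
        device {
                vendor                  "DGC"
                product                 ".*"
                product_blacklist       "LUNZ"
                path_grouping_policy    group_by_prio
                prio                    emc
                hardware_handler        "1 emc"
                features                "1 queue_if_no_path"
                path_checker            emc_clariion
                failback                immediate
        }
}
```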

 

And here is the output of multipath -ll:

 

sdg: checker msg is "directio checker reports path is down"
sdm: checker msg is "directio checker reports path is down"
NDO (36006016017402600f2ed610b62cde011) dm-8 DGC     ,RAID 5
[size=50G][features=1 queue_if_no_path][hwhandler=1 emc]
\_ round-robin 0 [prio=2][active]
 \_ 2:0:0:1 sdd 8:48  [active][ready]
 \_ 5:0:0:1 sdj 8:144 [active][ready]
\_ round-robin 0 [prio=0][enabled]
 \_ 2:0:1:1 sdg 8:96  [active][faulty]
 \_ 5:0:1:1 sdm 8:192 [active][faulty]
sde: checker msg is "directio checker reports path is down"
sdk: checker msg is "directio checker reports path is down"
NAGIOS (360060160174026000c984cc67acde011) dm-6 DGC     ,RAID 5
[size=40G][features=1 queue_if_no_path][hwhandler=1 emc]
\_ round-robin 0 [prio=2][active]
 \_ 2:0:1:2 sdh 8:112 [active][ready]
 \_ 5:0:1:2 sdn 8:208 [active][ready]
\_ round-robin 0 [prio=0][enabled]
 \_ 2:0:0:2 sde 8:64  [active][faulty]
 \_ 5:0:0:2 sdk 8:160 [active][faulty]
sdc: checker msg is "directio checker reports path is down"
sdi: checker msg is "directio checker reports path is down"
NPCD (360060160174026008cf8cef961cde011) dm-7 DGC     ,RAID 5
[size=100G][features=1 queue_if_no_path][hwhandler=1 emc]
\_ round-robin 0 [prio=2][active]
 \_ 2:0:1:0 sdf 8:80  [active][ready]
 \_ 5:0:1:0 sdl 8:176 [active][ready]
\_ round-robin 0 [prio=0][enabled]
 \_ 2:0:0:0 sdc 8:32  [active][faulty]
 \_ 5:0:0:0 sdi 8:128 [active][faulty]
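For sifting through output like the above, a small helper of my own (not part of multipath-tools) that filters saved `multipath -ll` text down to just the faulty paths per map can be handy:

```shell
# Filter saved `multipath -ll` output down to its faulty paths.
# Map header lines start with the alias (e.g. "NDO (...)"); path lines
# carry the H:C:T:L nexus in field 2 and the sd device in field 3.
awk '
  /^[A-Z]/     { map = $1 }               # remember the current map alias
  /\[faulty\]/ { print map, $2, $3 }      # print alias, H:C:T:L, sd device
' <<'EOF'
NDO (36006016017402600f2ed610b62cde011) dm-8 DGC     ,RAID 5
[size=50G][features=1 queue_if_no_path][hwhandler=1 emc]
\_ round-robin 0 [prio=2][active]
 \_ 2:0:0:1 sdd 8:48  [active][ready]
\_ round-robin 0 [prio=0][enabled]
 \_ 2:0:1:1 sdg 8:96  [active][faulty]
EOF
```

On the sample above this prints "NDO 2:0:1:1 sdg"; in practice you would redirect real `multipath -ll` output into a file and run the awk script over that instead of the heredoc.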

 

 

So when I try to write to a mapped device such as /dev/mapper/NAGIOS, I see many I/O errors on the 'faulty' disks. When trespassing LUNs on the Clariion, ready disks become faulty and faulty ones become ready... I also see udevd (udev-work) burning my CPUs while trying to rename '/dev/disk/by-id/wwn-0xetcetc...udev-tmp'.

What's wrong with my config? Am I missing something?

Thanks in advance for your help,

Regards,

--

Stéphane Neveu

Open Source Engineer / RedHat Technical Manager

THALES - Critical Information Systems

DIVISION Security Solutions & Services / Server Mgmt

Tel. 01 73 17 03 15

--

 

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel
