Hi,

I have upgraded from SLES9 SP2 to SLES9 SP3 to fix some problems related to OCFS2. I now see many warnings in my /var/log/messages (shown below), apparently related to multipath, indicating that the devices are not ready. Maybe multipath.conf needs some extra configuration (please see my multipath.conf below). The sdc and sdd devices are disks on the EMC. Can anyone help me with this problem?

ERROR MESSAGES:

(...)
Jan 27 16:16:18 siipstestes kernel: Device sdc not ready.
Jan 27 16:16:18 siipstestes kernel: Device sdd not ready.
Jan 27 16:16:29 siipstestes kernel: Device sdc not ready.
Jan 27 16:16:29 siipstestes kernel: Device sdd not ready.
Jan 27 16:16:40 siipstestes kernel: Device sdc not ready.
Jan 27 16:16:40 siipstestes kernel: Device sdd not ready.
Jan 27 16:16:51 siipstestes kernel: Device sdc not ready.
Jan 27 16:16:51 siipstestes kernel: Device sdd not ready.
Jan 27 16:17:02 siipstestes kernel: Device sdc not ready.
Jan 27 16:17:02 siipstestes kernel: Device sdd not ready.
Jan 27 16:17:13 siipstestes kernel: Device sdc not ready.
Jan 27 16:17:13 siipstestes kernel: Device sdd not ready.
Jan 27 16:17:24 siipstestes kernel: Device sdc not ready.
Jan 27 16:17:24 siipstestes kernel: Device sdd not ready.
Jan 27 16:17:35 siipstestes kernel: Device sdc not ready.
Jan 27 16:17:35 siipstestes kernel: Device sdd not ready.
Jan 27 16:17:46 siipstestes kernel: Device sdc not ready.
Jan 27 16:17:46 siipstestes kernel: Device sdd not ready.
(...)

server:~ # multipath -v2 -l
dm names N
dm table LUN_TESTES N
dm table LUN_TESTES N
dm status LUN_TESTES N
dm info LUN_TESTES O
LUN_TESTES (360060160256014006adcbf64a644da11)
[size=458 GB][features="0"][hwhandler="0"]
\_ round-robin 0 [active]
 \_ 3:0:0:0 sdc 8:32 [failed][faulty]
 \_ 3:0:1:0 sdd 8:48 [failed][faulty]
 \_ 3:0:2:0 sde 8:64 [active][ready]
 \_ 3:0:3:0 sdf 8:80 [active][ready]

Here is my MULTIPATH.CONF file:

defaults {
        multipath_tool                  "/sbin/multipath -v0 -S"
        udev_dir                        /dev
        polling_interval                10
        default_selector                "round-robin 0"
        #default_path_grouping_policy   failover
        default_path_grouping_policy    multibus
        default_getuid_callout          "/sbin/scsi_id -g -u -s /block/%n"
        #default_prio_callout           "/bin/false"
        failback                        immediate
}

#
# name    : devnode_blacklist
# scope   : multipath & multipathd
# desc    : list of device names to discard as not multipath candidates
# default : cciss, fd, hd, md, dm, sr, scd, st, ram, raw, loop
#
devnode_blacklist {
        devnode cciss
        devnode fd
        devnode hd
        devnode md
        devnode dm-
        devnode sr
        devnode scd
        devnode st
        devnode ram
        devnode raw
        devnode loop
        devnode sda
        devnode sdb
}

#
# name  : multipaths
# scope : multipath & multipathd
# desc  : list of multipaths finest-grained settings
#
multipaths {
        #
        # name  : multipath
        # scope : multipath & multipathd
        # desc  : container for settings that apply to one specific multipath
        #
        multipath {
                wwid    360060160256014006adcbf64a644da11
                alias   LUN_TESTES
        }
}

HARDWARE:
- DELL 2850
- 1x QLogic 2340 (BIOS 1.47) to access my storage (EMC Clariion CX300)

SOFTWARE:
- SLES9 (32-bit) + SP3 (with multipath and OCFS2 configured)
- Oracle Clusterware Release 2 (10.2.0.1.0)
- Oracle Database 10g Release 2 (10.2.0.1.0)
- QLogic driver: qlafc-linux-8.01.00-4-install.tgz.gz
- Naviagent CLI: naviagentcli-6.16.0.4.63-1.i386.rpm
- SANsurfer: emc_sansurfer2.0.30b52_linux_install.bin
- Multipath 0.4.5-0.11

Best regards,
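P.S. Since the CX300 is an active/passive array, I wonder whether the "not ready" messages simply come from path checks hitting the passive storage processor, and whether a Clariion-specific "device" section is what my multipath.conf is missing. Below is a sketch of what I am considering, based on EMC/Novell examples for multipath-tools of this vintage; the option names and callout paths are assumptions that would need checking against version 0.4.5:

```
devices {
        device {
                # Assumed settings for an EMC Clariion CX-series array
                # (active/passive); verify option and callout names against
                # multipath-tools 0.4.5 before using.
                vendor                  "DGC"
                product                 "*"
                path_grouping_policy    group_by_prio
                getuid_callout          "/sbin/scsi_id -g -u -s /block/%n"
                prio_callout            "/sbin/mpath_prio_emc /dev/%n"
                hardware_handler        "1 emc"
                path_checker            emc_clariion
                failback                immediate
        }
}
```

With group_by_prio, the two paths to the owning storage processor should form the active group and the two passive paths a standby group, instead of all four being load-balanced as with multibus.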
-- dm-devel@xxxxxxxxxx https://www.redhat.com/mailman/listinfo/dm-devel