Re: Cluster Failover Failed

Hi Raul,

 

Yes, it seems like the same issue. Thanks for pointing out that it still applies to RHEL 5.6. There is an open Bugzilla report at https://bugzilla.redhat.com/show_bug.cgi?id=649705 .

 

Low priority for Red Hat, of course, as there has been no response at all. They seem to forget that sometimes we have to give demonstrations to prospective customers, and the impression left by all these messages popping up on the console and in the logs is unforgettable.

 

Alvaro

 


From: linux-cluster-bounces@xxxxxxxxxx [mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of Martinez-Sanchez, Raul
Sent: Wednesday, June 15, 2011 2:55 PM
To: 'linux clustering'
Subject: Re: Cluster Failover Failed

 

Hi Alvaro,

 

I have also opened a ticket with Red Hat for the same reasons, on RHEL 5.6 with a DS5020 and a DS3524. I believe both arrays are active/active, yet multipath seems to treat them as active/passive; but I guess that is a topic for another mailing list.
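
(For anyone hitting the same thing: on RHEL 5, pointing multipath at the RDAC hardware handler for these DS-series arrays usually involves a multipath.conf device stanza along the lines of the sketch below. The vendor/product strings here are placeholders only; check them against what "multipath -ll" actually reports for your arrays.)

    devices {
            device {
                    # Placeholder vendor/product -- match these to what your
                    # DS5020/DS3524 really report before using this stanza
                    vendor                  "IBM"
                    product                 "1814"
                    hardware_handler        "1 rdac"
                    path_grouping_policy    group_by_prio
                    prio_callout            "/sbin/mpath_prio_rdac /dev/%n"
                    path_checker            rdac
                    failback                immediate
            }
    }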

 

Raúl

 

From: linux-cluster-bounces@xxxxxxxxxx [mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of Alvaro Jose Fernandez
Sent: Wednesday, June 15, 2011 1:15 PM
To: linux clustering
Subject: Re: Cluster Failover Failed

 

Hi,

 

DOC-35489 only partially addresses the problem. I see it too, on an active/passive IBM DS4000 array with RHEL 5.5. I excluded all SAN partitions from lvm.conf as the note describes (and also rebuilt the initrd, since lvm.conf is read at boot time because my / partition is on LVM), but the messages still appear on bootup. They always disappear once the multipathd service starts and its scsi_dh_rdac hardware handler is loaded.
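
(For reference, the lvm.conf change from the note amounts to something like the sketch below; /dev/sda is a placeholder for the local boot disk, so adjust the pattern to your system:)

    # /etc/lvm/lvm.conf, devices section: accept the local disk only and
    # reject everything else, so LVM never scans the passive SAN paths
    filter = [ "a|^/dev/sda|", "r|.*|" ]

    # rebuild the initrd so the filtered lvm.conf takes effect at boot (RHEL 5)
    mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)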

 

I even opened a case with Red Hat and got the same response (but no workaround): "it's entirely harmless, they are normal".

 

Alvaro

 


From: linux-cluster-bounces@xxxxxxxxxx [mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of Martinez-Sanchez, Raul
Sent: Wednesday, June 15, 2011 1:11 PM
To: 'Linux-cluster@xxxxxxxxxx'
Subject: Re: Cluster Failover Failed

 

Hi Balaji,

 

According to Red Hat documentation, some storage arrays configured in active/passive mode and used with multipath will display these I/O error messages, so this might be your case as well (see https://access.redhat.com/kb/docs/DOC-35489). That article indicates that the messages are harmless and can be avoided by following its instructions.

 

The logs you sent do not show anything related to fencing, so you might need to send the relevant information for that part of the problem.

 

Cheers,

 

Raúl

 

 

From: linux-cluster-bounces@xxxxxxxxxx [mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of Balaji S
Sent: Tuesday, June 14, 2011 7:46 PM
To: Linux-cluster@xxxxxxxxxx
Subject: Cluster Failover Failed

 

Hi,

In my setup I have implemented ten two-node clusters, each running MySQL as a cluster service with an IPMI card as the fencing device (see the cluster.conf sketch below).
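
(For context, IPMI fencing in a RHEL 5 cluster is typically defined in cluster.conf roughly along these lines; the hostname, address, and credentials below are placeholders, not real values:)

    <clusternode name="node1.example.com" nodeid="1">
            <fence>
                    <method name="1">
                            <device name="node1-ipmi"/>
                    </method>
            </fence>
    </clusternode>
    <!-- one fencedevice entry per node; placeholder address/credentials -->
    <fencedevices>
            <fencedevice agent="fence_ipmilan" name="node1-ipmi"
                         ipaddr="192.168.1.101" login="admin" passwd="secret"/>
    </fencedevices>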

 

In /var/log/messages I keep getting errors like the ones below:

 

Jun 14 12:50:48 hostname kernel: end_request: I/O error, dev sdm, sector 0

Jun 14 12:50:48 hostname kernel: sd 3:0:2:2: Device not ready: <6>: Current: sense key: Not Ready

Jun 14 12:50:48 hostname kernel:     Add. Sense: Logical unit not ready, manual intervention required

Jun 14 12:50:48 hostname kernel: 

Jun 14 12:50:48 hostname kernel: end_request: I/O error, dev sdn, sector 0

Jun 14 12:50:48 hostname kernel: sd 3:0:2:4: Device not ready: <6>: Current: sense key: Not Ready

Jun 14 12:50:48 hostname kernel:     Add. Sense: Logical unit not ready, manual intervention required

Jun 14 12:50:48 hostname kernel: 

Jun 14 12:50:48 hostname kernel: end_request: I/O error, dev sdp, sector 0

Jun 14 12:51:10 hostname kernel: sd 3:0:0:1: Device not ready: <6>: Current: sense key: Not Ready

Jun 14 12:51:10 hostname kernel:     Add. Sense: Logical unit not ready, manual intervention required

Jun 14 12:51:10 hostname kernel: 

Jun 14 12:51:10 hostname kernel: end_request: I/O error, dev sdc, sector 0

Jun 14 12:51:10 hostname kernel: printk: 3 messages suppressed.

Jun 14 12:51:10 hostname kernel: Buffer I/O error on device sdc, logical block 0

Jun 14 12:51:10 hostname kernel: sd 3:0:0:2: Device not ready: <6>: Current: sense key: Not Ready

Jun 14 12:51:10 hostname kernel:     Add. Sense: Logical unit not ready, manual intervention required

Jun 14 12:51:10 hostname kernel: 

Jun 14 12:51:10 hostname kernel: end_request: I/O error, dev sdd, sector 0

Jun 14 12:51:10 hostname kernel: Buffer I/O error on device sdd, logical block 0

Jun 14 12:51:10 hostname kernel: sd 3:0:0:4: Device not ready: <6>: Current: sense key: Not Ready

Jun 14 12:51:10 hostname kernel:     Add. Sense: Logical unit not ready, manual intervention required

 

 

When I check multipath -ll, all of these devices are on the passive path.

 

Environment :

 

RHEL 5.4 & EMC SAN
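
(For an EMC CLARiiON-class array on RHEL 5, the multipath device stanza usually resembles the sketch below; the values shown are common defaults and placeholders, so verify them against your array's documentation and your "multipath -ll" output before relying on them:)

    devices {
            device {
                    # Typical settings for an EMC CLARiiON-class array on
                    # RHEL 5; vendor/product are placeholders -- verify
                    # against what your SAN actually reports
                    vendor                  "DGC"
                    product                 ".*"
                    hardware_handler        "1 emc"
                    path_grouping_policy    group_by_prio
                    prio_callout            "/sbin/mpath_prio_emc /dev/%n"
                    path_checker            emc_clariion
                    failback                immediate
                    no_path_retry           60
            }
    }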

 

Please suggest how to overcome this issue. Any help would be greatly appreciated.

Thanks in advance.

 

--
Thanks,
BSK

 


