Re: Cluster node hangs

Sorry for the delay, friends. Actually, the logs are scattered across different log files:

 

1. For rgmanager logs I have configured /var/log/cluster.log.

2. Other cluster logs are going to the messages file. At present I am trying to find a way to gather all the logs in one file other than messages. It seems I can use the <logging> feature in cluster.conf (see the sketch below); comments?
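
For reference, a minimal sketch of what that <logging> block could look like on a RHEL 6-style stack; the local5 facility and the file paths are assumptions, not taken from the poster's config, so check the cluster.conf(5) man page for the attributes your release supports:

    <logging to_syslog="yes" to_logfile="yes"
             syslog_facility="local5"
             logfile="/var/log/cluster/cluster.log">
        <!-- per-daemon override: keep rgmanager in its own file -->
        <logging_daemon name="rgmanager"
                        logfile="/var/log/cluster/rgmanager.log"/>
    </logging>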

 

I have OpenLDAP logging enabled on this server, which also uses the local4 facility, so the cluster and LDAP logs are getting mixed up.
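
One way to untangle the two, assuming the cluster is moved off local4 as in the sketch above, is to give each facility its own target in /etc/syslog.conf or /etc/rsyslog.conf (the log file names below are illustrative):

    # OpenLDAP stays on local4
    local4.*        /var/log/ldap.log
    # cluster daemons, once pointed at local5 via <logging>
    local5.*        /var/log/cluster.log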

 

 

From: linux-cluster-bounces@xxxxxxxxxx [mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of dOminic
Sent: Sunday, February 13, 2011 8:03 PM
To: linux clustering
Subject: Re: Cluster node hangs

 

Hi,

 

What's the message you are getting in the logs? It would be great if you could attach the log messages along with cluster.conf.

 

-dominic 

 

On Sun, Feb 13, 2011 at 3:49 PM, Sachin Bhugra <sachinbhugra@xxxxxxxxxxx> wrote:

Thanks for the reply and the link. However, GFS2 is not listed in fstab; it is only handled by the cluster config.
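
(For readers following along: "handled by the cluster config" here means an rgmanager <clusterfs> resource rather than an fstab entry. A hedged sketch, with the device path and resource name invented for illustration:)

    <resources>
        <!-- GFS2 mount managed by rgmanager instead of fstab -->
        <clusterfs name="gfsdata" mountpoint="/gfs"
                   device="/dev/vg_cluster/lv_gfs2"
                   fstype="gfs2" force_unmount="1"/>
    </resources>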


Date: Sun, 13 Feb 2011 10:52:51 +0100
From: ekuric@xxxxxxxxxx
To: linux-cluster@xxxxxxxxxx
Subject: Re: Cluster node hangs



On 02/13/2011 10:41 AM, Elvir Kuric wrote:

On 02/13/2011 10:14 AM, Sachin Bhugra wrote:

Hi,

I have set up a two-node cluster in the lab with VMware Server, and hence used manual fencing. It includes an iSCSI GFS2 partition and serves Apache in active/passive mode.

The cluster works and I am able to relocate the service between nodes with no issues. However, the problem comes when, for testing, I shut down the node that is currently holding the service. When the node becomes unavailable, the service gets relocated and the GFS partition gets mounted on the other node; however, it is not accessible. If I try to run "ls" or "du" on the GFS partition, the command hangs. Meanwhile, the node that was shut down gets stuck at "unmounting file system".

I tried using fence_manual -n nodename and then fence_ack_manual -n nodename; however, the behavior remains the same.
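
(A note for anyone reproducing this: with manual fencing, the surviving node blocks GFS2 access until the fence is acknowledged, so it is worth checking the membership and fence state before touching the file system. A sketch, assuming a CMAN-based stack:)

    # is the dead node still listed as a cluster member?
    cman_tool nodes
    # look for fence/dlm groups stuck waiting
    group_tool ls
    # then acknowledge the manual fence for the failed node
    fence_ack_manual -n nodename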

Can someone please tell me what I am doing wrong?

Thanks,



It would be good to see the /etc/fstab configuration used on the cluster nodes. If the /gfs partition is mounted manually, it will not be unmounted correctly when you restart the node (without executing umount before the restart), and it will hang during the shutdown/reboot process.
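
For comparison, when a GFS2 file system is listed in fstab, the linked documentation recommends the _netdev mount option so the file system is unmounted before the network and cluster services stop; a typical entry (device path illustrative) would look like:

    /dev/vg_cluster/lv_gfs2  /gfs  gfs2  defaults,_netdev  0 0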

More at:  http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html-single/Global_File_System_2/index.html


Edit: in the above link, see section 3.4, "Special Considerations when Mounting GFS2 File Systems".



Regards,

Elvir

 

 



 

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
