Re: RedHat RHEL 5U4 NFS Cluster nodes randomly reboot


 



I think I may have found my problem. However, I am not sure how to fix it.

I noticed that node one shows the quorum disk to be /dev/dm-2 and node 2 shows the quorum disk to be /dev/dm-3

How can I change these?
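[Editor's note: the usual way to avoid depending on the /dev/dm-N names, which are not stable across nodes or reboots, is to reference the quorum disk by label rather than by device path. A sketch only, assuming the qdiskd tooling from RHEL 5 Cluster Suite; the device path and the label "qdisk" are placeholders:]

```shell
# Create (or re-create) the quorum disk with a label; the device path
# and the label name here are examples, not taken from this cluster.
mkqdisk -c /dev/mapper/mpath1 -l qdisk

# Verify that both nodes see the same label, regardless of dm-N numbering:
mkqdisk -L

# Then, in /etc/cluster/cluster.conf, reference the label instead of a
# device path, e.g.:
#   <quorumd interval="1" tko="10" label="qdisk"/>
```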


Bennie Thomas wrote:
Hello Steve,

I have the cluster traffic going over a private interface; the nodes are now connected point-to-point. I have thought about network traffic, but with a dedicated interface I should not have this problem.

I have built numerous clusters from different vendors. This particular cluster is active/passive, and I have noticed
that the passive node is the one that reboots most often.

Thanks for any/all input....



Steven Whitehouse wrote:
Hi,

On Wed, 2010-06-30 at 09:22 -0500, Randy Zagar wrote:
Yes. My experience is that you can't currently nfs-export *any* GFS or GFS2 filesystems.

You can, but only a fairly small number of configurations will actually
work out of the larger set of possible ones. We do hope to expand that a
bit in the future, but for the time being it's best to stick to an
active/passive failover export which is not mixed with any other
protocol (such as Samba) or with any local applications.
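[Editor's note: for reference, an active/passive NFS export is usually expressed as a single failover service in /etc/cluster/cluster.conf, so that rgmanager moves the filesystem, the export, and the floating IP together. A minimal sketch only; the names, device, mountpoint, and IP address are placeholders, and a real configuration also needs fencing and failover domains defined:]

```xml
<rm>
  <!-- One service owns the filesystem, the export, and the floating IP;
       rgmanager relocates the whole group to the passive node on failover. -->
  <service name="nfs-svc" autostart="1" recovery="relocate">
    <ip address="192.168.1.100" monitor_link="1"/>
    <fs name="data-fs" device="/dev/mapper/mpath2"
        mountpoint="/export/data" fstype="ext3">
      <nfsexport name="data-export">
        <nfsclient name="clients" target="*" options="rw,sync"/>
      </nfsexport>
    </fs>
  </service>
</rm>
```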

Exporting EXT3/EXT4 filesystems, however, doesn't appear to be a problem.

-Randy Zagar <zagar@xxxxxxxxxxxxxxxx>

On 06/28/2010 05:22 PM, linux-cluster-request@xxxxxxxxxx wrote:
From: Bennie Thomas<Bennie_R_Thomas@xxxxxxxxxxxx>
Subject: RedHat RHEL 5U4 NFS Cluster nodes randomly reboot
Message-ID:<4C290BE0.3090707@xxxxxxxxxxxx>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

I currently have two DL380 G6 servers with an HP MSA2312 disk array,
running Red Hat 5u4 64-bit. I have a quorum disk, and I use the cluster
as an active/passive NFS cluster.
The problem I am having is that one or both of the nodes will randomly
reboot. Has anyone experienced this problem?

Is the node being fenced? This might be down to excessive network
traffic blocking the cluster traffic and making it appear as if the node
is down when it isn't, or something similar to that. Do you get any log
messages?
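[Editor's note: a few commands that can help tell a fence event apart from a spontaneous reboot. A diagnostic sketch, assuming the RHEL 5 cluster tools; the log path may differ on other setups:]

```shell
# Was the node fenced? Fence events are logged by fenced on the
# surviving node, so check there, not on the node that rebooted.
grep -i fence /var/log/messages

# Quorum, heartbeat, and membership state as seen by CMAN:
cman_tool status
cman_tool nodes

# Service and member state as seen by rgmanager:
clustat
```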

Steve.

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster




--
Bennie Thomas
Sr. Information Systems Technologist II
Raytheon Company

972.205.4126
972.205.6363 fax
888.347.1660 pager
Bennie_R_Thomas@xxxxxxxxxxxx


DISCLAIMER: This message contains information that may be confidential and privileged. Unless you are the addressee (or authorized to receive mail for the addressee), you should not use, copy or disclose to anyone this message or any information contained in this message. If you have received this message in error, please so advise the sender by reply e-mail and delete this message. Thank you for your cooperation.

Any views or opinions presented are solely those of the author and do not necessarily represent those of Raytheon unless specifically stated. Electronic communications including email may be monitored by Raytheon
for operational or business reasons.





