Re: CLVMD hangs on 2nd node startup and hangs all gfs nodes.

I just got a blank page for your cluster.conf. It must be /really/ slimmed down. ;)

 brassow

On Apr 28, 2008, at 7:15 AM, Tracey Flanders wrote:



I tested creating a GFS disk with two nodes up in the cluster, without LVM and with clvmd stopped. I mounted the disk on the first node, but when I mounted it on the second node it did the same thing, so it seems it's something other than clvmd. I've attached my cluster.conf. It's kind of dumbed down because I was troubleshooting, so I removed the GFS mount from the services, etc.
This is the config I used.
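For reference, a stripped-down three-node cluster.conf of this general shape (the cluster name, node names, and fence setup here are placeholders, not my actual values) looks like:

<?xml version="1.0"?>
<cluster name="testcluster" config_version="1">
        <clusternodes>
                <clusternode name="node1" nodeid="1" votes="1">
                        <fence>
                                <method name="single">
                                        <device name="human" nodename="node1"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="node2" nodeid="2" votes="1">
                        <fence>
                                <method name="single">
                                        <device name="human" nodename="node2"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="node3" nodeid="3" votes="1">
                        <fence>
                                <method name="single">
                                        <device name="human" nodename="node3"/>
                                </method>
                        </fence>
                </clusternode>
        </clusternodes>
        <fencedevices>
                <fencedevice name="human" agent="fence_manual"/>
        </fencedevices>
</cluster>
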
If you suspect a problem with clvmd, you could simply remove it from
the equation and retest, right?

You could just use the underlying iSCSI device and run mkfs.gfs on it
directly, at least to test whether clvmd is the problem.
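For example (device path and names are made up; the -t value has to be <clustername>:<fsname> from your cluster.conf):

# make a GFS filesystem straight on the shared iSCSI LUN, no LVM/clvmd
mkfs.gfs -p lock_dlm -t testcluster:testgfs -j 3 /dev/sdb

# mount it on the first node, then the second, and see if the hang recurs
mount -t gfs /dev/sdb /mnt/gfs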

I suppose you could also test whether clvmd is the problem by
exercising the logical volumes without GFS in the mix. In other
words, create some LVs and read/write to them at the same time from
different machines. If that works, the file system should work; if
the file system still doesn't, the problem is probably higher up the
stack than clvmd.
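Something along these lines (names are examples; the VG needs the clustered flag so clvmd manages its locking):

# on one node, set up a test LV on the shared disk
pvcreate /dev/sdb
vgcreate -c y testvg /dev/sdb
lvcreate -L 512M -n testlv testvg

# then from two nodes at once, read and write the LV directly
dd if=/dev/zero of=/dev/testvg/testlv bs=1M count=100    # node 1
dd if=/dev/testvg/testlv of=/dev/null bs=1M count=100    # node 2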

 brassow

On Apr 25, 2008, at 11:34 AM, Tracey Flanders wrote:

I've been trying to set up a 3-server cluster with GFS mounted over
iSCSI on Qemu virtual machines; a 4th server acts as the iSCSI
target. I found an article that describes my issue, but I can't seem
to figure out what the solution is. Quoted from http://kbase.redhat.com/faq/FAQ_51_10923.shtm:

After successfully setting up a cluster, cman_tool shows the
cluster is healthy. Mounting the gfs mount on the first node works
successfully. However, when mounting gfs on the second node, the
mount command hangs. Writing to a file on the first node also hangs.
On the second node, the following error is seen in /var/log/messages:

Jul 18 14:49:27 blade3 kernel: Lock_Harness 2.6.9-72.2 (built Apr 24 2007 12:45:55) installed
Jul 18 14:49:27 blade3 kernel: GFS 2.6.9-72.2 (built Apr 24 2007 12:46:12) installed
Jul 18 14:52:53 blade3 kernel: GFS: Trying to join cluster "lock_dlm", "vcomcluster:testgfs"
Jul 18 14:52:53 blade3 kernel: Lock_DLM (built Apr 24 2007 12:45:57) installed
Jul 18 14:52:53 blade3 kernel: dlm: connect from non cluster node
Jul 18 14:52:53 blade3 kernel: dlm: connect from non cluster node

END QUOTE

My virtual machines only have one interface, so I still can't figure
out why this is happening. I can successfully mount the GFS partition
on any one node, but as soon as I start clvmd on a 2nd node it hangs
the whole cluster. I'm wondering if it's a Qemu VM network issue.
Each host can ping every other host by name and IP. The cluster works
fine, but I can't get GFS to work on the VMs. Is it possible to debug
clvmd to see what IP address it is sending?

Thanks,
Tracey Flanders
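One way to check which addresses the cluster stack is using (a sketch; exact option behavior varies between cman/clvmd versions):

# show cluster members along with the addresses cman has for them
cman_tool nodes -a
cman_tool status

# run clvmd in the foreground with debug output rather than as a daemon
clvmd -d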


--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster

