RE: CLVM <SOLVED>

Hi,

The problem has been solved. There was nothing wrong with the setup; it
was the router that was causing the problems (the internal router runs
RHEL4 and forwarding was not properly configured). I found out about the
problem when I decided to start cman on the external-facing network
interface (after making the necessary changes to /etc/hosts and
cluster.conf) instead of the private one.
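
For anyone checking the same thing, verifying IPv4 forwarding on a RHEL4
router is quick (a minimal sketch, with an illustrative prompt; cman's
multicast traffic may need routing configuration beyond plain forwarding):

[root@router ~]# sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 0                    <- forwarding disabled
[root@router ~]# sysctl -w net.ipv4.ip_forward=1
[root@router ~]# echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf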

To specify the hostname the cman service uses, run:

cman_tool join -n "hostname.domain.com"

Hope the information helps anyone who encounters the same problem.

Regards,
Bernard Chew

-----Original Message-----
From: linux-cluster-bounces@xxxxxxxxxx
[mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of Bernard Chew
Sent: Monday, July 09, 2007 11:54 PM
To: linux clustering
Subject: RE:  CLVM

Hi,

I should probably provide the output of "cman_tool nodes" as well (in
addition to the information in my earlier message below) to make the
picture clearer...

[root@server2 ~]# cman_tool nodes
Node  Votes Exp Sts  Name
   1    1    3   M   server3 <- first node which started CLVMD
   3    1    3   M   server2

Regards,
Bernard

-------------------------

-----Original Message-----
From: Bernard Chew 
Sent: Monday, July 09, 2007 10:46 PM
To: 'linux clustering'
Subject: RE:  CLVM

Hi,

Thanks for the quick reply. Here is the info that I gathered:

--------------------------------------------
Node 1 (which started CLVMD successfully):

[root@server3]# cat /proc/cluster/dlm_debug
clvmd move flags 0,1,0 ids 0,3,0
clvmd move use event 3
clvmd recover event 3 (first)
clvmd add nodes
clvmd total nodes 1
clvmd rebuild resource directory
clvmd rebuilt 0 resources
clvmd recover event 3 done
clvmd move flags 0,0,1 ids 0,3,3
clvmd process held requests
clvmd processed 0 requests
clvmd recover event 3 finished
clvmd move flags 1,0,0 ids 3,3,3
clvmd move flags 0,1,0 ids 3,4,3
clvmd move use event 4
clvmd recover event 4
clvmd add node 3

[root@server3 ~]# cman_tool services
Service          Name                              GID LID State     Code
Fence Domain:    "default"                           1   2 run       -
[1 2 3]

DLM Lock Space:  "clvmd"                             2   3 update    U-4,1,3
[1 3]

--------------------------------------------
Node 2 (which just waits forever):

[root@server2 ~]# cat /proc/cluster/dlm_debug
clvmd move flags 0,1,0 ids 0,2,0
clvmd move use event 2
clvmd recover event 2 (first)
clvmd add nodes

[root@server2 ~]# cman_tool services
Service          Name                              GID LID State     Code
Fence Domain:    "default"                           1   2 run       -
[1 2 3]

DLM Lock Space:  "clvmd"                             2   3 join      S-6,20,2
[1 3]
--------------------------------------------

Thanks for any help,
Bernard

-----Original Message-----
From: linux-cluster-bounces@xxxxxxxxxx
[mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of Robert Gil
Sent: Monday, July 09, 2007 8:29 PM
To: linux clustering
Subject: RE:  CLVM

What error do you get? 


Robert Gil
Linux Systems Administrator
American Home Mortgage

-----Original Message-----
From: linux-cluster-bounces@xxxxxxxxxx
[mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of Bernard Chew
Sent: Saturday, July 07, 2007 3:57 AM
To: linux-cluster@xxxxxxxxxx
Subject:  CLVM

Hi,

I have problems starting clvmd on a second node (after starting it
successfully on the first node) of a newly created 4-node cluster; there
are no problems starting the service for the first time on any node.
Running "cman_tool services" shows that the second node which started
clvmd stays in a "join" state while the first node shows an "update"
state. This remains the case even after a long period of time.
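
For context, the daemons were brought up in the usual RHEL4 Cluster Suite
init-script order (a sketch of the sequence, not a verbatim transcript);
clvmd is the step that hangs on the second node:

service ccsd start
service cman start
service fenced start
service clvmd start    <- hangs here on the second node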

Given that the directories (i.e. /var, /usr, /) were created using the
default LVM manager during installation, and that I installed
lvm2-cluster-2.02.06-7.0.RHEL4.x86_64.rpm afterwards as part of the
requirements for setting up GFS, could this cause clvmd not to start
properly? I have no problems starting ccsd, cman and fenced.
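
For what it's worth, one thing worth double-checking after installing
lvm2-cluster is that cluster locking is enabled in /etc/lvm/lvm.conf on
every node (a sketch, assuming the lvmconf helper shipped with the
package):

[root@server2 ~]# grep locking_type /etc/lvm/lvm.conf
    locking_type = 3
[root@server2 ~]# lvmconf --enable-cluster    <- enables clustered locking if not set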

Thanks in advance,
Bernard Chew
IT Operations Engineer


--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
