Re: clusvcadm

OK, I got this when running dlm_tool lockdebug rgmanager:

# dlm_tool lockdebug rgmanager 


Resource ffff880082e131c0 Name (len=22) "rg="vm:wadev.domain""  

Master Copy

Granted Queue

00c1054d NL Remote:   3 00c0003f

02790500 NL Remote:   2 022d9a84

03830554 EX

Conversion Queue

Waiting Queue


Resource ffff880082e132c0 Name (len=8) "usrm::vf"  

Local Copy, Master is node 2

Granted Queue

Conversion Queue

Waiting Queue


Thanks!

Paras.



On Wed, May 7, 2014 at 5:01 PM, emmanuel segura <emi2fast@xxxxxxxxx> wrote:
Run mount -t debugfs none /sys/kernel/debug/. I know this happens when a fencing call has had a problem.
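
For example, a minimal sketch, assuming debugfs simply is not mounted on that node (the /sys/kernel/debug/dlm/ path is the one from your error message):

# mount debugfs so the dlm debug files appear, then retry
mount -t debugfs none /sys/kernel/debug
ls /sys/kernel/debug/dlm/
dlm_tool lockdebug rgmanager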


2014-05-07 23:45 GMT+02:00 Paras pradhan <pradhanparas@xxxxxxxxx>:

Yeah, they work fine. This started when we had a network problem.

I see this:

dlm_tool lockdebug rgmanager 

can't open /sys/kernel/debug/dlm/rgmanager_locks: No such file or directory





On Wed, May 7, 2014 at 4:34 PM, emmanuel segura <emi2fast@xxxxxxxxx> wrote:
dlm_tool lockdebug rgmanager or dlm_tool lockdump rgmanager. Anyway, can you tell me when this problem started happening? Are you sure your fencing is working OK?
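
One quick way to check, as a sketch (assuming the RHEL 5 default log location; the exact fence messages may differ on your build):

# look for fence actions or failures around the time of the network problem
grep -i fence /var/log/messages
# and confirm that all cluster members look sane
cman_tool nodes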


2014-05-07 23:01 GMT+02:00 Paras pradhan <pradhanparas@xxxxxxxxx>:

"dlm_tools ls lockdebug" you mean?

"dlm_tool ls" returns

--

Usage:


dlm_tool [options] [join|leave|lockdump|lockdebug]


Options:

  -v               Verbose output

  -d <n>           Resource directory off/on (0/1), default 0

  -m <mode>        Permission mode for lockspace device (octal), default 0600

  -M               Print MSTCPY locks in lockdump (remote locks, locally mastered)

  -h               Print this help, then exit

  -V               Print program version information, then exit

--





On Wed, May 7, 2014 at 3:40 PM, emmanuel segura <emi2fast@xxxxxxxxx> wrote:
dlm_tool ls?


2014-05-07 21:05 GMT+02:00 Paras pradhan <pradhanparas@xxxxxxxxx>:
Well, I have a qdisk with 3 votes. That's why it is 6.

Here is the log. I see some GFS hangs, but no issues with the GFS mounts at this time.


I am also seeing this in clumond.log; not sure if it is related, or what it means:

Mon May  5 21:58:20 2014 clumond: Peer (vprd3.domain): pruning queue 23340->11670

Tue May  6 01:38:57 2014 clumond: Peer (vprd3.domain): pruning queue 23340->11670

Tue May  6 01:39:02 2014 clumond: Peer (vprd1.domain): pruning queue 23340->11670


Thanks
Paras


On Wed, May 7, 2014 at 1:51 PM, emmanuel segura <emi2fast@xxxxxxxxx> wrote:
Where is your log?

I don't think this is the problem, but anyway, in your config I saw <cman expected_votes="6"...

From man cman:

Expected votes
       The expected votes value is used by cman to determine quorum. The cluster is quorate if the sum of votes of existing members is
       over half of the expected votes value. By default, cman sets the expected votes value to be the sum of votes of all nodes listed
       in cluster.conf. This can be overridden by setting an explicit expected_votes value as follows:

If you remove expected_votes="6", cman will set this parameter to 3 (the sum of the node votes listed in cluster.conf).
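
To spell out the quorum math, a sketch (the field names are as cman_tool prints them on RHEL 5; verify on your nodes):

# quorate means votes over expected_votes/2: with expected_votes="6" you need at least 4
# three 1-vote nodes alone give only 3, so the 3-vote qdisk is required for quorum
cman_tool status | grep -iE 'expected votes|total votes|quorum'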



2014-05-07 20:38 GMT+02:00 emmanuel segura <emi2fast@xxxxxxxxx>:

From your previous output of cman_tool services:

[1 2 3]

dlm              1     rgmanager        00030001 none      


2014-05-07 20:24 GMT+02:00 Paras pradhan <pradhanparas@xxxxxxxxx>:

Oh. How did you see that?

Here is the cluster.conf http://pastebin.com/DveLMGXT

Thanks!
-Paras.


On Wed, May 7, 2014 at 1:07 PM, emmanuel segura <emi2fast@xxxxxxxxx> wrote:
I saw that your rgmanager lockspace is there. Do you see any errors in your messages log? Can you show your cluster config?


2014-05-07 19:52 GMT+02:00 Paras pradhan <pradhanparas@xxxxxxxxx>:

That looks good.

# cman_tool services

type             level name             id       state       

fence            0     default          00010001 none        

[1 2 3]

dlm              1     clvmd            00020001 none        

[1 2 3]

dlm              1     guest_comp_vms1  00020003 none        

[1 2 3]

dlm              1     guest_comp_vms2  00040003 none        

[1 2 3]

dlm              1     guest_comp_vms3  00060003 none        

[1 2 3]

dlm              1     rgmanager        00030001 none        

[1 2 3]

gfs              2     guest_comp_vms1  00010003 none        

[1 2 3]

gfs              2     guest_comp_vms2  00030003 none        

[1 2 3]

gfs              2     guest_comp_vms3  00050003 none        

[1 2 3]



On Wed, May 7, 2014 at 12:46 PM, emmanuel segura <emi2fast@xxxxxxxxx> wrote:
cman_tool services?


2014-05-07 19:28 GMT+02:00 hugo aldunce <haldunce@xxxxxxxxx>:

Ohh, good luck!


2014-05-07 11:14 GMT-04:00 Paras pradhan <pradhanparas@xxxxxxxxx>:
Hi,
We had a network problem the other day, and today I noticed that the clusvcadm commands are not working. For example, it will not stop a service, migrate a VM, etc. On one of the nodes, clustat does not show any running services. Should I restart rgmanager?

This is RHEL 5.

Thanks
Paras.




--
---------------------------------------------------------------------------------------------------------------------
Hugo Aldunce E
Tel. 09 82121045
mail: haldunce@xxxxxxxxx
---------------------------------------------------------------------------------------------------------------------




--
this is my life and I live it as long as God wills


-- 
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
