On 08/13/12 16:38, Chip Burke wrote:
Ricci is seemingly not working through either Luci or cman_tool. There doesn't seem to be a lot of logging to go on (at least I haven't found it), but what I did find in the Luci log is as follows:

15:51:06,793 ERROR [luci.lib.ricci_communicator] Error receiving header from node2.domain.local:11111
Traceback (most recent call last):
  File "/usr/lib64/python2.6/site-packages/luci/lib/ricci_communicator.py", line 121, in __init__
    hello = self.__receive(self.__timeout_init)
  File "/usr/lib64/python2.6/site-packages/luci/lib/ricci_communicator.py", line 503, in __receive
    errstr = _('Error reading from %s:%d: %s') \
  File "/usr/lib/python2.6/site-packages/pylons/i18n/translation.py", line 106, in ugettext
    return pylons.translator.ugettext(value)
  File "/usr/lib/python2.6/site-packages/paste/registry.py", line 137, in __getattr__
    return getattr(self._current_obj(), attr)
  File "/usr/lib/python2.6/site-packages/paste/registry.py", line 197, in _current_obj
    'thread' % self.____name__)
TypeError: No object (name: translator) has been registered for this thread
15:51:06,793 ERROR [luci.lib.ricci_helpers] Error receiving header from node2.XXXX.local:11111
15:51:06,793 ERROR [luci.lib.ricci_helpers] Error retrieving batch number from node3.XXXXX.local: Error receiving header from node3.XXXXX.local:11111

Cluster config I am trying to push:

<?xml version="1.0"?>
<cluster config_version="27" name="Xanadu">
  <clusternodes>
    <clusternode name="xanadunode1" nodeid="1">
      <fence>
        <method name="Method">
          <device name="VMWare_Fence" port="XanaduNode1" ssl="on"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="xanadunode2" nodeid="2">
      <fence>
        <method name="Method">
          <device name="VMWare_Fence" port="XanaduNode2" ssl="on"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="xanadunode3" nodeid="3">
      <fence>
        <method name="Method">
          <device name="VMWare_Fence" port="XanaduNode3" ssl="on"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <cman expected_votes="6"/>
  <fencedevices>
    <fencedevice agent="fence_vmware_soap" ipaddr="vsphere.XXXXXX.local" login="vmwarefence" name="VMWare_Fence" passwd="XXXXXXXX"/>
  </fencedevices>
  <quorumd label="quorum" votes="3"/>
</cluster>

Running config:

<?xml version="1.0"?>
<cluster config_version="26" name="Xanadu">
  <clusternodes>
    <clusternode name="xanadunode1" nodeid="1">
      <fence>
        <method name="Method">
          <device name="VMWare_Fence" port="XanaduNode1" ssl="on"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="xanadunode2" nodeid="2">
      <fence>
        <method name="Method">
          <device name="VMWare_Fence" port="XanaduNode2" ssl="on"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="xanadunode3" nodeid="3">
      <fence>
        <method name="Method">
          <device name="VMWare_Fence" port="XanaduNode3" ssl="on"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <cman expected_votes="5"/>
  <fencedevices>
    <fencedevice agent="fence_vmware_soap" ipaddr="vsphere.XXXXX.local" login="vmwarefence" name="VMWare_Fence" passwd="XXXXXX"/>
  </fencedevices>
  <quorumd label="quorum" votes="2"/>
</cluster>

Any ideas? SCP and reboots are fun and all, but I would love Ricci to work.
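Note that the TypeError at the bottom of the traceback is a secondary failure: luci crashed while trying to translate its own error string (the _('Error reading from %s:%d: %s') call), so the real problem is the failed read from ricci on port 11111. A quick reachability check, as a minimal sketch assuming stock RHEL 6 service names and the default ricci port (the hostname below is just the example node from the log):

# On each cluster node: is the ricci daemon running and enabled?
service ricci status
chkconfig --list ricci

# From the luci host: is TCP 11111 reachable at all?
nc -zv node2.domain.local 11111

# ricci speaks SSL on 11111; test the handshake as well
openssl s_client -connect node2.domain.local:11111 </dev/null

If the port is closed, check iptables on the node and confirm the ricci service has actually been started.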
Can you try using ccs to get the current configuration of that node:

ccs -h <host name> --getconf

as well as using ccs to try to set the conf on that node?

ccs -f <cluster.conf file> -h <host name> --setconf

This should let us narrow down whether it's an issue with ricci or luci. A concrete example follows below.

Thanks!
Chris
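For example, with the node names from the posted cluster.conf (illustrative; substitute your own hostnames and file path, and expect ccs to prompt for each node's ricci password):

# Pull the configuration currently held by one node
ccs -h xanadunode2 --getconf

# Push the candidate version-27 config to the same node
ccs -f /etc/cluster/cluster.conf -h xanadunode2 --setconf

If --getconf succeeds but pushing through luci still fails, the problem is likely on the luci side; if both ccs calls fail as well, suspect ricci itself or the network path to port 11111.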
--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster