Hi all,

I am running CentOS 5.2 on a two-node physical cluster with Xen virtualisation and 4 domains clustered underneath. I am seeing the following in /var/log/messages on one of the physical nodes:
Oct 15 15:53:13 xen2 avahi-daemon[3363]: New relevant interface eth0.IPv4 for mDNS.
Oct 15 15:53:13 xen2 avahi-daemon[3363]: Joining mDNS multicast group on interface eth0.IPv4 with address 10.199.10.170.
Oct 15 15:53:14 xen2 avahi-daemon[3363]: Network interface enumeration completed.
Oct 15 15:53:14 xen2 avahi-daemon[3363]: Registering new address record for fe80::200:ff:fe00:0 on virbr0.
Oct 15 15:53:14 xen2 avahi-daemon[3363]: Registering new address record for 192.168.122.1 on virbr0.
Oct 15 15:53:14 xen2 avahi-daemon[3363]: Registering new address record for fe80::202:a5ff:fed9:ef74 on eth0.
Oct 15 15:53:14 xen2 avahi-daemon[3363]: Registering new address record for 10.199.10.170 on eth0.
Oct 15 15:53:14 xen2 avahi-daemon[3363]: Registering HINFO record with values 'I686'/'LINUX'.
Oct 15 15:53:15 xen2 avahi-daemon[3363]: Server startup complete. Host name is xen2.local. Local service cookie is 3231388299.
Oct 15 15:53:16 xen2 avahi-daemon[3363]: Service "SFTP File Transfer on xen2" (/services/sftp-ssh.service) successfully established.
Oct 15 15:53:23 xen2 xenstored: Checking store ...
Oct 15 15:53:23 xen2 xenstored: Checking store complete.
Oct 15 15:53:24 xen2 ccsd[2806]: Error: unable to evaluate xpath query "/cluster/fence_xvmd/@(null) "
Oct 15 15:53:24 xen2 ccsd[2806]: Error while processing get: Invalid argument
Oct 15 15:53:24 xen2 ccsd[2806]: Error: unable to evaluate xpath query "/cluster/fence_xvmd/@(null) "
Oct 15 15:53:24 xen2 ccsd[2806]: Error while processing get: Invalid argument
Oct 15 15:53:24 xen2 ccsd[2806]: Error: unable to evaluate xpath query "/cluster/fence_xvmd/@(null) "
Oct 15 15:53:24 xen2 ccsd[2806]: Error while processing get: Invalid argument
Oct 15 15:53:24 xen2 ccsd[2806]: Error: unable to evaluate xpath query "/cluster/fence_xvmd/@(null) "
Oct 15 15:53:24 xen2 ccsd[2806]: Error while processing get: Invalid argument
Oct 15 15:53:24 xen2 ccsd[2806]: Error: unable to evaluate xpath query "/cluster/fence_xvmd/@(null) "
Oct 15 15:53:24 xen2 ccsd[2806]: Error while processing get: Invalid argument
Oct 15 15:53:24 xen2 ccsd[2806]: Error: unable to evaluate xpath query "/cluster/fence_xvmd/@(null) "
Oct 15 15:53:24 xen2 ccsd[2806]: Error while processing get: Invalid argument
Oct 15 15:53:24 xen2 modclusterd: startup succeeded
Oct 15 15:53:24 xen2 clurgmgrd[3531]: <notice> Resource Group Manager Starting
Oct 15 15:53:25 xen2 oddjobd: oddjobd startup succeeded
Oct 15 15:53:26 xen2 saslauthd[3885]: detach_tty : master pid is: 3885
Oct 15 15:53:26 xen2 saslauthd[3885]: ipc_init : listening on socket: /var/run/saslauthd/mux
Oct 15 15:53:26 xen2 ricci: startup succeeded
Oct 15 15:53:39 xen2 clurgmgrd[3531]: <notice> Starting stopped service vm:hermes
Oct 15 15:53:39 xen2 clurgmgrd[3531]: <notice> Starting stopped service vm:hestia
Oct 15 15:53:43 xen2 kernel: tap tap-1-51712: 2 getting info
Oct 15 15:53:44 xen2 kernel: tap tap-1-51728: 2 getting info
Oct 15 15:53:45 xen2 kernel: device vif1.0 entered promiscuous mode
Oct 15 15:53:45 xen2 kernel: ADDRCONF(NETDEV_UP): vif1.0: link is not ready
Oct 15 15:53:47 xen2 kernel: tap tap-2-51712: 2 getting info
Oct 15 15:53:48 xen2 kernel: tap tap-2-51728: 2 getting info
Oct 15 15:53:48 xen2 kernel: device vif2.0 entered promiscuous mode
Oct 15 15:53:48 xen2 kernel: ADDRCONF(NETDEV_UP): vif2.0: link is not ready
Oct 15 15:53:49 xen2 clurgmgrd[3531]: <notice> Service vm:hestia started
Oct 15 15:53:49 xen2 clurgmgrd[3531]: <notice> Service vm:hermes started
Oct 15 15:53:53 xen2 kernel: blktap: ring-ref 8, event-channel 11, protocol 1 (x86_32-abi)
Oct 15 15:53:53 xen2 kernel: blktap: ring-ref 9, event-channel 12, protocol 1 (x86_32-abi)
Oct 15 15:53:53 xen2 kernel: blktap: ring-ref 8, event-channel 11, protocol 1 (x86_32-abi)
Oct 15 15:53:53 xen2 kernel: blktap: ring-ref 9, event-channel 12, protocol 1 (x86_32-abi)
Oct 15 15:54:23 xen2 kernel: ADDRCONF(NETDEV_CHANGE): vif2.0: link becomes ready
Oct 15 15:54:23 xen2 kernel: xenbr0: topology change detected, propagating
Oct 15 15:54:23 xen2 kernel: xenbr0: port 4(vif2.0) entering forwarding state
Oct 15 15:54:27 xen2 kernel: ADDRCONF(NETDEV_CHANGE): vif1.0: link becomes ready
Oct 15 15:54:27 xen2 kernel: xenbr0: topology change detected, propagating
Oct 15 15:54:27 xen2 kernel: xenbr0: port 3(vif1.0) entering forwarding state
Oct 15 15:56:15 xen2 clurgmgrd[3531]: <notice> Resource Groups Locked

My cluster.conf is as follows:
[root@xen1 cluster]# cat /etc/cluster/cluster.conf.15102008
<?xml version="1.0"?>
<cluster alias="XENCluster1" config_version="42" name="XENCluster1">
  <cman expected_votes="1" two_node="1"/>
  <clusternodes>
    <clusternode name="xen1" nodeid="1" votes="1">
      <fence>
        <method name="1">
          <device name="xen1-ilo"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="xen2" nodeid="2" votes="1">
      <fence>
        <method name="1">
          <device name="xen2-ilo"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <fencedevice agent="fence_ilo" hostname="xen1-ilo" login="root" name="xen1-ilo" passwd="deckard1"/>
    <fencedevice agent="fence_ilo" hostname="xen2-ilo" login="root" name="xen2-ilo" passwd="deckard1"/>
  </fencedevices>
  <rm>
    <failoverdomains>
      <failoverdomain name="xen1-failover" nofailback="0" ordered="1" restricted="0">
        <failoverdomainnode name="xen1" priority="1"/>
      </failoverdomain>
      <failoverdomain name="xen2-failover" nofailback="0" ordered="1" restricted="0">
        <failoverdomainnode name="xen2" priority="2"/>
      </failoverdomain>
    </failoverdomains>
    <resources/>
    <vm autostart="1" domain="xen2-failover" exclusive="0" migrate="live" name="hermes" path="/guests" recovery="relocate"/>
    <vm autostart="1" domain="xen2-failover" exclusive="0" migrate="live" name="hestia" path="/guests" recovery="relocate"/>
    <vm autostart="1" domain="xen1-failover" exclusive="0" migrate="live" name="aether" path="/guests" recovery="relocate"/>
    <vm autostart="1" domain="xen1-failover" exclusive="0" migrate="live" name="athena" path="/guests" recovery="relocate"/>
  </rm>
  <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
  <totem token="21000"/>
  <fence_xvmd/>
</cluster>

Does anybody know what these messages mean?
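In case it is relevant: the only mention of fence_xvmd in that file is the bare <fence_xvmd/> element, so I wondered whether ccsd is being asked for attributes that simply are not set. Below is the sort of thing I was thinking of trying, with the element spelled out explicitly. The attribute names and default values are just my reading of the fence_xvmd man page and are untested here, so treat them as assumptions:

<!-- Untested sketch: fence_xvmd with explicit attributes; names and defaults are my assumption from the man page -->
<fence_xvmd multicast_address="225.0.0.12" port="1229" key_file="/etc/cluster/fence_xvm.key"/>

I have not put this into the live config yet, as I am not sure the errors are actually caused by missing attributes.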
My domain cluster.conf is as follows:
<?xml version="1.0"?>
<cluster alias="XENCluster2" config_version="13" name="XENCluster2"> <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/> <clusternodes> <clusternode name="athena.private.lan" nodeid="1" votes="1"> <fence> <method name="1"> <device domain="athena" name="virtual_fence"/> </method> </fence> </clusternode> <clusternode name="aether.private.lan" nodeid="2" votes="1"> <fence> <method name="1"> <device domain="aether" name="virtual_fence"/> </method> </fence> </clusternode> <clusternode name="hermes.private.lan" nodeid="3" votes="1"> <fence> <method name="1"> <device domain="hermes" name="virtual_fence"/> </method> </fence> </clusternode> <clusternode name="hestia.private.lan" nodeid="4" votes="1"> <fence> <method name="1"> <device domain="hestia" name="virtual_fence"/> </method> </fence> </clusternode> </clusternodes> <cman/> <fencedevices> <fencedevice agent="fence_xvm" name="virtual_fence"/> </fencedevices> <rm> <failoverdomains/> <resources/> </rm> <fence_xvmd/> </cluster> Thanks
Thanks

John
--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster