It looks like no. Can you send the output of clustat from when the VM is
running on multiple nodes at the same time? And, by the way, another one
after it has been stopped (clusvcadm -s vm:guest1)?

2009/9/25 Paras pradhan <pradhanparas@xxxxxxxxx>:
> Anyone having an issue like mine? The virtual machine service is not being
> properly handled by the cluster.
>
>
> Thanks
> Paras.
>
> On Mon, Sep 21, 2009 at 9:55 AM, Paras pradhan <pradhanparas@xxxxxxxxx> wrote:
>> Ok.. here is my cluster.conf file
>>
>> --
>> [root@cvtst1 cluster]# more cluster.conf
>> <?xml version="1.0"?>
>> <cluster alias="test" config_version="9" name="test">
>>     <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
>>     <clusternodes>
>>         <clusternode name="cvtst2" nodeid="1" votes="1">
>>             <fence/>
>>         </clusternode>
>>         <clusternode name="cvtst1" nodeid="2" votes="1">
>>             <fence/>
>>         </clusternode>
>>         <clusternode name="cvtst3" nodeid="3" votes="1">
>>             <fence/>
>>         </clusternode>
>>     </clusternodes>
>>     <cman/>
>>     <fencedevices/>
>>     <rm>
>>         <failoverdomains>
>>             <failoverdomain name="myfd1" nofailback="0" ordered="1" restricted="0">
>>                 <failoverdomainnode name="cvtst2" priority="3"/>
>>                 <failoverdomainnode name="cvtst1" priority="1"/>
>>                 <failoverdomainnode name="cvtst3" priority="2"/>
>>             </failoverdomain>
>>         </failoverdomains>
>>         <resources/>
>>         <vm autostart="1" domain="myfd1" exclusive="0" max_restarts="0"
>>             name="guest1" path="/vms" recovery="restart" restart_expire_time="0"/>
>>     </rm>
>> </cluster>
>> [root@cvtst1 cluster]#
>> ------
>>
>> Thanks!
>> Paras.
>>
>>
>> On Sun, Sep 20, 2009 at 9:44 AM, Volker Dormeyer <volker@xxxxxxxxxxxx> wrote:
>>> On Fri, Sep 18, 2009 at 05:08:57PM -0500,
>>> Paras pradhan <pradhanparas@xxxxxxxxx> wrote:
>>>> I am using the cluster suite for HA of Xen virtual machines. Now I am
>>>> having another problem. When I start my Xen VM on one node, it
>>>> also starts on the other nodes. Which daemon controls this?
>>>
>>> This is usually done by clurgmgrd (which is part of the rgmanager
>>> package). To me, this sounds like a configuration problem. Maybe
>>> you can post your cluster.conf?
>>>
>>> Regards,
>>> Volker
>>>
>>> --
>>> Linux-cluster mailing list
>>> Linux-cluster@xxxxxxxxxx
>>> https://www.redhat.com/mailman/listinfo/linux-cluster
>>>
>>
>
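One detail worth noting in the cluster.conf above: every `<clusternode>` has an empty `<fence/>` block and `<fencedevices/>` is empty, so the cluster has no way to fence a node, and rgmanager cannot safely guarantee that a VM service runs on only one node at a time. A minimal sketch of what a fenced node could look like, assuming IPMI-capable management controllers (the device name, IP address, and credentials below are placeholders, not values from this cluster):

```xml
<!-- Hypothetical fencing sketch; ipaddr/login/passwd are placeholders. -->
<clusternode name="cvtst1" nodeid="2" votes="1">
    <fence>
        <method name="1">
            <device name="ipmi-cvtst1"/>
        </method>
    </fence>
</clusternode>
...
<fencedevices>
    <fencedevice agent="fence_ipmilan" name="ipmi-cvtst1"
                 ipaddr="192.168.1.101" login="admin" passwd="secret"/>
</fencedevices>
```

Remember to bump `config_version` and propagate the change (e.g. with ccs_tool update) after editing.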