Re: virtualized guest failback

John Ruemker wrote:
Stepan Kadlec wrote:
Hi,
I have a working failover of a virtualized guest. Could someone give me a hint how to configure the VM failover to fail back after recovery? E.g. vm_A runs on xen01; when xen01 fails, xen02 takes over vm_A; after xen01 is up again, vm_A is migrated back to xen01.

I have already tried many config combinations, but without success - vm_A always stays on the failover host.

My expected config (which still doesn't work) is:

 <rm>
    <failoverdomains>
      <failoverdomain name="xen01" restricted="1" ordered="1">
        <failoverdomainnode name="xen01.localdom" priority="2"/>
        <failoverdomainnode name="xen02.localdom" priority="1"/>
      </failoverdomain>
      <failoverdomain name="xen02" restricted="1" ordered="1">
        <failoverdomainnode name="xen01.localdom" priority="1"/>
        <failoverdomainnode name="xen02.localdom" priority="2"/>
      </failoverdomain>
    </failoverdomains>
    <resources/>

    <vm autostart="1" domain="xen01" exclusive="0" migrate="live" name="vm_A" path="/etc/xen/vm" recovery="relocate"/>
    <vm autostart="1" domain="xen02" exclusive="0" migrate="live" name="vm_B" path="/etc/xen/vm" recovery="relocate"/>
  </rm>

Any hints? Thanks, Stepan


Your failoverdomains are set up to allow that, but it looks like you have your priorities switched. Domain xen01 prefers xen02.localdom and domain xen02 prefers xen01.localdom, since the lowest priority value in a domain is preferred. So since vm_A is in domain xen01, the guest will start on xen02.localdom. If that node fails the guest will move to xen01.localdom, and it will fail back to xen02.localdom when that node returns. Switch the priorities in each domain and you should get the behavior you want:


     <failoverdomain name="xen01" restricted="1" ordered="1">
       <failoverdomainnode name="xen01.localdom" priority="1"/>
       <failoverdomainnode name="xen02.localdom" priority="2"/>
     </failoverdomain>
     <failoverdomain name="xen02" restricted="1" ordered="1">
       <failoverdomainnode name="xen01.localdom" priority="2"/>
       <failoverdomainnode name="xen02.localdom" priority="1"/>
     </failoverdomain>
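
As an aside, with ordered domains failback is the default behavior; if your rgmanager release supports the nofailback attribute on failoverdomain (newer releases do - treat this as a sketch and check your schema), you can also state it explicitly, where 0 keeps failback enabled:

     <failoverdomain name="xen01" restricted="1" ordered="1" nofailback="0">
       <failoverdomainnode name="xen01.localdom" priority="1"/>
       <failoverdomainnode name="xen02.localdom" priority="2"/>
     </failoverdomain>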


-John


Unfortunately, even with the priorities inverted, the failback doesn't work - the service taken over while the default node is down is never moved back after the node recovers :-(

Current setup:

 <failoverdomains>
   <failoverdomain name="xen01" restricted="1" ordered="1">
     <failoverdomainnode name="xen01.localdom" priority="1"/>
     <failoverdomainnode name="xen02.localdom" priority="2"/>
   </failoverdomain>
   <failoverdomain name="xen02" restricted="1" ordered="1">
     <failoverdomainnode name="xen01.localdom" priority="2"/>
     <failoverdomainnode name="xen02.localdom" priority="1"/>
   </failoverdomain>
 </failoverdomains>

<vm autostart="1" domain="xen01" exclusive="0" migrate="live" name="vm_A" path="/etc/xen/vm" recovery="relocate"/>
<vm autostart="1" domain="xen01" exclusive="0" migrate="live" name="vm_B" path="/etc/xen/vm" recovery="relocate"/>

<vm autostart="0" domain="xen02" exclusive="0" migrate="live" name="vm_C" path="/etc/xen/vm" recovery="relocate"/>

So vm_A and vm_B should preferably run on xen01 and vm_C on xen02, but if I reboot xen01, both vm_A and vm_B are failed over to xen02 and remain there even after xen01 comes back up.
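
For reference, placement can be checked and a guest moved back by hand like this (a sketch, assuming rgmanager's vm: service naming); it's only the automatic failback that never fires:

 # show where each cluster service is currently running
 clustat
 # live-migrate the vm_A service back to its preferred node by hand
 clusvcadm -M vm:vm_A -m xen01.localdom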

What could be wrong?

Sincerely, Steve

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
