Re: [Qemu-devel] [RFC] COLO HA Project proposal

* Dong, Eddie (eddie.dong@xxxxxxxxx) wrote:
> > >
> > > Let me clarify this issue. COLO didn't ignore the TCP sequence
> > > number; it uses a new implementation to make the sequence number
> > > best-effort identical between the primary VM (PVM) and secondary VM
> > > (SVM). Likely, the VMM has to synchronize the emulation of the
> > > random-number generation mechanism between the PVM and SVM, as the
> > > lock-stepping mechanism does.
> > >
> > > Furthermore, for long TCP connections, we can rely on the (on-demand)
> > > VM checkpoint to get an identical sequence number in both the PVM and
> > > SVM.
> > 
> > That wasn't really my question; I was worrying about other forms of
> > randomness, such as the winners of lock contention and other SMP
> > non-determinism, and I'm also worried about what proportion of the time
> > the system can't recover from a failure because it is unable to
> > distinguish an SVM failure from a randomness issue.
> > 
> Thanks Dave:
> 	Whatever random values/branches/code paths the PVM and SVM may take,
> it is only a performance issue. COLO never assumes the PVM and SVM have the
> same internal machine state. From a correctness point of view, as long as the
> PVM and SVM generate identical responses, we can view the SVM as a valid
> replica of the PVM, and the SVM can take over when the PVM suffers a hardware
> failure. We can view the client as having been talking to the SVM all along,
> without any notion of the PVM. Of course, if the SVM dies, we can regenerate
> a copy of the PVM with a new checkpoint too.
> 	The SOCC paper has the detailed recovery model :)

I've had a read; I think the bit I was asking about was what you labelled 'D' in that
paper's Fig. 4 - so I think that does explain it for me.
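To check my reading of that model, here's a rough sketch of the comparison step
(hypothetical names, plain C - not the actual COLO/QEMU code): each outbound
packet from the PVM is held until the SVM produces its copy; matching responses
are released to the client, while a mismatch forces an on-demand checkpoint first.

    /* Hypothetical sketch, not the actual COLO/QEMU code: only a differing
     * response forces an on-demand checkpoint; internal state may diverge. */
    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    struct pkt { size_t len; const unsigned char *data; };

    static bool responses_match(const struct pkt *pvm, const struct pkt *svm)
    {
        return pvm->len == svm->len &&
               memcmp(pvm->data, svm->data, pvm->len) == 0;
    }

    static void colo_compare_step(const struct pkt *pvm, const struct pkt *svm)
    {
        if (responses_match(pvm, svm)) {
            puts("match: release PVM packet to client, drop SVM copy");
        } else {
            puts("mismatch: take on-demand checkpoint, then release packet");
        }
    }

    int main(void)
    {
        const unsigned char ok[]  = "HTTP/1.1 200 OK";
        const unsigned char mod[] = "HTTP/1.1 304 Not Modified";
        struct pkt pvm      = { sizeof ok  - 1, ok  };
        struct pkt svm_same = { sizeof ok  - 1, ok  };
        struct pkt svm_diff = { sizeof mod - 1, mod };
        colo_compare_step(&pvm, &svm_same); /* identical output: no checkpoint */
        colo_compare_step(&pvm, &svm_diff); /* divergent output: checkpoint    */
        return 0;
    }

If that's roughly right, then the SMP non-determinism I was asking about only
costs checkpoint frequency, not correctness, which matches your answer.
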
But I also have some more questions:

  1) 5.3.3 Web server
    a) Fig 11 shows Remus's performance dropping off with the number of threads - why is that? Is it
       just an increase in the amount of memory changed in each snapshot?
    b) Are figs 11/12 measured with all of the TCP optimisations shown in fig 13 turned on?

  2) Did you manage to overcome the degradation with newer guest kernels shown in 5.6 - could you just fall
     back to micro-checkpointing if the guests diverge too quickly?

  3) Was the link between the two servers for synchronisation a low-latency dedicated connection?

  4) Did you try an ftp PUT benchmark using external storage - i.e. one that wouldn't have the local disk
     synchronisation overhead?

Dave

> 
> Thanks, Eddie
> 
> 
> 
--
Dr. David Alan Gilbert / dgilbert@xxxxxxxxxx / Manchester, UK



