On Wed, Mar 25, 2015 at 03:33:10PM -0700, Andy Lutomirski wrote:
> On Mar 25, 2015 2:29 PM, "Marcelo Tosatti" <mtosatti@xxxxxxxxxx> wrote:
> >
> > On Wed, Mar 25, 2015 at 01:52:15PM +0100, Radim Krčmář wrote:
> > > 2015-03-25 12:08+0100, Radim Krčmář:
> > > > Reverting the patch protects us from any migration, but I don't think we
> > > > need to care about changing VCPUs as long as we read consistent data
> > > > from kvmclock. (The VCPU can change outside of this loop too, so it doesn't
> > > > matter if we return a value not fit for this VCPU.)
> > > >
> > > > I think we could drop the second __getcpu if our kvmclock was being
> > > > handled better; maybe with a patch like the one below:
> > >
> > > The second __getcpu is not necessary, but I forgot about rdtsc.
> > > We need to either use rdtscp, know the host has a synchronized tsc, or
> > > monitor VCPU migrations. Only the last one works everywhere.
> >
> > The vdso code is only used if the host has a synchronized tsc.
> >
> > But you have to handle the case where the host goes from synchronized tsc
> > to unsynchronized tsc (see the clocksource notifier on the host side).
>
> Can't we change the host to freeze all vcpus and clear the stable bit
> on all of them if this happens? This would simplify and speed up
> vclock_gettime.
>
> --Andy

Seems interesting to do on 512 vcpus, but sure, could be done.

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html