Re: x86: kvm: Revert "remove sched notifier for cross-cpu migrations"

On Wed, Mar 25, 2015 at 03:48:02PM -0700, Andy Lutomirski wrote:
> On Wed, Mar 25, 2015 at 3:41 PM, Marcelo Tosatti <mtosatti@xxxxxxxxxx> wrote:
> > On Wed, Mar 25, 2015 at 03:33:10PM -0700, Andy Lutomirski wrote:
> >> On Mar 25, 2015 2:29 PM, "Marcelo Tosatti" <mtosatti@xxxxxxxxxx> wrote:
> >> >
> >> > On Wed, Mar 25, 2015 at 01:52:15PM +0100, Radim Krčmář wrote:
> >> > > 2015-03-25 12:08+0100, Radim Krčmář:
> >> > > > Reverting the patch protects us from any migration, but I don't think we
> >> > > > need to care about changing VCPUs as long as we read a consistent data
> >> > > > from kvmclock.  (VCPU can change outside of this loop too, so it doesn't
> >> > > > matter if we return a value not fit for this VCPU.)
> >> > > >
> >> > > > I think we could drop the second __getcpu if our kvmclock was being
> >> > > > handled better;  maybe with a patch like the one below:
> >> > >
> >> > > The second __getcpu is not necessary, but I forgot about rdtsc.
> >> > > We need to either use rdtscp, know the host has synchronized tsc, or
> >> > > monitor VCPU migrations.  Only the last one works everywhere.
> >> >
> >> > The vdso code is only used if the host has a synchronized tsc.
> >> >
> >> > But you have to handle the case where the host goes from synchronized
> >> > tsc to unsynchronized tsc (see the clocksource notifier on the host side).
> >> >
> >>
> >> Can't we change the host to freeze all vcpus and clear the stable bit
> >> on all of them if this happens?  This would simplify and speed up
> >> vclock_gettime.
> >>
> >> --Andy
> >
> > Seems interesting to do on a 512-vcpu guest, but sure, it could be done.
> >
> 
> If you have a 512-vcpu system that switches between stable and
> unstable more than once per migration, then I expect that you have
> serious problems and this is the least of your worries.
> 
> Personally, I'd *much* rather we just made vcpu 0's pvti authoritative
> if we're stable.  If nothing else, I'm not even remotely convinced
> that the current scheme gives monotonic timing due to skew between
> when the updates happen on different vcpus.

Can you write down the problem?




