On Wed, May 15, 2013 at 08:41:54PM +0300, Gleb Natapov wrote:
> On Tue, May 14, 2013 at 10:12:57AM -0300, Marcelo Tosatti wrote:
> > On Tue, May 14, 2013 at 12:05:13PM +0300, Gleb Natapov wrote:
> > > On Thu, May 09, 2013 at 08:21:41PM -0300, Marcelo Tosatti wrote:
> > > >
> > > > kvmclock updates which are isolated to a given vcpu, such as vcpu->cpu
> > > > migration, should not allow system_timestamp from the rest of the vcpus
> > > > to remain static. Otherwise ntp frequency correction applies to one
> > > > vcpu's system_timestamp but not the others.
> > > >
> > > > So in those cases, request a kvmclock update for all vcpus. The worst
> > > > case for a remote vcpu to update its kvmclock is then bounded by the
> > > > maximum nohz sleep latency.
> > > >
> > > Does this mean that when one vcpu is migrated all others are kicked out
> > > of guest mode?
> >
> > Yes, those which are in guest mode. For guests with a large number of
> > vcpus this is a problem, but I can't see a simpler method to fix the bug
> > for now.
> >
> > Yes, this aspect must be improved (however, the bug results in timers in
> > the guest taking tens of milliseconds without vcpu->pcpu pinning, which
> > can be unacceptable).
>
> Not sure I understand. With vcpu->pcpu pinning there will be no
> migration.

Do you mean "without" here? With vcpu->pcpu pinning there is no
guarantee of kvm_arch_vcpu_load, and therefore no KVM_REQ_UPDATE_CLOCK.
This is the problem.

> If vcpu->kvm->arch.use_master_clock is false we kick vcpus on each
> vcpu_load. When is it false?

It is false when:
- the host does not use the TSC clocksource, or
- the vcpus' TSCs are out of sync.

> I applied the patch since it fixes the real problem, but we need to
> evaluate how it affects scalability.

I'll look into ways to reduce the IPIs.

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html