On 30/08/2016 10:14, Wanpeng Li wrote:
> From: Wanpeng Li <wanpeng.li@xxxxxxxxxxx>
>
> TSC_OFFSET is adjusted if KVM discovers a backwards TSC during vCPU load.
> The preemption timer, which uses the guest TSC to compute its countdown
> value, is also reprogrammed when the vCPU is scheduled in on a different
> pCPU. However, the current implementation reprograms the preemption timer
> before TSC_OFFSET is adjusted to the right value, so the preemption timer
> also goes backwards and fires prematurely.
>
> This patch fixes it by adjusting TSC_OFFSET before reprogramming the
> preemption timer when a backwards TSC is detected.
>
> Cc: Paolo Bonzini <pbonzini@xxxxxxxxxx>
> Cc: Radim Krčmář <rkrcmar@xxxxxxxxxx>
> Cc: Yunhong Jiang <yunhong.jiang@xxxxxxxxx>
> Signed-off-by: Wanpeng Li <wanpeng.li@xxxxxxxxxxx>
> ---
>  arch/x86/kvm/x86.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 19f9f9e..699f872 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -2743,16 +2743,16 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>  		if (tsc_delta < 0)
>  			mark_tsc_unstable("KVM discovered backwards TSC");
>
> -		if (kvm_lapic_hv_timer_in_use(vcpu) &&
> -		    kvm_x86_ops->set_hv_timer(vcpu,
> -				kvm_get_lapic_tscdeadline_msr(vcpu)))
> -			kvm_lapic_switch_to_sw_timer(vcpu);
>  		if (check_tsc_unstable()) {
>  			u64 offset = kvm_compute_tsc_offset(vcpu,
>  					vcpu->arch.last_guest_tsc);
>  			kvm_x86_ops->write_tsc_offset(vcpu, offset);
>  			vcpu->arch.tsc_catchup = 1;
>  		}
> +		if (kvm_lapic_hv_timer_in_use(vcpu) &&
> +		    kvm_x86_ops->set_hv_timer(vcpu,
> +				kvm_get_lapic_tscdeadline_msr(vcpu)))
> +			kvm_lapic_switch_to_sw_timer(vcpu);
>  		/*
>  		 * On a host with synchronized TSC, there is no need to update
>  		 * kvmclock on vcpu->cpu migration
>

Queued for 4.8, thanks.
Paolo