On Wed, 3 Jul 2019 at 08:47, Wanpeng Li <kernellwp@xxxxxxxxx> wrote:
>
> On Wed, 3 Jul 2019 at 06:23, Marcelo Tosatti <mtosatti@xxxxxxxxxx> wrote:
> >
> > On Tue, Jul 02, 2019 at 06:38:56PM +0200, Paolo Bonzini wrote:
> > > On 21/06/19 11:39, Wanpeng Li wrote:
> > > > Dedicated instances are currently disturbed by unnecessary jitter
> > > > because the emulated lapic timers fire on the same pCPUs on which the
> > > > vCPUs reside. Unlike ARM, Intel has no hardware virtual timer for the
> > > > guest; both programming the timer in the guest and the firing of the
> > > > emulated timer incur vmexits. This patchset tries to avoid the vmexits
> > > > incurred when the emulated timer fires in the dedicated-instance
> > > > scenario.
> > > >
> > > > When nohz_full is enabled in the dedicated-instance scenario, unpinned
> > > > timers are moved to the nearest busy housekeeping cpus after commit
> > > > 9642d18eee2cd ("nohz: Affine unpinned timers to housekeepers") and
> > > > commit 444969223c8 ("sched/nohz: Fix affine unpinned timers mess").
> > > > However, KVM always pins the lapic timer to the pCPU on which the vCPU
> > > > resides; the reason is explained in commit 61abdbe0 ("kvm: x86: make
> > > > lapic hrtimer pinned"). These emulated timers can actually be offloaded
> > > > to the housekeeping cpus, since APICv has become common in recent
> > > > years. The guest timer interrupt is then injected as a posted
> > > > interrupt, delivered by a housekeeping cpu once the emulated timer
> > > > fires.
> > > >
> > > > The host admin should tune the setup accordingly: in the
> > > > dedicated-instance scenario, have nohz_full cover the pCPUs on which
> > > > the vCPUs reside, leave several surplus pCPUs for busy housekeeping,
> > > > and disable mwait/hlt/pause vmexits to keep the vCPUs in non-root mode.
> > > > A ~3% redis performance benefit can be observed on a Skylake server.
> > >
> > > Marcelo,
> > >
> > > does this patch work for you or can you still see the oops?
> >
> > Hi Paolo,
> >
> > No more oopses with kvm/queue. Can you include:
>
> Cool, thanks for confirming, Marcelo!
>
> > Index: kvm/arch/x86/kvm/lapic.c
> > ===================================================================
> > --- kvm.orig/arch/x86/kvm/lapic.c
> > +++ kvm/arch/x86/kvm/lapic.c
> > @@ -124,8 +124,7 @@ static inline u32 kvm_x2apic_id(struct k
> >
> >  bool posted_interrupt_inject_timer(struct kvm_vcpu *vcpu)
> >  {
> > -	return pi_inject_timer && kvm_vcpu_apicv_active(vcpu) &&
> > -		kvm_hlt_in_guest(vcpu->kvm);
> > +	return pi_inject_timer && kvm_vcpu_apicv_active(vcpu);
> >  }
> >  EXPORT_SYMBOL_GPL(posted_interrupt_inject_timer);
> >
> > However, for some reason (the hrtimer subsystem's responsibility) with
> > cyclictest -i 200 running in the guest, the timer runs on the local CPU:
> >
> > CPU 1/KVM-9454  [003] d..2   881.674196: get_nohz_timer_target: get_nohz_timer_target 3->0
> > CPU 1/KVM-9454  [003] d..2   881.674200: get_nohz_timer_target: get_nohz_timer_target 3->0
> > CPU 1/KVM-9454  [003] d.h.   881.674387: apic_timer_fn <-__hrtimer_run_queues
> > CPU 1/KVM-9454  [003] d..2   881.674393: get_nohz_timer_target: get_nohz_timer_target 3->0
> > CPU 1/KVM-9454  [003] d..2   881.674395: get_nohz_timer_target: get_nohz_timer_target 3->0
> > CPU 1/KVM-9454  [003] d..2   881.674399: get_nohz_timer_target: get_nohz_timer_target 3->0
> > CPU 1/KVM-9454  [003] d.h.   881.674586: apic_timer_fn <-__hrtimer_run_queues
> > CPU 1/KVM-9454  [003] d..2   881.674593: get_nohz_timer_target: get_nohz_timer_target 3->0
> > CPU 1/KVM-9454  [003] d..2   881.674595: get_nohz_timer_target: get_nohz_timer_target 3->0
> > CPU 1/KVM-9454  [003] d..2   881.674599: get_nohz_timer_target: get_nohz_timer_target 3->0
> > CPU 1/KVM-9454  [003] d.h.   881.674787: apic_timer_fn <-__hrtimer_run_queues
> > CPU 1/KVM-9454  [003] d..2   881.674793: get_nohz_timer_target: get_nohz_timer_target 3->0
> > CPU 1/KVM-9454  [003] d..2   881.674795: get_nohz_timer_target: get_nohz_timer_target 3->0
> >
> > But on boot:
> >
> > CPU 1/KVM-9454  [003] d..2   578.625394: get_nohz_timer_target: get_nohz_timer_target 3->0
> >    <idle>-0     [000] d.h1   578.626390: apic_timer_fn <-__hrtimer_run_queues
> >    <idle>-0     [000] d.h1   578.626394: apic_timer_fn <-__hrtimer_run_queues
> > CPU 1/KVM-9454  [003] d..2   578.626401: get_nohz_timer_target: get_nohz_timer_target 3->0
> >    <idle>-0     [000] d.h1   578.628397: apic_timer_fn <-__hrtimer_run_queues
> > CPU 1/KVM-9454  [003] d..2   578.628407: get_nohz_timer_target: get_nohz_timer_target 3->0
> >    <idle>-0     [000] d.h1   578.631403: apic_timer_fn <-__hrtimer_run_queues
> > CPU 1/KVM-9454  [003] d..2   578.631413: get_nohz_timer_target: get_nohz_timer_target 3->0
> >    <idle>-0     [000] d.h1   578.635409: apic_timer_fn <-__hrtimer_run_queues
> > CPU 1/KVM-9454  [003] d..2   578.635419: get_nohz_timer_target: get_nohz_timer_target 3->0
> >    <idle>-0     [000] d.h1   578.640415: apic_timer_fn <-__hrtimer_run_queues
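The get_nohz_timer_target 3->0 lines above are the lapic hrtimer on cpu 3
being retargeted to housekeeping cpu 0. For reference, the target selection
is roughly the following (paraphrased and simplified from
kernel/sched/core.c, not the exact source):

	int get_nohz_timer_target(void)
	{
		int i, cpu = smp_processor_id();
		struct sched_domain *sd;

		/* A busy cpu that is allowed to handle timers keeps its timer. */
		if (!idle_cpu(cpu) && housekeeping_cpu(cpu, HK_FLAG_TIMER))
			return cpu;

		/* Otherwise walk the sched domains for a busy housekeeping cpu. */
		rcu_read_lock();
		for_each_domain(cpu, sd) {
			for_each_cpu(i, sched_domain_span(sd)) {
				if (cpu == i)
					continue;
				if (!idle_cpu(i) &&
				    housekeeping_cpu(i, HK_FLAG_TIMER)) {
					cpu = i;
					goto unlock;
				}
			}
		}

		/* No busy housekeeping cpu found: fall back to any housekeeping cpu. */
		if (!housekeeping_cpu(cpu, HK_FLAG_TIMER))
			cpu = housekeeping_any_cpu(HK_FLAG_TIMER);
	unlock:
		rcu_read_unlock();
		return cpu;
	}

Note this only proposes a target cpu; whether the hrtimer actually moves is
decided later, in switch_hrtimer_base().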
> You have an idle housekeeping cpu (cpu 0); however, most housekeeping cpus
> will be busy in a production environment to avoid wasting money.
> get_nohz_timer_target() will find a busy housekeeping cpu, but the timer
> migration will fail if the timer is the first expiring timer on the new
> target (as the comments above switch_hrtimer_base() explain). Please try
> taskset -c 0 stress --cpu 1 on your host; you can observe (through
> /proc/timer_list) apic_timer_fn running on cpu 0 most of the time and only
> sporadically on the local cpu.

Or, if you have a somewhat bigger VM or multiple VMs, the apic_timer_fn
calls from all the virtual lapics will keep a housekeeping cpu busy. :)

Regards,
Wanpeng Li
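P.S. The migration constraint mentioned above boils down to a check like
this (paraphrased and simplified from kernel/time/hrtimer.c, not the exact
source):

	/*
	 * Returns true if the timer would become the first expiring
	 * timer on the new base. The target cpu's clock event device
	 * cannot be reprogrammed remotely, so such a timer would fire
	 * late and must stay on the local cpu.
	 */
	static int hrtimer_check_target(struct hrtimer *timer,
					struct hrtimer_clock_base *new_base)
	{
		ktime_t expires;

		expires = ktime_sub(hrtimer_get_expires(timer), new_base->offset);
		return expires < new_base->cpu_base->expires_next;
	}

When this returns true for the proposed target, switch_hrtimer_base() keeps
the timer on the local cpu, which is exactly the apic_timer_fn-on-cpu-3
pattern in the cyclictest trace above.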