On 18/07/19 15:45, Christian Borntraeger wrote:
> 
> 
> On 18.07.19 15:37, Paolo Bonzini wrote:
>> From: Wanpeng Li <wanpengli@xxxxxxxxxxx>
>>
>> Inspired by commit 9cac38dd5d (KVM/s390: Set preempted flag during
>> vcpu wakeup and interrupt delivery), we want to also boost not just
>> lock holders but also vCPUs that are delivering interrupts. Most
>> smp_call_function_many calls are synchronous, so the IPI target vCPUs
>> are also good yield candidates. This patch introduces vcpu->ready to
>> boost vCPUs during wakeup and interrupt delivery time; unlike s390 we do
>> not reuse vcpu->preempted so that voluntarily preempted vCPUs are taken
>> into account by kvm_vcpu_on_spin, but vmx_vcpu_pi_put is not affected
>> (VT-d PI handles voluntary preemption separately, in pi_pre_block).
>>
>> Testing on 80 HT 2 socket Xeon Skylake server, with 80 vCPUs VM 80GB RAM:
>> ebizzy -M
>>
>>             vanilla     boosting    improved
>> 1VM          21443       23520       9%
>> 2VM           2800        8000       180%
>> 3VM           1800        3100       72%
>>
>> Testing on my Haswell desktop 8 HT, with 8 vCPUs VM 8GB RAM, two VMs,
>> one running ebizzy -M, the other running 'stress --cpu 2':
>>
>> w/ boosting + w/o pv sched yield(vanilla)
>>
>>             vanilla     boosting    improved
>>              1570        4000       155%
>>
>> w/ boosting + w/ pv sched yield(vanilla)
>>
>>             vanilla     boosting    improved
>>              1844        5157       179%
>>
>> w/o boosting, perf top in VM:
>>
>>  72.33%  [kernel]       [k] smp_call_function_many
>>   4.22%  [kernel]       [k] call_function_i
>>   3.71%  [kernel]       [k] async_page_fault
>>
>> w/ boosting, perf top in VM:
>>
>>  38.43%  [kernel]       [k] smp_call_function_many
>>   6.31%  [kernel]       [k] async_page_fault
>>   6.13%  libc-2.23.so   [.] __memcpy_avx_unaligned
>>   4.88%  [kernel]       [k] call_function_interrupt
>>
>> Cc: Paolo Bonzini <pbonzini@xxxxxxxxxx>
>> Cc: Radim Krčmář <rkrcmar@xxxxxxxxxx>
>> Cc: Christian Borntraeger <borntraeger@xxxxxxxxxx>
>> Cc: Paul Mackerras <paulus@xxxxxxxxxx>
>> Cc: Marc Zyngier <maz@xxxxxxxxxx>
>> Signed-off-by: Wanpeng Li <wanpengli@xxxxxxxxxxx>
>> Signed-off-by: Paolo Bonzini <pbonzini@xxxxxxxxxx>
>> ---
>> v2->v3: put it in kvm_vcpu_wake_up, use WRITE_ONCE
> 
> 
> Looks good. Some more comments
> 
>> 
>>  arch/s390/kvm/interrupt.c | 2 +-
>>  include/linux/kvm_host.h  | 1 +
>>  virt/kvm/kvm_main.c       | 9 +++++++--
> [...]
> 
>> @@ -4205,6 +4206,8 @@ static void kvm_sched_in(struct preempt_notifier *pn, int cpu)
>>  
>>  	if (vcpu->preempted)
>>  		vcpu->preempted = false;
>> +	if (vcpu->ready)
>> +		WRITE_ONCE(vcpu->ready, false);
> 
> What is the rationale of checking before writing. Avoiding writable cache line ping pong?

I think it can be removed.  The only case where you'd have ping pong is
when vcpu->ready is true due to kvm_vcpu_wake_up, so it's not saving
anything.

>>  	kvm_arch_sched_in(vcpu, cpu);
>>  
>> @@ -4216,8 +4219,10 @@ static void kvm_sched_out(struct preempt_notifier *pn,
>>  {
>>  	struct kvm_vcpu *vcpu = preempt_notifier_to_vcpu(pn);
>>  
>> -	if (current->state == TASK_RUNNING)
>> +	if (current->state == TASK_RUNNING) {
>>  		vcpu->preempted = true;
> 
> WOuld it make sense to also use WRITE_ONCE for vcpu->preempted ?

vcpu->preempted is not read/written anymore by other threads after this
patch.

> 
>> +		WRITE_ONCE(vcpu->ready, true);
>> +	}
>>  	kvm_arch_vcpu_put(vcpu);
>>  }
>> 
>> 
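
For anyone following the discussion outside the kernel tree, here is a minimal,
standalone C sketch of the vcpu->ready lifecycle that the hunks above implement.
It is illustrative only: the fake_* struct and helper names are invented,
WRITE_ONCE/READ_ONCE are reduced to plain volatile accesses, and none of this is
the actual KVM code.

/*
 * Standalone model of the vcpu->ready lifecycle discussed above.
 * Illustrative only: no real scheduling or interrupt delivery happens.
 */
#include <stdbool.h>
#include <stdio.h>

struct fake_vcpu {
	volatile bool preempted;  /* involuntarily descheduled while runnable */
	volatile bool ready;      /* good candidate for a directed yield */
};

/* Wakeup / IPI-delivery path: mark the target as a good yield candidate. */
static void fake_vcpu_wake_up(struct fake_vcpu *vcpu)
{
	vcpu->ready = true;
}

/* sched_out: only an involuntary preemption sets both flags. */
static void fake_sched_out(struct fake_vcpu *vcpu, bool task_running)
{
	if (task_running) {
		vcpu->preempted = true;
		vcpu->ready = true;
	}
}

/*
 * sched_in: clear both flags.  Modeled as unconditional stores, following
 * the reply above that guarding the store with "if (vcpu->ready)" does not
 * actually avoid cache-line ping pong.
 */
static void fake_sched_in(struct fake_vcpu *vcpu)
{
	vcpu->preempted = false;
	vcpu->ready = false;
}

/* Spinning vCPU's view: yield to targets that are marked ready. */
static bool fake_dy_runnable(const struct fake_vcpu *vcpu)
{
	return vcpu->ready;
}

int main(void)
{
	struct fake_vcpu v = { 0 };

	fake_vcpu_wake_up(&v);           /* IPI delivered to a sleeping vCPU */
	printf("yield candidate: %d\n", fake_dy_runnable(&v));  /* 1 */

	fake_sched_in(&v);               /* it gets to run again */
	printf("yield candidate: %d\n", fake_dy_runnable(&v));  /* 0 */

	fake_sched_out(&v, true);        /* preempted while still runnable */
	printf("yield candidate: %d\n", fake_dy_runnable(&v));  /* 1 */
	return 0;
}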