2017-11-08 18:02-0800, Wanpeng Li:
> From: Wanpeng Li <wanpeng.li@xxxxxxxxxxx>
>
> The remote flushing APIs do a busy wait, which is fine in a bare-metal
> scenario. But within a guest, the vCPUs might have been preempted or
> blocked. In this scenario, the initiator vCPU would end up busy-waiting
> for a long time.
>
> This patch set implements para-virt TLB flushing, making sure that it
> does not wait for vCPUs that are sleeping. Instead, all the sleeping
> vCPUs flush the TLB on guest enter.
>
> The best result is achieved when we're overcommitting the host by running
> multiple vCPUs on each pCPU. In this case PV TLB flush avoids touching
> vCPUs which are not scheduled and avoids the wait on the main CPU.
>
> Tested on a Haswell i7 desktop with 4 cores (2 HT), so 8 pCPUs, running
> ebizzy in one Linux guest.
>
> ebizzy -M
>             vanilla    optimized     boost
>  8 vCPUs      10152        10083    -0.68%
> 16 vCPUs       1224         4866    297.5%
> 24 vCPUs       1109         3871      249%
> 32 vCPUs       1025         3375    229.3%
>
> Cc: Paolo Bonzini <pbonzini@xxxxxxxxxx>
> Cc: Radim Krčmář <rkrcmar@xxxxxxxxxx>
> Signed-off-by: Wanpeng Li <wanpeng.li@xxxxxxxxxxx>
> ---
> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> @@ -465,6 +465,33 @@ static void __init kvm_apf_trap_init(void)
>  	update_intr_gate(X86_TRAP_PF, async_page_fault);
>  }
>  
> +static void kvm_flush_tlb_others(const struct cpumask *cpumask,
> +			const struct flush_tlb_info *info)
> +{
> +	u8 state;
> +	int cpu;
> +	struct kvm_steal_time *src;
> +	cpumask_t flushmask;
> +
> +	cpumask_copy(&flushmask, cpumask);
> +	/*
> +	 * We have to call flush only on online vCPUs. And
> +	 * queue flush_on_enter for pre-empted vCPUs
> +	 */
> +	for_each_cpu(cpu, cpumask) {
> +		src = &per_cpu(steal_time, cpu);
> +		state = src->preempted;
> +		if ((state & KVM_VCPU_PREEMPTED)) {
> +			if (cmpxchg(&src->preempted, state, state | 1 <<
> +				KVM_VCPU_SHOULD_FLUSH))

We won't be flushing unless the last argument reads
'state | KVM_VCPU_SHOULD_FLUSH', and the result will be the original
value, which should be compared with state to avoid a race that would
drop a running VCPU:

  if (cmpxchg(&src->preempted, state, state | KVM_VCPU_SHOULD_FLUSH) == state)

> +				cpumask_clear_cpu(cpu, &flushmask);
> +		}
> +	}
> +
> +	native_flush_tlb_others(&flushmask, info);
> +}
> +
>  void __init kvm_guest_init(void)
>  {
>  	int i;
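
For clarity, a minimal sketch of the loop with that fix folded in, reusing
the names from the patch above; only the cmpxchg condition changes, the rest
is as posted (untested, assumes KVM_VCPU_SHOULD_FLUSH is the bit mask itself,
not a shift count):

  static void kvm_flush_tlb_others(const struct cpumask *cpumask,
  				 const struct flush_tlb_info *info)
  {
  	u8 state;
  	int cpu;
  	struct kvm_steal_time *src;
  	cpumask_t flushmask;
  
  	cpumask_copy(&flushmask, cpumask);
  	/*
  	 * Flush only the vCPUs that are actually running; for preempted
  	 * vCPUs, set the flush-on-enter bit instead.
  	 */
  	for_each_cpu(cpu, cpumask) {
  		src = &per_cpu(steal_time, cpu);
  		state = src->preempted;
  		if (state & KVM_VCPU_PREEMPTED) {
  			/*
  			 * Skip the remote flush only if the cmpxchg really
  			 * installed the flush-on-enter bit.  If ->preempted
  			 * changed under us (the vCPU started running again),
  			 * keep the CPU in flushmask so it is flushed the
  			 * normal way.
  			 */
  			if (cmpxchg(&src->preempted, state,
  				    state | KVM_VCPU_SHOULD_FLUSH) == state)
  				cpumask_clear_cpu(cpu, &flushmask);
  		}
  	}
  
  	native_flush_tlb_others(&flushmask, info);
  }

With the '== state' check, a vCPU that resumes between the read of
->preempted and the cmpxchg still receives the regular remote flush instead
of being silently skipped; the preempted vCPUs then flush on their next
guest enter, as the cover letter describes.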