Re: [PATCH][v3] KVM: x86: Support the vCPU preemption check with nopvspin and realtime hint

> -----Original Message-----
> From: Paolo Bonzini <pbonzini@xxxxxxxxxx>
> Sent: March 9, 2022 17:29
> To: Li,Rongqing <lirongqing@xxxxxxxxx>; seanjc@xxxxxxxxxx;
> vkuznets@xxxxxxxxxx; jmattson@xxxxxxxxxx; x86@xxxxxxxxxx;
> kvm@xxxxxxxxxxxxxxx; wanpengli@xxxxxxxxxxx
> Subject: Re: [PATCH][v3] KVM: x86: Support the vCPU preemption check with
> nopvspin and realtime hint
> 
> On 3/9/22 09:46, Li RongQing wrote:
> > If the guest kernel is configured with nopvspin, or
> > CONFIG_PARAVIRT_SPINLOCKS is disabled, or the guest finds it has
> > dedicated pCPUs via the realtime hint feature, pvspinlock is
> > disabled, and the vCPU preemption check is disabled too.
> >
> > But KVM can still emulate HLT for the vCPU in these cases, and
> > checking whether the vCPU is preempted can boost performance.
> >
> > So move the setting of pv_ops.lock.vcpu_is_preempted to
> > kvm_guest_init(), so that it does not depend on pvspinlock.
> >
> > For example, UnixBench single-copy results, with the vCPU pinned to a
> > dedicated pCPU, the guest kernel booted with nopvspin, and HLT
> > emulated for the vCPU:
> >
> > Testcase                                  Base    with patch
> > System Benchmarks Index Values            INDEX     INDEX
> > Dhrystone 2 using register variables     3278.4    3277.7
> > Double-Precision Whetstone                822.8     825.8
> > Execl Throughput                         1296.5     941.1
> > File Copy 1024 bufsize 2000 maxblocks    2124.2    2142.7
> > File Copy 256 bufsize 500 maxblocks      1335.9    1353.6
> > File Copy 4096 bufsize 8000 maxblocks    4256.3    4760.3
> > Pipe Throughput                          1050.1    1054.0
> > Pipe-based Context Switching              243.3     352.0
> > Process Creation                          820.1     814.4
> > Shell Scripts (1 concurrent)             2169.0    2086.0
> > Shell Scripts (8 concurrent)             7710.3    7576.3
> > System Call Overhead                      672.4     673.9
> >                                        ========    =======
> > System Benchmarks Index Score             1467.2   1483.0
> >
> > Signed-off-by: Li RongQing <lirongqing@xxxxxxxxx>
> > ---
> > diff v3: fix build failure when CONFIG_PARAVIRT_SPINLOCKS is disabled,
> >          and set the preemption check only when PV_UNHALT is available
> > diff v2: move setting the preemption check to kvm_guest_init
> >
> >   arch/x86/kernel/kvm.c | 74 +++++++++++++++++++++++++--------------------------
> >   1 file changed, 37 insertions(+), 37 deletions(-)
> >
> > diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> > index d77481ec..959f919 100644
> > --- a/arch/x86/kernel/kvm.c
> > +++ b/arch/x86/kernel/kvm.c
> > @@ -752,6 +752,39 @@ static void kvm_crash_shutdown(struct pt_regs *regs)
> >   }
> >   #endif
> >
> > +#ifdef CONFIG_X86_32
> > +__visible bool __kvm_vcpu_is_preempted(long cpu)
> > +{
> > +	struct kvm_steal_time *src = &per_cpu(steal_time, cpu);
> > +
> > +	return !!(src->preempted & KVM_VCPU_PREEMPTED);
> > +}
> > +PV_CALLEE_SAVE_REGS_THUNK(__kvm_vcpu_is_preempted);
> > +
> > +#else
> > +
> > +#include <asm/asm-offsets.h>
> > +
> > +extern bool __raw_callee_save___kvm_vcpu_is_preempted(long);
> > +
> > +/*
> > + * Hand-optimized version for x86-64 to avoid 8 64-bit register saving
> > + * and restoring to/from the stack.
> > + */
> > +asm(
> > +".pushsection .text;"
> > +".global __raw_callee_save___kvm_vcpu_is_preempted;"
> > +".type __raw_callee_save___kvm_vcpu_is_preempted, @function;"
> > +"__raw_callee_save___kvm_vcpu_is_preempted:"
> > +"movq	__per_cpu_offset(,%rdi,8), %rax;"
> > +"cmpb	$0, " __stringify(KVM_STEAL_TIME_preempted) "+steal_time(%rax);"
> > +"setne	%al;"
> > +"ret;"
> > +".size
> __raw_callee_save___kvm_vcpu_is_preempted, .-__raw_callee_save___kvm_v
> cpu_is_preempted;"
> > +".popsection");
> > +
> > +#endif
> > +
> >   static void __init kvm_guest_init(void)
> >   {
> >   	int i;
> > @@ -764,6 +797,10 @@ static void __init kvm_guest_init(void)
> >   	if (kvm_para_has_feature(KVM_FEATURE_STEAL_TIME)) {
> >   		has_steal_clock = 1;
> >   		static_call_update(pv_steal_clock, kvm_steal_clock);
> > +
> > +		if (kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
> > +			pv_ops.lock.vcpu_is_preempted =
> > +				PV_CALLEE_SAVE(__kvm_vcpu_is_preempted);
> >   	}
> 
> Is it necessary to check PV_UNHALT?  The bit is present anyway in the steal
> time struct, unless it's a very old kernel.  And it's safe to always return zero if
> the bit is not present.
> 

I think calling __kvm_vcpu_is_preempted() should be avoided when it is unnecessary, for example when PV_UNHALT is absent, which means the vCPU does not exit on HLT and so will not be preempted there?

-Li 
