Re: [PATCH] KVM: X86: set EXITING_GUEST_MODE as soon as vCPU exits

> On Nov 30, 2022, at 11:55 PM, Chao Gao <chao.gao@xxxxxxxxx> wrote:
> 
> On Wed, Nov 30, 2022 at 02:07:57PM +0000, Jon Kohler wrote:
>> 
>> 
>>> On Nov 30, 2022, at 1:29 AM, Chao Gao <chao.gao@xxxxxxxxx> wrote:
>>> 
>> 
>> Chao while I’ve got you here, I was inspired to tune up the software side here based
>> on the VTD suppress notifications change we had been talking about. Any chance
>> we could get the v4 of that? Seemed like it was almost done, yea? Would love to 
> 
> I didn't post a new version because there is no feedback on v3. But
> considering there is a mistake in v3, I will fix it and post v4.

Ok, thanks! Looking forward to that. Between that patch and this one, the combined
impact should be great. Any chance you could apply my patch and yours together and see
how they work? I'd imagine this one isn't as applicable with IPI-v, but it'd still be
interesting to run your benchmark with and without IPI-v to see whether your test sees
a speedup here too.

> 
>> get our hands on that to help accelerate the VTD path.
>> 
>> 
>>> On Tue, Nov 29, 2022 at 01:22:25PM -0500, Jon Kohler wrote:
>>>> @@ -7031,6 +7042,18 @@ void noinstr vmx_update_host_rsp(struct vcpu_vmx *vmx, unsigned long host_rsp)
>>>> void noinstr vmx_spec_ctrl_restore_host(struct vcpu_vmx *vmx,
>>>> 					unsigned int flags)
>>>> {
>>>> +	struct kvm_vcpu *vcpu = &vmx->vcpu;
>>>> +
>>>> +	/* Optimize IPI reduction by setting mode immediately after vmexit
>>>> +	 * without a memory barrier, as this is not paired anywhere. vcpu->mode
>>>> +	 * will be set to OUTSIDE_GUEST_MODE in x86 common code with a memory
>>>> +	 * barrier, after the host is done fully restoring various host states.
>>>> +	 * Since the rdmsr and wrmsr below are expensive, this must be done
>>>> +	 * first, so that the IPI suppression window covers the time dealing
>>>> +	 * with fixing up SPEC_CTRL.
>>>> +	 */
>>>> +	vcpu->mode = EXITING_GUEST_MODE;
>>> 
>>> Does this break kvm_vcpu_kick()? IIUC, kvm_vcpu_kick() does nothing if
>>> vcpu->mode is already EXITING_GUEST_MODE, expecting the vCPU will exit
>>> guest mode. But ...
>> 
>> IIRC that’d only be a problem for fast path exits that reenter the guest (like TSC
>> deadline); everything else *will* eventually exit out to kernel mode to pick up
>> whatever other requests may be pending. In this sense, this patch is actually even
>> better for kick, because we will send incrementally fewer spurious kicks.
> 
> Yes. I agree.
> 
>> 
>> Even then, for fast path reentry exits, a guest is likely to exit all the way out eventually
>> for something else soon enough, so worst case something gets a wee bit more delayed
>> than it should. Small price to pay for clawing back cycles on the IPI send side I think.
> 
> Thanks for the above clarification. On second thought, for the fastpath, there is a
> call to kvm_vcpu_exit_request() before re-entry. This call guarantees that
> vCPUs will exit guest mode if any request is pending. So, this change actually
> won't lead to a delay in handling pending events.

Ok, thanks. I know this tends to be a slow(er) week in the US coming back from the
holidays, so I will wait for additional review/comments here.
