On Wed, Nov 30, 2022 at 02:07:57PM +0000, Jon Kohler wrote:
>
>> On Nov 30, 2022, at 1:29 AM, Chao Gao <chao.gao@xxxxxxxxx> wrote:
>>
>
>Chao while I’ve got you here, I was inspired to tune up the software side here based
>on the VTD suppress notifications change we had been talking about. Any chance
>we could get the v4 of that? Seemed like it was almost done, yea? Would love to

I didn't post a new version because there was no feedback on v3. But
considering there is a mistake in v3, I will fix it and post v4.

>get our hands on that to help accelerate the VTD path.
>
>
>> On Tue, Nov 29, 2022 at 01:22:25PM -0500, Jon Kohler wrote:
>>> @@ -7031,6 +7042,18 @@ void noinstr vmx_update_host_rsp(struct vcpu_vmx *vmx, unsigned long host_rsp)
>>> void noinstr vmx_spec_ctrl_restore_host(struct vcpu_vmx *vmx,
>>> 					unsigned int flags)
>>> {
>>> +	struct kvm_vcpu *vcpu = &vmx->vcpu;
>>> +
>>> +	/* Optimize IPI reduction by setting mode immediately after vmexit
>>> +	 * without a memory barrier, as this is not paired anywhere. vcpu->mode
>>> +	 * will be set to OUTSIDE_GUEST_MODE in x86 common code with a memory
>>> +	 * barrier, after the host is done fully restoring various host states.
>>> +	 * Since the rdmsr and wrmsr below are expensive, this must be done
>>> +	 * first, so that the IPI suppression window covers the time spent
>>> +	 * fixing up SPEC_CTRL.
>>> +	 */
>>> +	vcpu->mode = EXITING_GUEST_MODE;
>>
>> Does this break kvm_vcpu_kick()? IIUC, kvm_vcpu_kick() does nothing if
>> vcpu->mode is already EXITING_GUEST_MODE, expecting the vCPU will exit
>> guest mode. But ...
>
>IIRC that’d only be a problem for fast path exits that reenter the guest (like TSC
>Deadline); everything else *will* eventually exit out to kernel mode to pick up
>whatever other requests may be pending. In this sense, this patch is actually even
>better for kick, because we will send incrementally fewer spurious kicks.

Yes. I agree.
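To illustrate the kick-suppression point for anyone following along: the decision in kvm_vcpu_kick() boils down to an atomic compare-and-swap on vcpu->mode, and an IPI is only sent by the caller that wins the IN_GUEST_MODE -> EXITING_GUEST_MODE transition. Below is a minimal user-space model of that logic (the enum values mirror KVM's, but the struct and helper are simplified stand-ins, not the kernel code):

```c
#include <assert.h>
#include <stdatomic.h>

/* Simplified mirror of KVM's vcpu->mode states. */
enum vcpu_mode {
	OUTSIDE_GUEST_MODE,
	IN_GUEST_MODE,
	EXITING_GUEST_MODE,
};

struct vcpu {
	_Atomic enum vcpu_mode mode;
};

/*
 * Model of the kick decision: an IPI is warranted only if this caller
 * is the one that moves the vCPU from IN_GUEST_MODE to
 * EXITING_GUEST_MODE.  If the vCPU already marked itself
 * EXITING_GUEST_MODE (as the patch does right after VM-exit), the
 * cmpxchg fails and the kick is suppressed.
 */
static int kick_sends_ipi(struct vcpu *vcpu)
{
	enum vcpu_mode expected = IN_GUEST_MODE;

	return atomic_compare_exchange_strong(&vcpu->mode, &expected,
					      EXITING_GUEST_MODE);
}
```

So setting EXITING_GUEST_MODE earlier simply widens the window in which concurrent kickers take the "already exiting" branch and skip the IPI.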
>
>Even then, for fast path reentry exits, a guest is likely to exit all the way out
>eventually for something else soon enough, so worst case something gets a wee bit
>more delayed than it should. Small price to pay for clawing back cycles on the IPI
>send side, I think.

Thanks for the above clarification. On second thought, for the fastpath,
there is a call to kvm_vcpu_exit_request() before re-entry. This call
guarantees that vCPUs will exit guest mode if any request is pending. So,
this change actually won't lead to a delay in handling pending events.
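To make that guarantee concrete: before a fastpath re-entry, the vCPU checks its pending-request bitmask and bails out of the inner loop if anything is set, so a kick that was suppressed by EXITING_GUEST_MODE is still honored. A standalone sketch of that check (field and function names here are illustrative, not KVM's exact internals):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-in for the relevant part of struct kvm_vcpu. */
struct vcpu_model {
	unsigned long requests;	/* bitmask of pending requests */
};

/*
 * Models the check KVM performs (in kvm_vcpu_exit_request()) before a
 * fastpath re-entry: if any request bit is pending, refuse to re-enter
 * the guest and fall out to the normal exit path instead, where the
 * request is serviced.
 */
static bool can_reenter_guest(const struct vcpu_model *vcpu)
{
	return vcpu->requests == 0;
}
```

This is why suppressing the IPI is safe here: the requester's request bit, not the IPI itself, is what forces the vCPU out of the re-entry loop.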