Re: [PATCH v3 19/34] KVM: nVMX: hyper-v: Enable L2 TLB flush

On Thu, 2022-04-14 at 15:19 +0200, Vitaly Kuznetsov wrote:
> Enable L2 TLB flush feature on nVMX when:
> - Enlightened VMCS is in use.
> - The feature flag is enabled in eVMCS.
> - The feature flag is enabled in partition assist page.
> 
> Perform synthetic vmexit to L1 after processing TLB flush call upon
> request (HV_VMX_SYNTHETIC_EXIT_REASON_TRAP_AFTER_FLUSH).
> 
> Signed-off-by: Vitaly Kuznetsov <vkuznets@xxxxxxxxxx>
> ---
>  arch/x86/kvm/vmx/evmcs.c  | 20 ++++++++++++++++++++
>  arch/x86/kvm/vmx/evmcs.h  | 10 ++++++++++
>  arch/x86/kvm/vmx/nested.c | 16 ++++++++++++++++
>  3 files changed, 46 insertions(+)
> 
> diff --git a/arch/x86/kvm/vmx/evmcs.c b/arch/x86/kvm/vmx/evmcs.c
> index e390e67496df..e0cb2e223daa 100644
> --- a/arch/x86/kvm/vmx/evmcs.c
> +++ b/arch/x86/kvm/vmx/evmcs.c
> @@ -6,6 +6,7 @@
>  #include "../hyperv.h"
>  #include "../cpuid.h"
>  #include "evmcs.h"
> +#include "nested.h"
>  #include "vmcs.h"
>  #include "vmx.h"
>  #include "trace.h"
> @@ -438,6 +439,25 @@ int nested_enable_evmcs(struct kvm_vcpu *vcpu,
>  	return 0;
>  }
>  
> +bool nested_evmcs_l2_tlb_flush_enabled(struct kvm_vcpu *vcpu)
> +{
> +	struct vcpu_vmx *vmx = to_vmx(vcpu);
> +	struct hv_enlightened_vmcs *evmcs = vmx->nested.hv_evmcs;
> +	struct hv_vp_assist_page assist_page;
> +
> +	if (!evmcs)
> +		return false;
> +
> +	if (!evmcs->hv_enlightenments_control.nested_flush_hypercall)
> +		return false;
> +
> +	if (unlikely(!kvm_hv_get_assist_page(vcpu, &assist_page)))
> +		return false;
> +
> +	return assist_page.nested_control.features.directhypercall;
> +}
> +
>  void vmx_post_hv_l2_tlb_flush(struct kvm_vcpu *vcpu)
>  {
> +	nested_vmx_vmexit(vcpu, HV_VMX_SYNTHETIC_EXIT_REASON_TRAP_AFTER_FLUSH, 0, 0);
>  }
> diff --git a/arch/x86/kvm/vmx/evmcs.h b/arch/x86/kvm/vmx/evmcs.h
> index b120b0ead4f3..ddbdb557cc53 100644
> --- a/arch/x86/kvm/vmx/evmcs.h
> +++ b/arch/x86/kvm/vmx/evmcs.h
> @@ -65,6 +65,15 @@ DECLARE_STATIC_KEY_FALSE(enable_evmcs);
>  #define EVMCS1_UNSUPPORTED_VMENTRY_CTRL (VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL)
>  #define EVMCS1_UNSUPPORTED_VMFUNC (VMX_VMFUNC_EPTP_SWITCHING)
>  
> +/*
> + * Note, Hyper-V isn't actually stealing bit 28 from Intel, just abusing it by
> + * pairing it with architecturally impossible exit reasons.  Bit 28 is set only
> + * on SMI exits to a SMI transfer monitor (STM) and if and only if a MTF VM-Exit
> + * is pending.  I.e. it will never be set by hardware for non-SMI exits (there
> + * are only three), nor will it ever be set unless the VMM is an STM.

I am sure that this will backfire one way or another. Their fault though...


I also wonder why they need that synthetic VM exit; it's in the spec,
but I don't fully understand why. Their fault as well though.

The flag that controls it is 'TlbLockCount'; I wonder what it means...
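FWIW, here is a trivial standalone check of how I read that synthetic exit
reason encoding (just my own decomposition for illustration, not code from
this series):

	/*
	 * Sanity check of the HV_VMX_SYNTHETIC_EXIT_REASON_TRAP_AFTER_FLUSH
	 * encoding: the basic exit reason lives in bits 15:0 of the exit
	 * reason field, and bit 28 is the "pending MTF on SMI exit to STM"
	 * bit the comment above talks about.  Purely illustrative.
	 */
	#include <stdio.h>
	#include <stdint.h>

	#define HV_VMX_SYNTHETIC_EXIT_REASON_TRAP_AFTER_FLUSH 0x10000031u

	int main(void)
	{
		uint32_t reason = HV_VMX_SYNTHETIC_EXIT_REASON_TRAP_AFTER_FLUSH;

		printf("basic exit reason: 0x%x\n", (unsigned)(reason & 0xffffu)); /* 0x31 */
		printf("bit 28 set:        %u\n", (unsigned)((reason >> 28) & 1)); /* 1    */

		return 0;
	}

If I'm reading the SDM right, 0x31 in the low bits is the EPT misconfig
basic exit reason, which is never an SMI exit, so the pairing with bit 28
at least really is something hardware can't generate.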

> + */
> +#define HV_VMX_SYNTHETIC_EXIT_REASON_TRAP_AFTER_FLUSH 0x10000031
> +
>  struct evmcs_field {
>  	u16 offset;
>  	u16 clean_field;
> @@ -244,6 +253,7 @@ int nested_enable_evmcs(struct kvm_vcpu *vcpu,
>  			uint16_t *vmcs_version);
>  void nested_evmcs_filter_control_msr(u32 msr_index, u64 *pdata);
>  int nested_evmcs_check_controls(struct vmcs12 *vmcs12);
> +bool nested_evmcs_l2_tlb_flush_enabled(struct kvm_vcpu *vcpu);
>  void vmx_post_hv_l2_tlb_flush(struct kvm_vcpu *vcpu);
>  
>  #endif /* __KVM_X86_VMX_EVMCS_H */
> diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
> index cc6c944b5815..3e2ef5edad4a 100644
> --- a/arch/x86/kvm/vmx/nested.c
> +++ b/arch/x86/kvm/vmx/nested.c
> @@ -1170,6 +1170,17 @@ static void nested_vmx_transition_tlb_flush(struct kvm_vcpu *vcpu,
>  {
>  	struct vcpu_vmx *vmx = to_vmx(vcpu);
>  
> +	/*
> +	 * KVM_REQ_HV_TLB_FLUSH flushes entries from either L1's VP_ID or
> +	 * L2's VP_ID upon request from the guest. Make sure we check for
> +	 * pending entries for the case when the request got misplaced (e.g.
> +	 * a transition from L2->L1 happened while processing L2 TLB flush
> +	 * request or vice versa). kvm_hv_vcpu_flush_tlb() will not flush
> +	 * anything if there are no requests in the corresponding buffer.
> +	 */
> +	if (to_hv_vcpu(vcpu))
> +		kvm_make_request(KVM_REQ_HV_TLB_FLUSH, vcpu);
> +
>  	/*
>  	 * If vmcs12 doesn't use VPID, L1 expects linear and combined mappings
>  	 * for *all* contexts to be flushed on VM-Enter/VM-Exit, i.e. it's a
> @@ -5997,6 +6008,11 @@ static bool nested_vmx_l0_wants_exit(struct kvm_vcpu *vcpu,
>  		 * Handle L2's bus locks in L0 directly.
>  		 */
>  		return true;
> +	case EXIT_REASON_VMCALL:
> +		/* Hyper-V L2 TLB flush hypercall is handled by L0 */
> +		return kvm_hv_l2_tlb_flush_exposed(vcpu) &&
> +			nested_evmcs_l2_tlb_flush_enabled(vcpu) &&
> +			kvm_hv_is_tlb_flush_hcall(vcpu);
>  	default:
>  		break;
>  	}



Looks good,

Reviewed-by: Maxim Levitsky <mlevitsk@xxxxxxxxxx>

Best regards,
	Maxim Levitsky



