Re: [RFC PATCH v2 3/6] KVM: x86/mmu: Pass round full 64-bit error code for the KVM page fault

> From: Sean Christopherson <seanjc@xxxxxxxxxx>
> Date: Fri, 23 Jun 2023 09:46:38 -0700
> Subject: [PATCH] KVM: x86/mmu: Guard against collision with KVM-defined
>  PFERR_IMPLICIT_ACCESS
> 
> Add an assertion in kvm_mmu_page_fault() to ensure the error code provided
> by hardware doesn't conflict with KVM's software-defined IMPLICIT_ACCESS
> flag.  In the unlikely scenario that future hardware starts using bit 48
> for a hardware-defined flag, preserving the bit could result in KVM
> incorrectly interpreting the unknown flag as KVM's IMPLICIT_ACCESS flag.
> 
> WARN so that any such conflict can be surfaced to KVM developers and
> resolved, but otherwise ignore the bit as KVM can't possibly rely on a
> flag it knows nothing about.

I think the fundamental problem is that we mix synthetic bit(s) and the hardware
error code together in a single 'u64'.  Since the hardware vendors (Intel/AMD)
give no guarantee that certain bits will always be reserved for software use,
there's no guarantee the synthetic bit(s) won't conflict with hardware-defined
bits.
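
For illustration, this is roughly how the two sets of bits end up sharing the
one 'u64' today (the definitions below are only a sketch, not an exact copy of
kvm_host.h):

	/* Hardware-defined page fault error code bits (subset). */
	#define PFERR_PRESENT_MASK	BIT_ULL(0)
	#define PFERR_WRITE_MASK	BIT_ULL(1)
	#define PFERR_USER_MASK		BIT_ULL(2)
	#define PFERR_FETCH_MASK	BIT_ULL(4)

	/*
	 * KVM-defined synthetic bit crammed into the same u64.  Nothing
	 * prevents future hardware from defining bit 48 for its own use,
	 * in which case a hardware-generated error code would look like
	 * an implicit access to KVM.
	 */
	#define PFERR_IMPLICIT_ACCESS	BIT_ULL(48)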

Perhaps a fundamental fix would be to pass the software-defined error code in a
separate 'u64' parameter through all relevant code paths.
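
Something along the lines of the below (purely a sketch, the 'kvm_flags'
parameter name is made up), i.e. keep the synthetic bits in their own
parameter instead of OR'ing them into the high bits of the hardware error
code:

	int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
			       u64 error_code, u64 kvm_flags,
			       void *insn, int insn_len);

That obviously touches a lot of callers, though, so it is a much bigger churn
than your one-liner below.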

But I think your fix (or detection) below should be good enough, perhaps for a
long time, and even if such a conflict emerges in the future, we can move the
synthetic bit to a different bit.  The only problem is that we will probably
need the relevant patch(es) backported to stable kernels.

> 
> Fixes: 4f4aa80e3b88 ("KVM: X86: Handle implicit supervisor access with SMAP")
> Signed-off-by: Sean Christopherson <seanjc@xxxxxxxxxx>
> ---
>  arch/x86/kvm/mmu/mmu.c | 11 +++++++++++
>  1 file changed, 11 insertions(+)
> 
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 60397a1beda3..228a483d3746 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -5742,6 +5742,17 @@ int noinline kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u64 err
>  	int r, emulation_type = EMULTYPE_PF;
>  	bool direct = vcpu->arch.mmu->root_role.direct;
>  
> +	/*
> +	 * IMPLICIT_ACCESS is a KVM-defined flag used to correctly perform SMAP
> +	 * checks when emulating instructions that triggers implicit access.
> +	 * WARN if hardware generates a fault with an error code that collides
> +	 * with the KVM-defined value.  Clear the flag and continue on, i.e.
> +	 * don't terminate the VM, as KVM can't possibly be relying on a flag
> +	 * that KVM doesn't know about.
> +	 */
> +	if (WARN_ON_ONCE(error_code & PFERR_IMPLICIT_ACCESS))
> +		error_code &= ~PFERR_IMPLICIT_ACCESS;
> +
>  	if (WARN_ON(!VALID_PAGE(vcpu->arch.mmu->root.hpa)))
>  		return RET_PF_RETRY;
>  
> 
> base-commit: 293375bf2fd333e5563dd80b894725b90cd84c5d
> --




