Re: [PATCH 2/5 v4] KVM: nSVM: Check addresses of MSR and IO permission maps

On 3/24/21 12:15 PM, Sean Christopherson wrote:
On Wed, Mar 24, 2021, Krish Sadhukhan wrote:
According to section "Canonicalization and Consistency Checks" in APM vol 2,
the following guest state is illegal:

     "The MSR or IOIO intercept tables extend to a physical address that
      is greater than or equal to the maximum supported physical address."

Suggested-by: Paolo Bonzini <pbonzini@xxxxxxxxxx>
Signed-off-by: Krish Sadhukhan <krish.sadhukhan@xxxxxxxxxx>
---
  arch/x86/kvm/svm/nested.c | 28 +++++++++++++++++++++-------
  1 file changed, 21 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 35891d9a1099..b08d1c595736 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -231,7 +231,15 @@ static bool svm_get_nested_state_pages(struct kvm_vcpu *vcpu)
  	return true;
  }
-static bool nested_vmcb_check_controls(struct vmcb_control_area *control)
+static bool nested_svm_check_bitmap_pa(struct kvm_vcpu *vcpu, u64 pa,
+				       u8 order)
+{
+	u64 last_pa = PAGE_ALIGN(pa) + (PAGE_SIZE << order) - 1;
Ugh, I really wish things that "must" happen were actually enforced by hardware.

   The MSRPM must be aligned on a 4KB boundary... The VMRUN instruction ignores
   the lower 12 bits of the address specified in the VMCB.

So, ignoring an unaligned @pa is correct, but that means
nested_svm_exit_handled_msr() and nested_svm_intercept_ioio() are busted.
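(For illustration only, not part of this patch: one way to mirror the hardware
behavior would be to mask the L1-provided base wherever it is consumed, e.g. an
untested sketch along the lines of

	/* Drop the low 12 bits, as VMRUN does, before using the base as a GPA. */
	static u64 nested_svm_msrpm_base(struct vcpu_svm *svm)
	{
		return svm->nested.ctl.msrpm_base_pa & ~0x0fffULL;
	}

with nested_svm_exit_handled_msr() then doing

	if (kvm_vcpu_read_guest(&svm->vcpu, nested_svm_msrpm_base(svm) + offset,
				&value, 4))

and the analogous masking of iopm_base_pa in nested_svm_intercept_ioio(). The
helper name here is made up.)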


How about we call PAGE_ALIGN() on the addresses where they are allocated, i.e. in svm_vcpu_alloc_msrpm() and in svm_hardware_setup()? That way, even if we are not checking for alignment here, we are still good.


+	return last_pa > pa && !(last_pa >> cpuid_maxphyaddr(vcpu));
Please use kvm_vcpu_is_legal_gpa().

+}
+
+static bool nested_vmcb_check_controls(struct kvm_vcpu *vcpu,
+				       struct vmcb_control_area *control)
  {
  	if ((vmcb_is_intercept(control, INTERCEPT_VMRUN)) == 0)
  		return false;
@@ -243,12 +251,18 @@ static bool nested_vmcb_check_controls(struct vmcb_control_area *control)
  	    !npt_enabled)
  		return false;

+	if (!nested_svm_check_bitmap_pa(vcpu, control->msrpm_base_pa,
+	    MSRPM_ALLOC_ORDER))
+		return false;
+	if (!nested_svm_check_bitmap_pa(vcpu, control->iopm_base_pa,
+	    IOPM_ALLOC_ORDER))
I strongly dislike using the alloc orders; relying on kernel behavior to
represent architectural values is sketchy.  Case in point: IOPM_ALLOC_ORDER
yields a 16KB allocation, whereas the actual architectural size of the IOPM is 12KB.


You're right, the IOPM check is wrong.

  I also called this out
in v1...

https://lkml.kernel.org/r/YAd9MBkpDjC1MY6E@google.com


OK, I will define the actual size.
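Something along these lines, perhaps (untested sketch; the *_SIZE names are
placeholders, and the helper uses the kvm_vcpu_is_legal_gpa() suggested above):

	/* Architectural sizes, independent of the host allocation order. */
	#define MSRPM_SIZE	(PAGE_SIZE * 2)	/* 8KB MSR permission map */
	#define IOPM_SIZE	(PAGE_SIZE * 3)	/* 12KB I/O permission map */

	static bool nested_svm_check_bitmap_pa(struct kvm_vcpu *vcpu, u64 pa,
					       u32 size)
	{
		/* VMRUN ignores the low 12 bits of the base address. */
		u64 base = pa & PAGE_MASK;

		return kvm_vcpu_is_legal_gpa(vcpu, base) &&
		       kvm_vcpu_is_legal_gpa(vcpu, base + size - 1);
	}

with the callers passing MSRPM_SIZE/IOPM_SIZE instead of the alloc orders.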

BTW, can we switch to alloc_pages_exact() instead of alloc_pages() for allocating the IOPM bitmap? The IOPM stays allocated throughout the lifetime of the guest, and hence it won't impact performance much.
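(Untested sketch of what that could look like in svm_hardware_setup(), reusing
the IOPM_SIZE placeholder from above:

	/* Exact 12KB allocation instead of an order-2 page block. */
	iopm_va = alloc_pages_exact(IOPM_SIZE, GFP_KERNEL);
	if (!iopm_va)
		return -ENOMEM;
	memset(iopm_va, 0xff, IOPM_SIZE);	/* intercept all I/O ports */
	iopm_base = __pa(iopm_va);

	/* ...and on teardown: */
	free_pages_exact(__va(iopm_base), IOPM_SIZE);

)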

+		return false;
+
  	return true;
  }
-static bool nested_vmcb_checks(struct vcpu_svm *svm, struct vmcb *vmcb12)
+static bool nested_vmcb_checks(struct kvm_vcpu *vcpu, struct vmcb *vmcb12)
  {
-	struct kvm_vcpu *vcpu = &svm->vcpu;
  	bool vmcb12_lma;

  	if ((vmcb12->save.efer & EFER_SVME) == 0)
@@ -268,10 +282,10 @@ static bool nested_vmcb_checks(struct vcpu_svm *svm, struct vmcb *vmcb12)
  		    kvm_vcpu_is_illegal_gpa(vcpu, vmcb12->save.cr3))
  			return false;
  	}
-	if (!kvm_is_valid_cr4(&svm->vcpu, vmcb12->save.cr4))
+	if (!kvm_is_valid_cr4(vcpu, vmcb12->save.cr4))
  		return false;

-	return nested_vmcb_check_controls(&vmcb12->control);
+	return nested_vmcb_check_controls(vcpu, &vmcb12->control);
  }

  static void load_nested_vmcb_control(struct vcpu_svm *svm,
@@ -515,7 +529,7 @@ int nested_svm_vmrun(struct vcpu_svm *svm)
  	if (WARN_ON_ONCE(!svm->nested.initialized))
  		return -EINVAL;

-	if (!nested_vmcb_checks(svm, vmcb12)) {
+	if (!nested_vmcb_checks(&svm->vcpu, vmcb12)) {
Please use @vcpu directly.


It's all cleaned up in patch #3.

   Looks like this needs a rebase, as the prototype for
nested_svm_vmrun() is wrong relative to kvm/queue.

  		vmcb12->control.exit_code    = SVM_EXIT_ERR;
  		vmcb12->control.exit_code_hi = 0;
  		vmcb12->control.exit_info_1  = 0;
@@ -1191,7 +1205,7 @@ static int svm_set_nested_state(struct kvm_vcpu *vcpu,
  		goto out_free;

  	ret = -EINVAL;
-	if (!nested_vmcb_check_controls(ctl))
+	if (!nested_vmcb_check_controls(vcpu, ctl))
  		goto out_free;

  	/*
--
2.27.0



