Re: [Patch v3] Enable CPU SMEP feature for KVM


On 05/26/2011 04:28 PM, Yang, Wei Y wrote:
This patchset enables a new CPU feature, SMEP (Supervisor Mode Execution
Protection), in KVM. SMEP prevents the kernel from executing code in application (user-mode) pages.
The updated Intel SDM describes this CPU feature; the document will be published soon.

This patchset is based on Fenghua's SMEP patch series, referenced here:
https://lkml.org/lkml/2011/5/17/523



diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 6c4dc01..7e0b2f8 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -120,7 +120,7 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker,
  				    struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
  				    gva_t addr, u32 access)
  {
-	pt_element_t pte;
+	pt_element_t pte, pte_smep;
  	pt_element_t __user *ptep_user;
  	gfn_t table_gfn;
  	unsigned index, pt_access, uninitialized_var(pte_access);
@@ -150,7 +150,11 @@ walk:
  		}
  		--walker->level;
  	}
+	pte_smep = ~0ULL;
+#else
+	pte_smep = ~0U;
  #endif

Use pte_smep = ~(pt_element_t)0; here. But I don't understand why you don't use pt_access instead.


+			if (unlikely(fetch_fault && !user_fault))

Why are kernel fetch faults unlikely? Rather, a SMEP fault is unlikely, so annotate that instead.

+				if ((vcpu->arch.cr4 & X86_CR4_SMEP)
+						&& (pte_smep & PT_USER_MASK))
+					eperm = true;
+

Use kvm_read_cr4_bits() here rather than reading vcpu->arch.cr4 directly.

  			gfn = gpte_to_gfn_lvl(pte, lvl);
  			gfn += (addr & PT_LVL_OFFSET_MASK(lvl)) >> PAGE_SHIFT;

@@ -305,7 +316,7 @@ error:

  	walker->fault.error_code |= write_fault | user_fault;

-	if (fetch_fault && mmu->nx)
+	if (fetch_fault && (mmu->nx || (vcpu->arch.cr4 & X86_CR4_SMEP)))

Here, too.

@@ -4507,6 +4507,15 @@ static void vmx_cpuid_update(struct kvm_vcpu *vcpu)
  			}
  		}
  	}
+
+	best = kvm_find_cpuid_entry(vcpu, 7, 0);
+	if (best && (best->ebx & bit(X86_FEATURE_SMEP))) {
+		if (boot_cpu_has(X86_FEATURE_SMEP))
+			vcpu->arch.cr4_reserved_bits &=
+				~((unsigned long)X86_CR4_SMEP);

This fails if cpuid is updated again to include SMEP. But I suggest dropping cr4_reserved_bits completely and checking cpuid directly in kvm_set_cr4() when cr4.smep is changed, which should be very rare. See how XSAVE is supported, for example.

+		else
+			best->ebx &= ~(bit(X86_FEATURE_SMEP));

Not needed - x86.c already masks unsupported features.

+	}
  }

--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.


