Re: [PATCH v2 12/13] KVM: x86: add SMM to the MMU role, support SMRAM address space

On 05/28/2015 01:05 AM, Paolo Bonzini wrote:
This is now very simple to do.  The only interesting part is a simple
trick to find the right memslot in gfn_to_rmap, retrieving the address
space from the spte role word.  The same trick is used in the auditing
code.
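For readers following the series, a minimal sketch of what that trick amounts to. It is written purely from the description above and from the kvm_memslots_for_spte_role helper named in the kvm_host.h hunk below; the actual mmu.c hunk may differ in detail:

/*
 * Sketch only: pick the right memslot array from the shadow page's
 * role word.  role.smm doubles as the address space index, so the
 * normal and SMRAM slots are never mixed in the rmap lookup.
 */
static unsigned long *gfn_to_rmap(struct kvm *kvm, gfn_t gfn,
				  struct kvm_mmu_page *sp)
{
	struct kvm_memslots *slots;
	struct kvm_memory_slot *slot;

	slots = kvm_memslots_for_spte_role(kvm, sp->role);
	slot = __gfn_to_memslot(slots, gfn);
	return __gfn_to_rmap(gfn, sp->role.level, slot);
}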

The comment on top of union kvm_mmu_page_role has been stale forever,

Fortunately, we have documented these fields in mmu.txt; please document 'smm' there as well. :)

so remove it.  Speaking of stale code, remove pad_for_nice_hex_output
too: it was splitting the "access" bitfield across two bytes and thus
had effectively turned into pad_for_ugly_hex_output.

Signed-off-by: Paolo Bonzini <pbonzini@xxxxxxxxxx>
---
	v1->v2: new

  arch/x86/include/asm/kvm_host.h | 26 +++++++++++++++-----------
  arch/x86/kvm/mmu.c              | 15 ++++++++++++---
  arch/x86/kvm/mmu_audit.c        | 10 +++++++---
  arch/x86/kvm/x86.c              |  2 ++
  4 files changed, 36 insertions(+), 17 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 5a5e13af6e03..47006683f2fe 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -184,23 +184,12 @@ struct kvm_mmu_memory_cache {
  	void *objects[KVM_NR_MEM_OBJS];
  };

-/*
- * kvm_mmu_page_role, below, is defined as:
- *
- *   bits 0:3 - total guest paging levels (2-4, or zero for real mode)
- *   bits 4:7 - page table level for this shadow (1-4)
- *   bits 8:9 - page table quadrant for 2-level guests
- *   bit   16 - direct mapping of virtual to physical mapping at gfn
- *              used for real mode and two-dimensional paging
- *   bits 17:19 - common access permissions for all ptes in this shadow page
- */
  union kvm_mmu_page_role {
  	unsigned word;
  	struct {
  		unsigned level:4;
  		unsigned cr4_pae:1;
  		unsigned quadrant:2;
-		unsigned pad_for_nice_hex_output:6;
  		unsigned direct:1;
  		unsigned access:3;
  		unsigned invalid:1;
@@ -208,6 +197,15 @@ union kvm_mmu_page_role {
  		unsigned cr0_wp:1;
  		unsigned smep_andnot_wp:1;
  		unsigned smap_andnot_wp:1;
+		unsigned :8;
+
+		/*
+		 * This is left at the top of the word so that
+		 * kvm_memslots_for_spte_role can extract it with a
+		 * simple shift.  While there is room, give it a whole
+		 * byte so it is also faster to load it from memory.
+		 */
+		unsigned smm:8;

I am not sure we really need this trick; smm is not the hottest field in this
struct anyway.
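For what it's worth, the layout trick itself is easy to demonstrate outside the kernel. The toy program below is a stand-alone illustration, not kernel code: the middle role bits are collapsed into one field, and the GCC little-endian bit-field layout is assumed. It shows why keeping smm in the top byte turns the address-space lookup into a single shift:

#include <stdint.h>
#include <stdio.h>

/*
 * Cut-down stand-in for union kvm_mmu_page_role.  Only the low fields
 * and the position of smm (top byte) mirror the patch; everything in
 * between is collapsed into other_bits.
 */
union toy_page_role {
	uint32_t word;
	struct {
		unsigned level:4;
		unsigned cr4_pae:1;
		unsigned quadrant:2;
		unsigned direct:1;
		unsigned access:3;
		unsigned other_bits:13;	/* invalid, nxe, cr0_wp, ... plus padding */
		unsigned smm:8;		/* top byte: the address space index */
	};
};

/* The "simple shift": no mask-and-test needed to recover the index. */
static unsigned int role_to_as_id(union toy_page_role role)
{
	return role.word >> 24;
}

int main(void)
{
	union toy_page_role role = { .word = 0 };

	role.smm = 1;	/* shadow page created while the vCPU was in SMM */
	printf("address space id = %u\n", role_to_as_id(role));	/* prints 1 */
	return 0;
}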

Otherwise looks good to me:
Reviewed-by: Xiao Guangrong <guangrong.xiao@xxxxxxxxxxxxxxx>
--
