Re: [PATCH 08/11] KVM: MMU: use page track for non-leaf shadow pages

On 12/15/2015 05:10 PM, Xiao Guangrong wrote:


On 12/15/2015 03:52 PM, Kai Huang wrote:

  static bool __mmu_gfn_lpage_is_disallowed(gfn_t gfn, int level,
@@ -2140,12 +2150,18 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
      hlist_add_head(&sp->hash_link,
          &vcpu->kvm->arch.mmu_page_hash[kvm_page_table_hashfn(gfn)]);
      if (!direct) {
-        if (rmap_write_protect(vcpu, gfn))
+        /*
+         * we should do write protection before syncing pages
+         * otherwise the content of the synced shadow page may
+         * be inconsistent with guest page table.
+         */
+        account_shadowed(vcpu->kvm, sp);
+
+        if (level == PT_PAGE_TABLE_LEVEL &&
+              rmap_write_protect(vcpu, gfn))
              kvm_flush_remote_tlbs(vcpu->kvm);
I think your modification is good, but I am a little bit confused here. In account_shadowed, if sp->role.level > PT_PAGE_TABLE_LEVEL, the sp->gfn is write-protected, and this is reasonable. So why write-protect the gfn at PT_PAGE_TABLE_LEVEL here as well?
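
(For context, here is a rough sketch of what account_shadowed() looks like after this series. It is paraphrased from the discussion; the add-side helper name and the KVM_PAGE_TRACK_WRITE mode are approximations inferred from the remove-side _nolock helper quoted further down, not copied from the patch.)

/* Sketch only: helper names are approximations, see note above. */
static void account_shadowed(struct kvm *kvm, struct kvm_mmu_page *sp)
{
	struct kvm_memslots *slots;
	struct kvm_memory_slot *slot;
	gfn_t gfn = sp->gfn;

	kvm->arch.indirect_shadow_pages++;

	slots = kvm_memslots_for_spte_role(kvm, sp->role);
	slot = __gfn_to_memslot(slots, gfn);

	/*
	 * Non-leaf shadow pages are kept write-protected via the
	 * page-track pool, so the guest cannot rewrite the upper-level
	 * page table behind our back.
	 */
	if (sp->role.level > PT_PAGE_TABLE_LEVEL)
		return kvm_slot_page_track_add_page_nolock(kvm, slot, gfn,
							   KVM_PAGE_TRACK_WRITE);

	/* Leaf level: only forbid large-page mappings for this gfn. */
	kvm_mmu_gfn_disallow_lpage(slot, gfn);
}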

Because the shadow page will become 'sync', which means it will be synced with the page table in the guest. So the shadow page needs to be write-protected to prevent the guest page table from being changed while we do the sync.

The shadow page needs to be write-protected so that the guest page table cannot be changed while we are syncing the shadow page table. See kvm_sync_pages(), which is called after
rmap_write_protect().
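
In other words, the resulting order in kvm_mmu_get_page() is roughly the following (a condensed sketch based on the hunk quoted above plus the pre-existing need_sync check; unrelated code omitted):

	if (!direct) {
		/*
		 * Write-protect first: non-leaf gfns are protected by
		 * account_shadowed() through the page-track pool, leaf
		 * gfns by rmap_write_protect().  Only after that is it
		 * safe to sync, because the guest can no longer modify
		 * the page table we are about to copy from.
		 */
		account_shadowed(vcpu->kvm, sp);

		if (level == PT_PAGE_TABLE_LEVEL &&
		    rmap_write_protect(vcpu, gfn))
			kvm_flush_remote_tlbs(vcpu->kvm);

		if (level > PT_PAGE_TABLE_LEVEL && need_sync)
			kvm_sync_pages(vcpu, gfn);
	}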
I see. So why do you treat the PT_PAGE_TABLE_LEVEL gfn separately here? Why can't this be done in account_shadowed, as you do for the upper-level sps? Actually, I am wondering whether account_shadowed is overdoing things. Judging by its name it should only *account* the shadow sp, but now it also does write protection and disables large page mapping.

Thanks,
-Kai

  /*
 * remove the guest page from the tracking pool which stops the interception
 * of corresponding access on that page. It is the opposed operation of
@@ -134,20 +160,12 @@ void kvm_page_track_remove_page(struct kvm *kvm, gfn_t gfn,
      struct kvm_memory_slot *slot;
      int i;
-    WARN_ON(!check_mode(mode));
-
      for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
          slots = __kvm_memslots(kvm, i);
          slot = __gfn_to_memslot(slots, gfn);
          spin_lock(&kvm->mmu_lock);
-        update_gfn_track(slot, gfn, mode, -1);
-
-        /*
-         * allow large page mapping for the tracked page
-         * after the tracker is gone.
-         */
-        kvm_mmu_gfn_allow_lpage(slot, gfn);
+        kvm_slot_page_track_remove_page_nolock(kvm, slot, gfn, mode);
Looks like you need to merge this part with patch 1, as you are modifying
kvm_page_track_{add,remove}_page here, which are introduced in patch 1.

Indeed, it is better.
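
(For readers following the diff: based on the call site and the lines removed above, the new helper presumably looks roughly like the sketch below. The real definition is introduced in patch 1 of the series, which is why merging was suggested.)

/* Caller holds kvm->mmu_lock; reconstructed from the removed lines above. */
void kvm_slot_page_track_remove_page_nolock(struct kvm *kvm,
					    struct kvm_memory_slot *slot,
					    gfn_t gfn,
					    enum kvm_page_track_mode mode)
{
	update_gfn_track(slot, gfn, mode, -1);

	/*
	 * allow large page mapping for the tracked page
	 * after the tracker is gone.
	 */
	kvm_mmu_gfn_allow_lpage(slot, gfn);
}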





