[PATCH 6/8] KVM: MMU: Introduce free_zapped_mmu_pages() for freeing mmu pages in a list

Introduce free_zapped_mmu_pages(), split out from
kvm_mmu_commit_zap_page(); a later patch will move the call out of the
protection of the mmu_lock.

Note: kvm_mmu_isolate_page() is folded into kvm_mmu_free_page() since it
now does nothing but free sp->gfns.

Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@xxxxxxxxxxxxx>
---
 arch/x86/kvm/mmu.c |   35 +++++++++++++++++------------------
 1 files changed, 17 insertions(+), 18 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index a72c573..97d372a 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1461,27 +1461,32 @@ static inline void kvm_mod_used_mmu_pages(struct kvm *kvm, int nr)
 }
 
 /*
- * Remove the sp from shadow page cache, after call it,
- * we can not find this sp from the cache, and the shadow
- * page table is still valid.
- * It should be under the protection of mmu lock.
+ * Free the shadow page table and the sp, we can do it
+ * out of the protection of mmu lock.
  */
-static void kvm_mmu_isolate_page(struct kvm_mmu_page *sp)
+static void kvm_mmu_free_page(struct kvm_mmu_page *sp)
 {
 	ASSERT(is_empty_shadow_page(sp->spt));
+
 	if (!sp->role.direct)
 		free_page((unsigned long)sp->gfns);
+
+	list_del(&sp->link);
+	free_page((unsigned long)sp->spt);
+	kmem_cache_free(mmu_page_header_cache, sp);
 }
 
 /*
- * Free the shadow page table and the sp, we can do it
- * out of the protection of mmu lock.
+ * Free zapped mmu pages in @invalid_list.
+ * Call this after releasing mmu_lock if possible.
  */
-static void kvm_mmu_free_page(struct kvm_mmu_page *sp)
+static void free_zapped_mmu_pages(struct kvm *kvm,
+				  struct list_head *invalid_list)
 {
-	list_del(&sp->link);
-	free_page((unsigned long)sp->spt);
-	kmem_cache_free(mmu_page_header_cache, sp);
+	struct kvm_mmu_page *sp, *nsp;
+
+	list_for_each_entry_safe(sp, nsp, invalid_list, link)
+		kvm_mmu_free_page(sp);
 }
 
 static unsigned kvm_page_table_hashfn(gfn_t gfn)
@@ -2133,8 +2138,6 @@ static int kvm_mmu_prepare_zap_page(struct kvm *kvm, struct kvm_mmu_page *sp,
 static void kvm_mmu_commit_zap_page(struct kvm *kvm,
 				    struct list_head *invalid_list)
 {
-	struct kvm_mmu_page *sp, *nsp;
-
 	if (list_empty(invalid_list))
 		return;
 
@@ -2150,11 +2153,7 @@ static void kvm_mmu_commit_zap_page(struct kvm *kvm,
 	 */
 	kvm_flush_remote_tlbs(kvm);
 
-	list_for_each_entry_safe(sp, nsp, invalid_list, link) {
-		WARN_ON(!sp->role.invalid || sp->root_count);
-		kvm_mmu_isolate_page(sp);
-		kvm_mmu_free_page(sp);
-	}
+	free_zapped_mmu_pages(kvm, invalid_list);
 }
 
 /*
-- 
1.7.5.4
