[RFC PATCH 1/4] mm: munmap optimise single threaded page freeing

When a single-threaded process is zapping its own mappings, there can
be no concurrent memory accesses through the TLBs, so it is safe to
free pages immediately rather than batch them up.
---
 mm/memory.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/mm/memory.c b/mm/memory.c
index 135d18b31e44..773d588b371d 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -296,6 +296,15 @@ bool __tlb_remove_page_size(struct mmu_gather *tlb, struct page *page, int page_
 	VM_BUG_ON(!tlb->end);
 	VM_WARN_ON(tlb->page_size != page_size);
 
+	/*
+	 * When this is our mm and there are no other users, there can not be
+	 * a concurrent memory access.
+	 */
+	if (current->mm == tlb->mm && atomic_read(&tlb->mm->mm_users) < 2) {
+		free_page_and_swap_cache(page);
+		return false;
+	}
+
 	batch = tlb->active;
 	/*
 	 * Add the page and check if we are full. If so
-- 
2.17.0
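The fast path added by the hunk can be illustrated outside the kernel. The sketch below uses hypothetical userspace stand-ins for `struct mm_struct`, `struct mmu_gather`, and the current task (in the kernel, `mm_users` is an `atomic_t` read with `atomic_read()`, and `current` is a per-CPU macro); it only demonstrates the predicate, not the actual page freeing:

```c
#include <stdbool.h>

/* Hypothetical stand-ins for the kernel structures involved. */
struct mm_struct {
	int mm_users;		/* kernel: atomic_t, read via atomic_read() */
};

struct task {
	struct mm_struct *mm;
};

struct mmu_gather {
	struct mm_struct *mm;
};

/*
 * Mirrors the patch's condition: an immediate free (bypassing the
 * mmu_gather batch) is safe only when the gather targets the current
 * task's own mm and no other user shares that mm, i.e. no other
 * thread can be accessing the pages through stale TLB entries.
 */
static bool can_free_immediately(const struct task *tsk,
				 const struct mmu_gather *tlb)
{
	return tsk->mm == tlb->mm && tlb->mm->mm_users < 2;
}
```

Note the two halves of the check: `tsk->mm == tlb->mm` rules out zapping another process's mappings (e.g. via ptrace or a reaper), and `mm_users < 2` rules out sibling threads sharing the mm. If either fails, the page must still go through the batched TLB flush.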
