[PATCH/RFC v2 3/3] tlb: mmu_gather: use batched table free if possible

When __tlb_remove_table() is implemented via
free_page_and_swap_cache(), use free_pages_and_swap_cache_nolru() for
batched table removal.

This allows freeing the tables with a single release_pages() call
instead of a loop of put_page() calls, which should perform better,
especially when memcg accounting is enabled.
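
For illustration, this is the before/after shape of the free path in
__tlb_remove_table_free() (a simplified sketch of the hunk below, not
part of the patch itself):

	/* before: one free_page_and_swap_cache(), i.e. one put_page(),
	 * per table page */
	for (i = 0; i < batch->nr; i++)
		__tlb_remove_table(batch->tables[i]);

	/* after: hand the whole array to __tlb_remove_tables(), which
	 * maps to free_pages_and_swap_cache_nolru() and releases all
	 * pages via a single release_pages() call */
	__tlb_remove_tables(batch->tables, batch->nr);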

Signed-off-by: Nikita Yushchenko <nikita.yushchenko@xxxxxxxxxxxxx>
---
 mm/mmu_gather.c | 17 +++++++++++++----
 1 file changed, 13 insertions(+), 4 deletions(-)

diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index eb2f30a92462..2e75d396bbad 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -98,15 +98,24 @@ static inline void __tlb_remove_table(void *table)
 {
 	free_page_and_swap_cache((struct page *)table);
 }
-#endif
 
-static void __tlb_remove_table_free(struct mmu_table_batch *batch)
+static inline void __tlb_remove_tables(void **tables, int nr)
+{
+	free_pages_and_swap_cache_nolru((struct page **)tables, nr);
+}
+#else
+static inline void __tlb_remove_tables(void **tables, int nr)
 {
 	int i;
 
-	for (i = 0; i < batch->nr; i++)
-		__tlb_remove_table(batch->tables[i]);
+	for (i = 0; i < nr; i++)
+		__tlb_remove_table(tables[i]);
+}
+#endif
 
+static void __tlb_remove_table_free(struct mmu_table_batch *batch)
+{
+	__tlb_remove_tables(batch->tables, batch->nr);
 	free_page((unsigned long)batch);
 }
 
-- 
2.30.2



