[RFC 39/43] shmem: optimize adding pages to the LRU in shmem_insert_pages()

Reduce LRU lock contention when inserting shmem pages by staging pages
destined for the same LRU and adding them to it en masse under a single
acquisition of the lock.

Signed-off-by: Anthony Yznaga <anthony.yznaga@xxxxxxxxxx>
---
 mm/shmem.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index ca5edf580f24..678a396ba8d3 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -789,9 +789,12 @@ int shmem_insert_pages(struct mm_struct *mm, struct inode *inode, pgoff_t index,
 	struct shmem_sb_info *sbinfo = SHMEM_SB(inode->i_sb);
 	gfp_t gfp = mapping_gfp_mask(mapping);
 	struct mem_cgroup *memcg;
+	struct lru_splice splice;
 	int i, err;
 	int nr = 0;
 
+	memset(&splice, 0, sizeof(splice));
+
 	for (i = 0; i < npages; i++)
 		nr += compound_nr(pages[i]);
 
@@ -866,7 +869,7 @@ int shmem_insert_pages(struct mm_struct *mm, struct inode *inode, pgoff_t index,
 		}
 
 		if (!PageLRU(pages[i]))
-			lru_cache_add_anon(pages[i]);
+			lru_splice_add_anon(pages[i], &splice);
 
 		flush_dcache_page(pages[i]);
 		SetPageUptodate(pages[i]);
@@ -875,6 +878,9 @@ int shmem_insert_pages(struct mm_struct *mm, struct inode *inode, pgoff_t index,
 		unlock_page(pages[i]);
 	}
 
+	if (splice.pgdat)
+		add_splice_to_lru_list(&splice);
+
 	return 0;
 
 out_truncate:
-- 
2.13.3



