To do page reclamation in the shrink_page_list function, two locks are taken on a page-by-page basis: the tree lock protecting the radix tree of the page mapping, and the mapping->i_mmap_mutex protecting the reverse mapping of file-mapped pages. I tried to batch the operations on pages sharing the same lock to reduce lock contention. The first patch batches the operations under the tree lock, while the second batches the checking of file page references under the i_mmap_mutex. I measured a 14% throughput improvement with a workload that puts heavy pressure on the page cache by reading many large mmaped files simultaneously on an 8-socket Westmere server.

There are some ugly hacks in the patches to pass information about whether the i_mmap_mutex is locked. Suggestions on a better approach and reviews of the patches are appreciated. A minimal sketch of the batching idea follows the diffstat below.

Tim

Signed-off-by: Tim Chen <tim.c.chen@xxxxxxxxxxxxxxx>
---
Diffstat

 include/linux/rmap.h |    6 +-
 mm/memory-failure.c  |    2 +-
 mm/migrate.c         |    4 +-
 mm/rmap.c            |   28 ++++++----
 mm/vmscan.c          |  139 +++++++++++++++++++++++++++++++++++++++++++++-----
 5 files changed, 147 insertions(+), 32 deletions(-)
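
For readers not familiar with the patches, here is a minimal userspace sketch of the batching idea, not the actual kernel code: all names (struct mapping, reclaim_batched, etc.) are made up for illustration, with a pthread mutex standing in for the per-mapping lock. The point is simply that consecutive pages sharing the same mapping are handled under one lock acquisition instead of one per page.

/*
 * Hypothetical illustration only; assumes pages arrive grouped (or sorted)
 * by mapping so that a lock can be held across a run of pages.
 */
#include <pthread.h>
#include <stdio.h>

struct mapping {
	pthread_mutex_t lock;		/* stands in for the per-mapping lock */
};

struct page {
	struct mapping *mapping;
	int		reclaimed;
};

/* Unbatched: one lock/unlock round trip per page. */
static void reclaim_unbatched(struct page *pages, int n)
{
	int i;

	for (i = 0; i < n; i++) {
		pthread_mutex_lock(&pages[i].mapping->lock);
		pages[i].reclaimed = 1;		/* per-page work under the lock */
		pthread_mutex_unlock(&pages[i].mapping->lock);
	}
}

/* Batched: keep the lock held across consecutive pages of the same mapping. */
static void reclaim_batched(struct page *pages, int n)
{
	struct mapping *locked = NULL;
	int i;

	for (i = 0; i < n; i++) {
		if (pages[i].mapping != locked) {
			if (locked)
				pthread_mutex_unlock(&locked->lock);
			locked = pages[i].mapping;
			pthread_mutex_lock(&locked->lock);
		}
		pages[i].reclaimed = 1;		/* per-page work under the lock */
	}
	if (locked)
		pthread_mutex_unlock(&locked->lock);
}

int main(void)
{
	struct mapping m = { .lock = PTHREAD_MUTEX_INITIALIZER };
	struct page pages[4] = {
		{ &m, 0 }, { &m, 0 }, { &m, 0 }, { &m, 0 },
	};

	reclaim_unbatched(pages, 4);
	reclaim_batched(pages, 4);
	printf("done\n");
	return 0;
}

Under contention, the batched loop takes the lock once per run of pages with the same mapping rather than once per page, which is where the reduction in lock acquisitions (and hence contention) comes from.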