From: Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>

This was a suggestion from Mel:

	http://lkml.kernel.org/r/20120914085634.GM11157@xxxxxxxxx

Any pages we collect on 'batch_for_mapping_rm' have lock_page()
held for the duration of their stay on the list.  If some other
user tries to get at one of them during that time, it may end up
having to wait.

This patch ensures that we drain the batch if we are about to
perform a pageout() or congestion_wait(), either of which can take
some time.  We expect this to help mitigate the worst of the
latency increase that the batching could cause.

I added some statistics to the __remove_mapping_batch() code to
track how large the lists are that we pass in to it.  With this
patch, the average list length drops about 10% (from about 4.1 to
3.8).  The workload here was a 'make -j4' kernel compile on a VM
with 200MB of RAM.  I've still got the statistics patch around if
anyone is interested.

Signed-off-by: Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>
---

 linux.git-davehans/mm/vmscan.c |   10 ++++++++++
 1 file changed, 10 insertions(+)

diff -puN mm/vmscan.c~drain-batch-list-during-long-operations mm/vmscan.c
--- linux.git/mm/vmscan.c~drain-batch-list-during-long-operations	2013-05-30 16:07:51.962138013 -0700
+++ linux.git-davehans/mm/vmscan.c	2013-05-30 16:07:51.966138189 -0700
@@ -1003,6 +1003,16 @@ static unsigned long shrink_page_list(st
 		if (!sc->may_writepage)
 			goto keep_locked;
 
+		/*
+		 * We hold a bunch of page locks on the batch.
+		 * pageout() can take a while, so drain the
+		 * batch before we perform pageout().
+		 */
+		nr_reclaimed +=
+			__remove_mapping_batch(&batch_for_mapping_rm,
+					       &ret_pages,
+					       &free_pages);
+
 		/* Page is dirty, try to write it out here */
 		switch (pageout(page, mapping, sc)) {
 		case PAGE_KEEP:
_
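
For readers outside mm/, here is a minimal, standalone userspace
sketch of the "drain the batch before a slow operation" pattern the
hunk above implements.  It is purely illustrative: pthread mutexes
stand in for page locks, and struct item, drain_batch(), and
slow_operation() are made-up names, not the kernel's
__remove_mapping_batch()/pageout().

/*
 * Sketch: hold locks on cheap items in a batch, but release them
 * all before doing anything slow while the batch is pinned.
 *
 * Build: cc -pthread -o drain-sketch drain-sketch.c
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define NITEMS 8

struct item {
	pthread_mutex_t lock;		/* stands in for the page lock */
	int id;
	struct item *next;
};

static struct item items[NITEMS];
static struct item *batch;		/* items whose locks we still hold */

/* Process and unlock everything batched so far. */
static void drain_batch(void)
{
	while (batch) {
		struct item *it = batch;

		batch = it->next;
		printf("draining item %d\n", it->id);
		/* Anyone blocked waiting on this lock can now run. */
		pthread_mutex_unlock(&it->lock);
	}
}

/* Stands in for pageout()/congestion_wait(): slow while locks are held. */
static void slow_operation(struct item *it)
{
	printf("slow operation on item %d\n", it->id);
	usleep(10 * 1000);
}

int main(void)
{
	int i;

	for (i = 0; i < NITEMS; i++) {
		items[i].id = i;
		pthread_mutex_init(&items[i].lock, NULL);
	}

	for (i = 0; i < NITEMS; i++) {
		struct item *it = &items[i];

		pthread_mutex_lock(&it->lock);

		if (i % 3 == 2) {
			/*
			 * About to do something slow while holding a
			 * pile of locks: drain the batch first so nobody
			 * waits on locks we are merely hoarding.
			 */
			drain_batch();
			slow_operation(it);
			pthread_mutex_unlock(&it->lock);
			continue;
		}

		/* Cheap case: defer the unlock by batching the item. */
		it->next = batch;
		batch = it;
	}

	drain_batch();		/* release whatever is left */
	return 0;
}

The point of the pattern is visible in the main loop: locks are only
hoarded across the cheap iterations, and are handed back the moment a
slow call becomes unavoidable.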