From: Andi Kleen <ak@xxxxxxxxxxxxxxx>

The original INT_MAX cap is too large; reduce it to

- avoid unnecessarily dirtying/bouncing the cache line
- restore mmap read-around faster on a changed access pattern

Background: in the mosbench exim benchmark, which does multi-threaded
page faults on a shared struct file, the ra->mmap_miss updates were
found to cause excessive cache line bouncing on tmpfs. The ra state
updates are needless for tmpfs because it disables readahead entirely
(shmem_backing_dev_info.ra_pages == 0).

Tested-by: Tim Chen <tim.c.chen@xxxxxxxxx>
Signed-off-by: Andi Kleen <ak@xxxxxxxxxxxxxxx>
Signed-off-by: Wu Fengguang <fengguang.wu@xxxxxxxxx>
---
 mm/filemap.c |    3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

--- linux-next.orig/mm/filemap.c	2011-04-23 09:01:44.000000000 +0800
+++ linux-next/mm/filemap.c	2011-04-23 09:17:21.000000000 +0800
@@ -1538,7 +1538,8 @@ static void do_sync_mmap_readahead(struc
 		return;
 	}
 
-	if (ra->mmap_miss < INT_MAX)
+	/* Avoid banging the cache line if not needed */
+	if (ra->mmap_miss < MMAP_LOTSAMISS * 10)
 		ra->mmap_miss++;
 
 	/*
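
For reference, a minimal user-space sketch (not part of the patch) of the
counter arithmetic behind the new cap. It assumes the mm/filemap.c behaviour
of this era: do_sync_mmap_readahead() increments mmap_miss on a miss up to
the cap and skips read-around once mmap_miss > MMAP_LOTSAMISS, while
do_async_mmap_readahead() decrements it on a cached fault. Everything else
(function names, the main() driver) is illustrative only:

/* Sketch: how many cache-hit faults does it take to re-enable
 * read-around once mmap_miss has saturated at a given cap? */
#include <limits.h>
#include <stdio.h>

#define MMAP_LOTSAMISS 100

static long hits_to_reenable(long cap)
{
	long mmap_miss = cap;	/* counter saturated at the cap */
	long hits = 0;

	while (mmap_miss > MMAP_LOTSAMISS) {	/* read-around still disabled */
		mmap_miss--;			/* one decrement per cached fault */
		hits++;
	}
	return hits;
}

int main(void)
{
	printf("cap = INT_MAX           : %ld hits to re-enable read-around\n",
	       hits_to_reenable(INT_MAX));
	printf("cap = MMAP_LOTSAMISS*10 : %ld hits to re-enable read-around\n",
	       hits_to_reenable(MMAP_LOTSAMISS * 10));
	return 0;
}

With the old cap a workload that turns sequential again may need on the
order of two billion cached faults before read-around resumes; with the
new cap it needs at most 900, while the threshold that disables
read-around (mmap_miss > MMAP_LOTSAMISS) is unchanged.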