The patch titled
     Subject: mm: use only per-device readahead limit
has been added to the -mm tree.  Its filename is
     mm-use-only-per-device-readahead-limit.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/mm-use-only-per-device-readahead-limit.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/mm-use-only-per-device-readahead-limit.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/SubmitChecklist when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Roman Gushchin <klamm@xxxxxxxxxxxxxx>
Subject: mm: use only per-device readahead limit

The maximal readahead size is currently limited by two values:
 1) a global 2MB constant (MAX_READAHEAD in max_sane_readahead())
 2) a configurable per-device value (bdi->ra_pages)

There are devices which require a custom readahead limit.  For instance,
for RAID arrays it is calculated as the number of devices multiplied by
the chunk size, times 2.

The readahead size can never be larger than bdi->ra_pages * 2
(POSIX_FADV_SEQUENTIAL doubles the readahead size).  Given that, why do
we need two limits?  I suggest removing the max_sane_readahead() stuff
completely and using the per-device readahead limit everywhere.

Also, using the right readahead size for RAID disks can significantly
increase I/O performance:

before:
$ dd if=/dev/md2 of=/dev/null bs=100M count=100
100+0 records in
100+0 records out
10485760000 bytes (10 GB) copied, 12.9741 s, 808 MB/s

after:
$ dd if=/dev/md2 of=/dev/null bs=100M count=100
100+0 records in
100+0 records out
10485760000 bytes (10 GB) copied, 8.91317 s, 1.2 GB/s

(This is an 8-disk RAID5 array.)

This patch doesn't change the sys_readahead and madvise(MADV_WILLNEED)
behavior introduced by commit 6d2be915e589b58 ("mm/readahead.c: fix
readahead failure for memoryless NUMA nodes and limit readahead pages").
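For reference, the stripe-geometry calculation mentioned above is done by
the MD driver itself when an array is assembled.  A minimal sketch,
paraphrasing the logic in drivers/md/raid0.c of this era (the helper name
is made up for illustration; this is not part of the patch):

	/*
	 * Sketch: how MD sizes the per-device readahead limit.  Cover two
	 * full stripes so that sequential readahead touches every member
	 * disk: number of disks * chunk size * 2, expressed in pages.
	 */
	static void md_size_ra_pages(struct mddev *mddev)
	{
		int stripe = mddev->raid_disks *
			     ((mddev->chunk_sectors << 9) / PAGE_SIZE);

		if (mddev->queue->backing_dev_info.ra_pages < 2 * stripe)
			mddev->queue->backing_dev_info.ra_pages = 2 * stripe;
	}

This same per-device limit is what userspace sees and tunes via
/sys/block/<dev>/queue/read_ahead_kb (in kilobytes) or
blockdev --getra/--setra (in 512-byte sectors).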
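The bdi->ra_pages * 2 upper bound comes from the POSIX_FADV_SEQUENTIAL
handling in mm/fadvise.c; the relevant excerpt looks roughly like this
(abbreviated sketch of existing mainline code, not part of this patch):

	case POSIX_FADV_SEQUENTIAL:
		/*
		 * A sequential-access hint doubles this file's readahead
		 * window, so the effective window can never exceed
		 * bdi->ra_pages * 2.
		 */
		file->f_ra.ra_pages = bdi->ra_pages * 2;
		spin_lock(&file->f_lock);
		file->f_mode &= ~FMODE_RANDOM;
		spin_unlock(&file->f_lock);
		break;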
Signed-off-by: Roman Gushchin <klamm@xxxxxxxxxxxxxx>
Cc: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
Cc: Raghavendra K T <raghavendra.kt@xxxxxxxxxxxxxxxxxx>
Cc: Jan Kara <jack@xxxxxxx>
Cc: Wu Fengguang <fengguang.wu@xxxxxxxxx>
Cc: David Rientjes <rientjes@xxxxxxxxxx>
Cc: Konstantin Khlebnikov <khlebnikov@xxxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 include/linux/mm.h |    2 --
 mm/filemap.c       |    8 +++-----
 mm/readahead.c     |   14 ++------------
 3 files changed, 5 insertions(+), 19 deletions(-)

diff -puN include/linux/mm.h~mm-use-only-per-device-readahead-limit include/linux/mm.h
--- a/include/linux/mm.h~mm-use-only-per-device-readahead-limit
+++ a/include/linux/mm.h
@@ -2015,8 +2015,6 @@ void page_cache_async_readahead(struct a
 			pgoff_t offset,
 			unsigned long size);
 
-unsigned long max_sane_readahead(unsigned long nr);
-
 /* Generic expand stack which grows the stack according to GROWS{UP,DOWN} */
 extern int expand_stack(struct vm_area_struct *vma, unsigned long address);
 
diff -puN mm/filemap.c~mm-use-only-per-device-readahead-limit mm/filemap.c
--- a/mm/filemap.c~mm-use-only-per-device-readahead-limit
+++ a/mm/filemap.c
@@ -1807,7 +1807,6 @@ static void do_sync_mmap_readahead(struc
 				   struct file *file,
 				   pgoff_t offset)
 {
-	unsigned long ra_pages;
 	struct address_space *mapping = file->f_mapping;
 
 	/* If we don't want any read-ahead, don't bother */
@@ -1836,10 +1835,9 @@ static void do_sync_mmap_readahead(struc
 	/*
 	 * mmap read-around
 	 */
-	ra_pages = max_sane_readahead(ra->ra_pages);
-	ra->start = max_t(long, 0, offset - ra_pages / 2);
-	ra->size = ra_pages;
-	ra->async_size = ra_pages / 4;
+	ra->start = max_t(long, 0, offset - ra->ra_pages / 2);
+	ra->size = ra->ra_pages;
+	ra->async_size = ra->ra_pages / 4;
 	ra_submit(ra, mapping, file);
 }
 
diff -puN mm/readahead.c~mm-use-only-per-device-readahead-limit mm/readahead.c
--- a/mm/readahead.c~mm-use-only-per-device-readahead-limit
+++ a/mm/readahead.c
@@ -213,7 +213,7 @@ int force_page_cache_readahead(struct ad
 	if (unlikely(!mapping->a_ops->readpage && !mapping->a_ops->readpages))
 		return -EINVAL;
 
-	nr_to_read = max_sane_readahead(nr_to_read);
+	nr_to_read = min(nr_to_read, inode_to_bdi(mapping->host)->ra_pages);
 	while (nr_to_read) {
 		int err;
 
@@ -232,16 +232,6 @@ int force_page_cache_readahead(struct ad
 	return 0;
 }
 
-#define MAX_READAHEAD   ((512*4096)/PAGE_CACHE_SIZE)
-/*
- * Given a desired number of PAGE_CACHE_SIZE readahead pages, return a
- * sensible upper limit.
- */
-unsigned long max_sane_readahead(unsigned long nr)
-{
-	return min(nr, MAX_READAHEAD);
-}
-
 /*
  * Set the initial window size, round to next power of 2 and square
  * for small size, x 4 for medium, and x 2 for large
@@ -380,7 +370,7 @@ ondemand_readahead(struct address_space
 		   bool hit_readahead_marker, pgoff_t offset,
 		   unsigned long req_size)
 {
-	unsigned long max = max_sane_readahead(ra->ra_pages);
+	unsigned long max = ra->ra_pages;
 	pgoff_t prev_offset;
 
 	/*
_

Patches currently in -mm which might be from klamm@xxxxxxxxxxxxxx are

mm-use-only-per-device-readahead-limit.patch

--
To unsubscribe from this list: send the line "unsubscribe mm-commits" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html