On Sun, Jul 10, 2011 at 03:41:20AM +0800, Raghavendra D Prabhu wrote:
> page_cache_sync_readahead checks for ra->ra_pages again, so moving
> the check after VM_SequentialReadHint.

NAK. This patch adds nothing but overheads.

> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -1566,8 +1566,6 @@ static void do_sync_mmap_readahead(struct vm_area_struct *vma,
> 	/* If we don't want any read-ahead, don't bother */
> 	if (VM_RandomReadHint(vma))
> 		return;
> -	if (!ra->ra_pages)
> -		return;
>
> 	if (VM_SequentialReadHint(vma)) {
> 		page_cache_sync_readahead(mapping, ra, file, offset,
> @@ -1575,6 +1573,9 @@ static void do_sync_mmap_readahead(struct vm_area_struct *vma,
> 		return;
> 	}
>
> +	if (!ra->ra_pages)
> +		return;
> +

page_cache_sync_readahead() has the same

	if (!ra->ra_pages)
		return;

So the patch adds the call into page_cache_sync_readahead() just to
return.

Thanks,
Fengguang
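
For reference, the early return Fengguang points to sits at the top of
page_cache_sync_readahead() in mm/readahead.c. A sketch of roughly how the
function looked in kernels of that era (details may differ between versions):

	void page_cache_sync_readahead(struct address_space *mapping,
				       struct file_ra_state *ra, struct file *filp,
				       pgoff_t offset, unsigned long req_size)
	{
		/* no read-ahead configured: nothing to do */
		if (!ra->ra_pages)
			return;

		/* explicit random-access hint: do dumb forced readahead */
		if (filp && (filp->f_mode & FMODE_RANDOM)) {
			force_page_cache_readahead(mapping, filp, offset, req_size);
			return;
		}

		/* normal path: on-demand readahead heuristics */
		ondemand_readahead(mapping, ra, filp, false, offset, req_size);
	}

So with the patch applied, a VM_SequentialReadHint mapping whose ra_pages is 0
would still pay for the function call, only to hit that first check and
return; that is the overhead the NAK refers to.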