On Mon, 6 Jan 2014 15:51:55 +0530 Raghavendra K T <raghavendra.kt@xxxxxxxxxxxxxxxxxx> wrote:

> Currently, max_sane_readahead returns zero on the cpu with empty numa node,
> fix this by checking for potential empty numa node case during calculation.
> We also limit the number of readahead pages to 4k.
>
> ...
>
> --- a/mm/readahead.c
> +++ b/mm/readahead.c
> @@ -237,14 +237,25 @@ int force_page_cache_readahead(struct address_space *mapping, struct file *filp,
>  	return ret;
>  }
>
> +#define MAX_REMOTE_READAHEAD 4096UL
>  /*
>   * Given a desired number of PAGE_CACHE_SIZE readahead pages, return a
>   * sensible upper limit.
>   */
>  unsigned long max_sane_readahead(unsigned long nr)
>  {
> -	return min(nr, (node_page_state(numa_node_id(), NR_INACTIVE_FILE)
> -		+ node_page_state(numa_node_id(), NR_FREE_PAGES)) / 2);
> +	unsigned long local_free_page;
> +	unsigned long sane_nr = min(nr, MAX_REMOTE_READAHEAD);
> +
> +	local_free_page = node_page_state(numa_node_id(), NR_INACTIVE_FILE)
> +			+ node_page_state(numa_node_id(), NR_FREE_PAGES);
> +
> +	/*
> +	 * Readahead onto remote memory is better than no readahead when local
> +	 * numa node does not have memory. We sanitize readahead size depending
> +	 * on free memory in the local node but limiting to 4k pages.
> +	 */
> +	return local_free_page ? min(sane_nr, local_free_page / 2) : sane_nr;
>  }

So if the local node has two free pages, we do just one page of readahead.
Then the local node has one free page and we do zero pages readahead.

Assuming that bug(!) is fixed, the local node now has zero free pages and
we suddenly resume doing large readahead.

This transition from large readahead to very small readahead then back to
large readahead is illogical, surely?
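
To make the jump concrete, here is a minimal userspace sketch (not kernel code,
and not part of the patch) that just reproduces the arithmetic of the proposed
max_sane_readahead() with a faked local_free_page value; the 512-page request
size is an assumed example, while the 4096-page cap mirrors
MAX_REMOTE_READAHEAD from the patch.

/*
 * Userspace mock-up of the proposed max_sane_readahead() arithmetic.
 * local_free_page is passed in instead of being read from node_page_state().
 */
#include <stdio.h>

#define MAX_REMOTE_READAHEAD 4096UL

static unsigned long min_ul(unsigned long a, unsigned long b)
{
	return a < b ? a : b;
}

/* Same expression as the patched function, with local_free_page faked. */
static unsigned long proposed_readahead(unsigned long nr,
					unsigned long local_free_page)
{
	unsigned long sane_nr = min_ul(nr, MAX_REMOTE_READAHEAD);

	return local_free_page ? min_ul(sane_nr, local_free_page / 2) : sane_nr;
}

int main(void)
{
	unsigned long nr = 512;	/* assumed readahead request, in pages */

	printf("%lu\n", proposed_readahead(nr, 2));	/* 2 free pages -> 1   */
	printf("%lu\n", proposed_readahead(nr, 1));	/* 1 free page  -> 0   */
	printf("%lu\n", proposed_readahead(nr, 0));	/* 0 free pages -> 512 */
	return 0;
}

Run as-is it prints 1, 0, 512: the readahead size shrinks to nothing as local
memory dwindles, then springs back to the full (capped) request the moment the
node is completely out of pages, which is the discontinuity described above.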