The patch titled
     Subject: mm: move end_index check out of readahead loop
has been removed from the -mm tree.  Its filename was
     mm-move-end_index-check-out-of-readahead-loop.patch

This patch was dropped because it was merged into mainline or a subsystem tree

------------------------------------------------------
From: "Matthew Wilcox (Oracle)" <willy@xxxxxxxxxxxxx>
Subject: mm: move end_index check out of readahead loop

By reducing nr_to_read, we can eliminate this check from inside the loop.

Link: http://lkml.kernel.org/r/20200414150233.24495-13-willy@xxxxxxxxxxxxx
Signed-off-by: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Reviewed-by: John Hubbard <jhubbard@xxxxxxxxxx>
Reviewed-by: William Kucharski <william.kucharski@xxxxxxxxxx>
Cc: Chao Yu <yuchao0@xxxxxxxxxx>
Cc: Christoph Hellwig <hch@xxxxxx>
Cc: Cong Wang <xiyou.wangcong@xxxxxxxxx>
Cc: Darrick J. Wong <darrick.wong@xxxxxxxxxx>
Cc: Dave Chinner <dchinner@xxxxxxxxxx>
Cc: Eric Biggers <ebiggers@xxxxxxxxxx>
Cc: Gao Xiang <gaoxiang25@xxxxxxxxxx>
Cc: Jaegeuk Kim <jaegeuk@xxxxxxxxxx>
Cc: Joseph Qi <joseph.qi@xxxxxxxxxxxxxxxxx>
Cc: Junxiao Bi <junxiao.bi@xxxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxx>
Cc: Zi Yan <ziy@xxxxxxxxxx>
Cc: Johannes Thumshirn <johannes.thumshirn@xxxxxxx>
Cc: Miklos Szeredi <mszeredi@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/readahead.c |   14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

--- a/mm/readahead.c~mm-move-end_index-check-out-of-readahead-loop
+++ a/mm/readahead.c
@@ -167,8 +167,6 @@ void __do_page_cache_readahead(struct ad
 		unsigned long lookahead_size)
 {
 	struct inode *inode = mapping->host;
-	struct page *page;
-	unsigned long end_index;	/* The last page we want to read */
 	LIST_HEAD(page_pool);
 	loff_t isize = i_size_read(inode);
 	gfp_t gfp_mask = readahead_gfp_mask(mapping);
@@ -178,22 +176,26 @@ void __do_page_cache_readahead(struct ad
 		._index = index,
 	};
 	unsigned long i;
+	pgoff_t end_index;	/* The last page we want to read */
 
 	if (isize == 0)
 		return;
 
-	end_index = ((isize - 1) >> PAGE_SHIFT);
+	end_index = (isize - 1) >> PAGE_SHIFT;
+	if (index > end_index)
+		return;
+	/* Don't read past the page containing the last byte of the file */
+	if (nr_to_read > end_index - index)
+		nr_to_read = end_index - index + 1;
 
 	/*
 	 * Preallocate as many pages as we will need.
 	 */
 	for (i = 0; i < nr_to_read; i++) {
-		if (index + i > end_index)
-			break;
+		struct page *page = xa_load(&mapping->i_pages, index + i);
 
 		BUG_ON(index + i != rac._index + rac._nr_pages);
 
-		page = xa_load(&mapping->i_pages, index + i);
 		if (page && !xa_is_value(page)) {
 			/*
 			 * Page already present? Kick off the current batch of
_

Patches currently in -mm which might be from willy@xxxxxxxxxxxxx are

mm-simplify-calling-a-compound-page-destructor.patch
ipc-convert-ipcs_idr-to-xarray.patch
ipc-convert-ipcs_idr-to-xarray-update.patch
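
As an aside for readers skimming the diff above: here is a minimal, self-contained sketch (plain userspace C; the read_pages_clamped() helper and the sample numbers are made up for illustration, not taken from the kernel) of the pattern the patch applies, clamping the iteration count once up front so the per-iteration end_index check can be dropped from the loop body.

#include <stdio.h>

/*
 * Illustrative sketch only (not kernel code): hoist a per-iteration
 * bound check out of a loop by clamping the iteration count before
 * the loop starts, the same idea the patch applies to nr_to_read in
 * __do_page_cache_readahead().
 */
static void read_pages_clamped(unsigned long index, unsigned long nr_to_read,
			       unsigned long end_index)
{
	unsigned long i;

	if (index > end_index)
		return;
	/* Don't iterate past end_index; the loop no longer has to check. */
	if (nr_to_read > end_index - index)
		nr_to_read = end_index - index + 1;

	for (i = 0; i < nr_to_read; i++)
		printf("reading page %lu\n", index + i);
}

int main(void)
{
	/* Ask for 16 pages starting at index 10 of a file ending at page 12. */
	read_pages_clamped(10, 16, 12);
	return 0;
}

With the sample call, only pages 10..12 are read even though 16 were requested, which is the clamping the patch performs on nr_to_read.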