The BUG_ON that checks whether the ractl is still in sync with the
local variables can trigger under some fairly unusual circumstances.
Remove the BUG_ON and resync the loop counter after every call to
read_pages(). One way I've seen to trigger it is:

- Start out with a partially populated range in the page cache
- Allocate some pages and run into an existing page
- Send the read request off to the filesystem
- The page we ran into is removed from the page cache
- readahead_expand() succeeds in expanding upwards
- Return to page_cache_ra_unbounded() and we hit the BUG_ON, as
  nr_pages has been adjusted upwards.

Reported-by: Jeff Layton <jlayton@xxxxxxxxxx>
Signed-off-by: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
---
 mm/readahead.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/readahead.c b/mm/readahead.c
index f02dbebf1cef..989a8e710100 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -198,8 +198,6 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
 	for (i = 0; i < nr_to_read; i++) {
 		struct page *page = xa_load(&mapping->i_pages, index + i);
 
-		BUG_ON(index + i != ractl->_index + ractl->_nr_pages);
-
 		if (page && !xa_is_value(page)) {
 			/*
 			 * Page already present? Kick off the current batch
@@ -210,6 +208,7 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
 			 * not worth getting one just for that.
 			 */
 			read_pages(ractl, &page_pool, true);
+			i = ractl->_index + ractl->_nr_pages - index;
 			continue;
 		}
 
@@ -223,6 +222,7 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
 				gfp_mask) < 0) {
 			put_page(page);
 			read_pages(ractl, &page_pool, true);
+			i = ractl->_index + ractl->_nr_pages - index;
 			continue;
 		}
 		if (i == nr_to_read - lookahead_size)
--
2.30.2
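
For context, the invariant the removed BUG_ON enforced was that the
loop's private cursor (index + i) stayed equal to the ractl's own view
of the window (ractl->_index + ractl->_nr_pages). Once a filesystem can
call readahead_expand() from inside its readahead callback, _nr_pages
can grow behind the loop's back, so the patch re-derives i from the
ractl after each read_pages() call instead of asserting. Below is a
minimal userspace sketch of that bookkeeping; struct ractl_model,
fs_readahead() and the page numbers are invented for illustration, not
kernel code, and details such as the skip_page accounting inside
read_pages() are deliberately omitted:

#include <stdio.h>

/*
 * Illustrative model of the two fields the loop bookkeeping depends
 * on; not the kernel's struct readahead_control.
 */
struct ractl_model {
	unsigned long _index;		/* first page in the current batch */
	unsigned long _nr_pages;	/* pages currently in the batch */
};

/*
 * Hypothetical filesystem callback: it consumes the batch, but first
 * grows it via a simulated readahead_expand(), as a netfs-backed
 * filesystem might do to round the request to cache-block boundaries.
 */
static void fs_readahead(struct ractl_model *ractl, unsigned long expand_by)
{
	ractl->_nr_pages += expand_by;		/* readahead_expand() upwards */
	ractl->_index += ractl->_nr_pages;	/* every page consumed for I/O */
	ractl->_nr_pages = 0;
}

int main(void)
{
	unsigned long index = 100;			/* window start */
	struct ractl_model ractl = { index, 4 };	/* batch: pages 100-103 */
	unsigned long i = 4;				/* loop counter, in sync */

	/* The invariant the removed BUG_ON enforced: both print 104. */
	printf("before: index+i=%lu ractl=%lu\n",
	       index + i, ractl._index + ractl._nr_pages);

	fs_readahead(&ractl, 2);	/* fs expanded the batch by 2 pages */

	/* i is now stale: pages 104 and 105 were already handled. */
	printf("stale:  index+i=%lu ractl=%lu\n",
	       index + i, ractl._index + ractl._nr_pages);

	/* The patch's resync: derive i from the ractl itself. */
	i = ractl._index + ractl._nr_pages - index;
	printf("resync: index+i=%lu ractl=%lu\n",
	       index + i, ractl._index + ractl._nr_pages);
	return 0;
}

The design point the patch reflects is that the ractl is the single
source of truth for how far readahead has progressed; a local cursor
must be recomputed from it whenever control returns from code that may
have moved it, rather than asserted against it.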