The patch titled
     Subject: readahead: fold try_context_readahead() into its single caller
has been added to the -mm mm-unstable branch.  Its filename is
     readahead-fold-try_context_readahead-into-its-single-caller.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/readahead-fold-try_context_readahead-into-its-single-caller.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Jan Kara <jack@xxxxxxx>
Subject: readahead: fold try_context_readahead() into its single caller
Date: Tue, 25 Jun 2024 12:18:59 +0200

try_context_readahead() has a single caller page_cache_sync_ra().  Fold
the function there to make ra state modifications more obvious.  No
functional changes.

Link: https://lkml.kernel.org/r/20240625101909.12234-9-jack@xxxxxxx
Signed-off-by: Jan Kara <jack@xxxxxxx>
Reviewed-by: Josef Bacik <josef@xxxxxxxxxxxxxx>
Cc: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/readahead.c |   84 ++++++++++++-----------------------------------
 1 file changed, 22 insertions(+), 62 deletions(-)

--- a/mm/readahead.c~readahead-fold-try_context_readahead-into-its-single-caller
+++ a/mm/readahead.c
@@ -410,58 +410,6 @@ static unsigned long get_next_ra_size(st
  * it approaches max_readhead.
  */
 
-/*
- * Count contiguously cached pages from @index-1 to @index-@max,
- * this count is a conservative estimation of
- *	- length of the sequential read sequence, or
- *	- thrashing threshold in memory tight systems
- */
-static pgoff_t count_history_pages(struct address_space *mapping,
-				   pgoff_t index, unsigned long max)
-{
-	pgoff_t head;
-
-	rcu_read_lock();
-	head = page_cache_prev_miss(mapping, index - 1, max);
-	rcu_read_unlock();
-
-	return index - 1 - head;
-}
-
-/*
- * page cache context based readahead
- */
-static int try_context_readahead(struct address_space *mapping,
-				 struct file_ra_state *ra,
-				 pgoff_t index,
-				 unsigned long req_size,
-				 unsigned long max)
-{
-	pgoff_t size;
-
-	size = count_history_pages(mapping, index, max);
-
-	/*
-	 * not enough history pages:
-	 * it could be a random read
-	 */
-	if (size <= req_size)
-		return 0;
-
-	/*
-	 * starts from beginning of file:
-	 * it is a strong indication of long-run stream (or whole-file-read)
-	 */
-	if (size >= index)
-		size *= 2;
-
-	ra->start = index;
-	ra->size = min(size + req_size, max);
-	ra->async_size = 1;
-
-	return 1;
-}
-
 static inline int ra_alloc_folio(struct readahead_control *ractl, pgoff_t index,
 		pgoff_t mark, unsigned int order, gfp_t gfp)
 {
@@ -561,8 +509,8 @@ void page_cache_sync_ra(struct readahead
 	pgoff_t index = readahead_index(ractl);
 	bool do_forced_ra = ractl->file && (ractl->file->f_mode & FMODE_RANDOM);
 	struct file_ra_state *ra = ractl->ra;
-	unsigned long max_pages;
-	pgoff_t prev_index;
+	unsigned long max_pages, contig_count;
+	pgoff_t prev_index, miss;
 
 	/*
 	 * Even if readahead is disabled, issue this request as readahead
@@ -603,16 +551,28 @@ void page_cache_sync_ra(struct readahead
 	 * Query the page cache and look for the traces(cached history pages)
 	 * that a sequential stream would leave behind.
 	 */
-	if (try_context_readahead(ractl->mapping, ra, index, req_count,
-				  max_pages))
-		goto readit;
-
+	rcu_read_lock();
+	miss = page_cache_prev_miss(ractl->mapping, index - 1, max_pages);
+	rcu_read_unlock();
+	contig_count = index - miss - 1;
 	/*
-	 * standalone, small random read
-	 * Read as is, and do not pollute the readahead state.
+	 * Standalone, small random read. Read as is, and do not pollute the
+	 * readahead state.
 	 */
-	do_page_cache_ra(ractl, req_count, 0);
-	return;
+	if (contig_count <= req_count) {
+		do_page_cache_ra(ractl, req_count, 0);
+		return;
+	}
+	/*
+	 * File cached from the beginning:
+	 * it is a strong indication of long-run stream (or whole-file-read)
+	 */
+	if (miss == ULONG_MAX)
+		contig_count *= 2;
+	ra->start = index;
+	ra->size = min(contig_count + req_count, max_pages);
+	ra->async_size = 1;
+	goto readit;
 
 initial_readahead:
 	ra->start = index;
_

Patches currently in -mm which might be from jack@xxxxxxx are

revert-mm-writeback-fix-possible-divide-by-zero-in-wb_dirty_limits-again.patch
mm-avoid-overflows-in-dirty-throttling-logic.patch
readahead-make-sure-sync-readahead-reads-needed-page.patch
filemap-fix-page_cache_next_miss-when-no-hole-found.patch
readahead-properly-shorten-readahead-when-falling-back-to-do_page_cache_ra.patch
readahead-drop-pointless-index-from-force_page_cache_ra.patch
readahead-drop-index-argument-of-page_cache_async_readahead.patch
readahead-drop-dead-code-in-page_cache_ra_order.patch
readahead-drop-dead-code-in-ondemand_readahead.patch
readahead-disentangle-async-and-sync-readahead.patch
readahead-fold-try_context_readahead-into-its-single-caller.patch
readahead-simplify-gotos-in-page_cache_sync_ra.patch
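
The heuristic that survives the fold is easier to see in one piece than spread
across the hunks above.  Below is a minimal, userspace-compilable sketch of
that logic, not the kernel code itself: struct ra_sketch, min_ul() and the
prev_miss callback are hypothetical stand-ins for struct file_ra_state, min()
and page_cache_prev_miss(), and the RCU locking, folio allocation and the
do_page_cache_ra() fallback are omitted.

/*
 * Sketch of the context readahead heuristic that page_cache_sync_ra()
 * carries inline after this patch.  prev_miss() is expected to return
 * the index of the closest uncached page at or before its first
 * argument (looking back at most max_pages pages), or ULONG_MAX if the
 * cache is populated all the way back to index 0.
 */
#include <limits.h>

struct ra_sketch {
	unsigned long start;
	unsigned long size;
	unsigned long async_size;
};

static unsigned long min_ul(unsigned long a, unsigned long b)
{
	return a < b ? a : b;
}

/* Returns 1 if readahead state was set up, 0 for a plain small read. */
static int context_ra_sketch(struct ra_sketch *ra,
			     unsigned long (*prev_miss)(unsigned long, unsigned long),
			     unsigned long index, unsigned long req_count,
			     unsigned long max_pages)
{
	unsigned long miss = prev_miss(index - 1, max_pages);
	unsigned long contig_count = index - miss - 1;

	/* Standalone, small random read: do not pollute readahead state. */
	if (contig_count <= req_count)
		return 0;

	/*
	 * File cached from the beginning: a strong indication of a
	 * long-run stream (or whole-file read), so be more aggressive.
	 */
	if (miss == ULONG_MAX)
		contig_count *= 2;

	ra->start = index;
	ra->size = min_ul(contig_count + req_count, max_pages);
	ra->async_size = 1;
	return 1;
}

As in the patch, a contiguous cached run no longer than the request is treated
as a standalone random read and kept out of the readahead state, while a run
reaching back to the start of the file doubles the estimate before it is
clamped to max_pages.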