The patch titled

     readahead: seeking reads method

has been added to the -mm tree.  Its filename is

     readahead-seeking-reads-method.patch

See http://www.zip.com.au/~akpm/linux/patches/stuff/added-to-mm.txt
to find out what to do about this

------------------------------------------------------
Subject: readahead: seeking reads method
From: Wu Fengguang <wfg@xxxxxxxxxxxxxxxx>

Readahead policy for reads issued after a seek.  It tries to detect
sequences like:

	seek(), 5*read(); seek(), 6*read(); seek(), 4*read(); ...

Signed-off-by: Wu Fengguang <wfg@xxxxxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxx>
---

 mm/readahead.c |   43 +++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 43 insertions(+)

diff -puN mm/readahead.c~readahead-seeking-reads-method mm/readahead.c
--- 25/mm/readahead.c~readahead-seeking-reads-method	Wed May 24 16:50:27 2006
+++ 25-akpm/mm/readahead.c	Wed May 24 16:50:27 2006
@@ -1613,6 +1613,49 @@ try_read_backward(struct file_ra_state *
 }
 
 /*
+ * If there is a previous sequential read, it is likely to be another
+ * sequential read at the new position.
+ *
+ * i.e. detect the following sequences:
+ *	seek(), 5*read(); seek(), 6*read(); seek(), 4*read(); ...
+ *
+ * Databases are known to have this seek-and-read-N-pages pattern.
+ */
+static int
+try_readahead_on_seek(struct file_ra_state *ra, pgoff_t index,
+			unsigned long ra_size, unsigned long ra_max)
+{
+	unsigned long hit0 = ra_cache_hit(ra, 0);
+	unsigned long hit1 = ra_cache_hit(ra, 1) + hit0;
+	unsigned long hit2 = ra_cache_hit(ra, 2);
+	unsigned long hit3 = ra_cache_hit(ra, 3);
+
+	/* There's a previous read-ahead request? */
+	if (!ra_has_index(ra, ra->prev_page))
+		return 0;
+
+	/* The previous read-ahead sequences have similar sizes? */
+	if (!(ra_size < hit1 && hit1 > hit2 / 2 &&
+			       hit2 > hit3 / 2 &&
+			       hit3 > hit1 / 2))
+		return 0;
+
+	hit1 = max(hit1, hit2);
+
+	/* Follow the same prefetching direction. */
+	if ((ra->flags & RA_CLASS_MASK) == RA_CLASS_BACKWARD)
+		index = (index > hit1 - ra_size) ? index - hit1 + ra_size : 0;
+
+	ra_size = min(hit1, ra_max);
+
+	ra_set_class(ra, RA_CLASS_SEEK);
+	ra_set_index(ra, index, index);
+	ra_set_size(ra, ra_size, 0);
+
+	return 1;
+}
+
+/*
  * ra_min is mainly determined by the size of cache memory. Reasonable?
  *
  * Table of concrete numbers for 4KB page size:
_

Patches currently in -mm which might be from wfg@xxxxxxxxxxxxxxxx are

readahead-kconfig-options.patch
radixtree-look-aside-cache.patch
radixtree-hole-scanning-functions.patch
readahead-page-flag-pg_readahead.patch
readahead-refactor-do_generic_mapping_read.patch
readahead-refactor-__do_page_cache_readahead.patch
readahead-insert-cond_resched-calls.patch
readahead-common-macros.patch
readahead-events-accounting.patch
readahead-support-functions.patch
readahead-sysctl-parameters.patch
readahead-min-max-sizes.patch
readahead-state-based-method-aging-accounting.patch
readahead-state-based-method-data-structure.patch
readahead-state-based-method-routines.patch
readahead-state-based-method.patch
readahead-context-based-method.patch
readahead-initial-method-guiding-sizes.patch
readahead-initial-method-thrashing-guard-size.patch
readahead-initial-method-expected-read-size.patch
readahead-initial-method-user-recommended-size.patch
readahead-initial-method.patch
readahead-backward-prefetching-method.patch
readahead-seeking-reads-method.patch
readahead-thrashing-recovery-method.patch
readahead-call-scheme.patch
readahead-laptop-mode.patch
readahead-loop-case.patch
readahead-nfsd-case.patch
readahead-turn-on-by-default.patch
readahead-debug-radix-tree-new-functions.patch
readahead-debug-traces-showing-accessed-file-names.patch
readahead-debug-traces-showing-read-patterns.patch