On Mon, Feb 17, 2020 at 10:45:44AM -0800, Matthew Wilcox wrote:
> From: "Matthew Wilcox (Oracle)" <willy@xxxxxxxxxxxxx>
> 
> In this patch, only between __do_page_cache_readahead() and
> read_pages(), but it will be extended in upcoming patches. Also add
> the readahead_count() accessor.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
> ---
>  include/linux/pagemap.h | 17 +++++++++++++++++
>  mm/readahead.c          | 36 +++++++++++++++++++++---------------
>  2 files changed, 38 insertions(+), 15 deletions(-)
> 
> diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
> index ccb14b6a16b5..982ecda2d4a2 100644
> --- a/include/linux/pagemap.h
> +++ b/include/linux/pagemap.h
> @@ -630,6 +630,23 @@ static inline int add_to_page_cache(struct page *page,
>  	return error;
>  }
>  
> +/*
> + * Readahead is of a block of consecutive pages.
> + */
> +struct readahead_control {
> +	struct file *file;
> +	struct address_space *mapping;
> +/* private: use the readahead_* accessors instead */
> +	pgoff_t _start;
> +	unsigned int _nr_pages;
> +};
> +
> +/* The number of pages in this readahead block */
> +static inline unsigned int readahead_count(struct readahead_control *rac)
> +{
> +	return rac->_nr_pages;
> +}
> +
>  static inline unsigned long dir_pages(struct inode *inode)
>  {
>  	return (unsigned long)(inode->i_size + PAGE_SIZE - 1) >>
> diff --git a/mm/readahead.c b/mm/readahead.c
> index 12d13b7792da..15329309231f 100644
> --- a/mm/readahead.c
> +++ b/mm/readahead.c
> @@ -113,26 +113,29 @@ int read_cache_pages(struct address_space *mapping, struct list_head *pages,
>  
>  EXPORT_SYMBOL(read_cache_pages);
>  
> -static void read_pages(struct address_space *mapping, struct file *filp,
> -		struct list_head *pages, unsigned int nr_pages, gfp_t gfp)
> +static void read_pages(struct readahead_control *rac, struct list_head *pages,
> +		gfp_t gfp)
>  {
> +	const struct address_space_operations *aops = rac->mapping->a_ops;
>  	struct blk_plug plug;
>  	unsigned page_idx;

Splitting out the aops rather than the mapping here just looks weird,
especially as you need the mapping later in the function. Using aops
doesn't even reduce the code size....

>  
>  	blk_start_plug(&plug);
>  
> -	if (mapping->a_ops->readpages) {
> -		mapping->a_ops->readpages(filp, mapping, pages, nr_pages);
> +	if (aops->readpages) {
> +		aops->readpages(rac->file, rac->mapping, pages,
> +				readahead_count(rac));
>  		/* Clean up the remaining pages */
>  		put_pages_list(pages);
>  		goto out;
>  	}
>  
> -	for (page_idx = 0; page_idx < nr_pages; page_idx++) {
> +	for (page_idx = 0; page_idx < readahead_count(rac); page_idx++) {
>  		struct page *page = lru_to_page(pages);
>  		list_del(&page->lru);
> -		if (!add_to_page_cache_lru(page, mapping, page->index, gfp))
> -			mapping->a_ops->readpage(filp, page);
> +		if (!add_to_page_cache_lru(page, rac->mapping, page->index,
> +				gfp))
> +			aops->readpage(rac->file, page);

... it just makes this less readable by splitting the if() over two
lines...

>  		put_page(page);
>  	}
>  
> @@ -155,9 +158,13 @@ void __do_page_cache_readahead(struct address_space *mapping,
>  	unsigned long end_index;	/* The last page we want to read */
>  	LIST_HEAD(page_pool);
>  	int page_idx;
> -	unsigned int nr_pages = 0;
>  	loff_t isize = i_size_read(inode);
>  	gfp_t gfp_mask = readahead_gfp_mask(mapping);
> +	struct readahead_control rac = {
> +		.mapping = mapping,
> +		.file = filp,
> +		._nr_pages = 0,
> +	};

No need to initialise _nr_pages to zero, leaving it out will do the
same thing.
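To illustrate that point, here's a quick userspace-compilable stand-in
(deliberately not the kernel struct, names shortened): members omitted
from a designated initializer are implicitly zero-initialized, so the
explicit ._nr_pages = 0 buys nothing.

#include <assert.h>
#include <stddef.h>

/* Cut-down stand-in for struct readahead_control, demonstration only. */
struct rac_demo {
	void *file;
	void *mapping;
	unsigned long start;
	unsigned int nr_pages;
};

int main(void)
{
	/*
	 * Members not named in a designated initializer are initialized
	 * as if the object had static storage duration, i.e. to zero
	 * (C99 6.7.8p21), so start and nr_pages are already 0 here.
	 */
	struct rac_demo rac = {
		.mapping = (void *)&rac,
		.file = NULL,
	};

	assert(rac.start == 0);
	assert(rac.nr_pages == 0);
	return 0;
}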
> 
>  	if (isize == 0)
>  		return;
> 
> @@ -180,10 +187,9 @@ void __do_page_cache_readahead(struct address_space *mapping,
>  		 * contiguous pages before continuing with the next
>  		 * batch.
>  		 */
> -		if (nr_pages)
> -			read_pages(mapping, filp, &page_pool, nr_pages,
> -					gfp_mask);
> -		nr_pages = 0;
> +		if (readahead_count(&rac))
> +			read_pages(&rac, &page_pool, gfp_mask);
> +		rac._nr_pages = 0;

Hmmm. Wondering if it makes sense to move the gfp_mask to the readahead
control structure - if we have to pass the gfp_mask down all the way
alongside the rac, then I think it makes sense to do that...

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
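For illustration of that last suggestion, a rough sketch of one shape it
could take, assuming the caller stashes the gfp mask in the
readahead_control at setup time (e.g. .gfp = readahead_gfp_mask(mapping)
next to .mapping and .file). This is not from the posted series, just a
hypothetical rework of read_pages() against the struct shown above:

/* Hypothetical: gfp carried in the control structure. */
struct readahead_control {
	struct file *file;
	struct address_space *mapping;
	gfp_t gfp;		/* set once by the caller */
/* private: use the readahead_* accessors instead */
	pgoff_t _start;
	unsigned int _nr_pages;
};

/* read_pages() would then lose its separate gfp argument entirely. */
static void read_pages(struct readahead_control *rac, struct list_head *pages)
{
	const struct address_space_operations *aops = rac->mapping->a_ops;
	struct blk_plug plug;
	unsigned page_idx;

	blk_start_plug(&plug);

	if (aops->readpages) {
		aops->readpages(rac->file, rac->mapping, pages,
				readahead_count(rac));
		/* Clean up the remaining pages */
		put_pages_list(pages);
		goto out;
	}

	for (page_idx = 0; page_idx < readahead_count(rac); page_idx++) {
		struct page *page = lru_to_page(pages);
		list_del(&page->lru);
		if (!add_to_page_cache_lru(page, rac->mapping, page->index,
					   rac->gfp))
			aops->readpage(rac->file, page);
		put_page(page);
	}

out:
	blk_finish_plug(&plug);
}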