Re: [PATCH v7 04/11] readahead: allocate folios with mapping_min_order in readahead


On Wed, Jun 12, 2024 at 07:50:49PM +0100, Matthew Wilcox wrote:
> On Fri, Jun 07, 2024 at 02:58:55PM +0000, Pankaj Raghav (Samsung) wrote:
> > @@ -230,7 +247,9 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
> >  		struct folio *folio = xa_load(&mapping->i_pages, index + i);
> >  		int ret;
> >  
> > +
> 
> Spurious newline
Oops.
> 
> >  		if (folio && !xa_is_value(folio)) {
> > +			long nr_pages = folio_nr_pages(folio);
> 
> Hm, but we don't have a reference on this folio.  So this isn't safe.
> 

That is why I added the check against folio->mapping after read_pages().
You are right, though, we can make it better.

> > @@ -240,12 +259,24 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
> >  			 * not worth getting one just for that.
> >  			 */
> >  			read_pages(ractl);
> > -			ractl->_index += folio_nr_pages(folio);
> > +
> > +			/*
> > +			 * Move the ractl->_index by at least min_pages
> > +			 * if the folio got truncated to respect the
> > +			 * alignment constraint in the page cache.
> > +			 *
> > +			 */
> > +			if (mapping != folio->mapping)
> > +				nr_pages = min_nrpages;
> > +
> > +			VM_BUG_ON_FOLIO(nr_pages < min_nrpages, folio);
> > +			ractl->_index += nr_pages;
> 
> Why not just:
> 			ractl->_index += min_nrpages;

Then we would only move by min_nrpages even if the folio we found had a
bigger order. Hannes' patch (the first one in this series) made sure we
move ractl->_index by folio_nr_pages() instead of 1, and making this
change would defeat that purpose, because without a mapping min order
set, min_nrpages will be 1.
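
To make the difference concrete, here is a small userspace sketch (not
kernel code; the values are made up purely for illustration) of how far
the index moves in each case when no mapping min order is set:

#include <stdio.h>

int main(void)
{
	long min_nrpages = 1;		/* no mapping min order set */
	long folio_nr_pages = 4;	/* order-2 folio found in the cache */
	long index = 0;

	/* Advancing by min_nrpages lands inside the same folio again. */
	printf("min_nrpages advance:    %ld -> %ld\n",
	       index, index + min_nrpages);

	/* Advancing by folio_nr_pages skips past the whole folio,
	 * which is what the first patch in the series ensures. */
	printf("folio_nr_pages advance: %ld -> %ld\n",
	       index, index + folio_nr_pages);
	return 0;
}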

What I could do is the following:

diff --git a/mm/readahead.c b/mm/readahead.c
index 389cd802da63..92cf45cdb4d3 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -249,7 +249,7 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
 
 
                if (folio && !xa_is_value(folio)) {
-                       long nr_pages = folio_nr_pages(folio);
+                       long nr_pages;
                        /*
                         * Page already present?  Kick off the current batch
                         * of contiguous pages before continuing with the
@@ -266,10 +266,8 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
                         * alignment constraint in the page cache.
                         *
                         */
-                       if (mapping != folio->mapping)
-                               nr_pages = min_nrpages;
+                       nr_pages = max(folio_nr_pages(folio), (long)min_nrpages);
 
-                       VM_BUG_ON_FOLIO(nr_pages < min_nrpages, folio);
                        ractl->_index += nr_pages;
                        i = ractl->_index + ractl->_nr_pages - index;
                        continue;

Now we will still move while respecting the min order constraint, but if
we had a bigger folio and we do have a reference on it, we move by
folio_nr_pages().
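
As a rough userspace sketch of the max() behaviour (values picked only
for illustration, assuming a mapping min order of 2, i.e.
min_nrpages == 4):

#include <stdio.h>

#define max(a, b) ((a) > (b) ? (a) : (b))

int main(void)
{
	long min_nrpages = 4;	/* assumed: mapping min order of 2 */

	/* Folio found was order-3 (8 pages): advance by the folio size. */
	printf("order-3 folio: advance by %ld\n", max(8L, min_nrpages));

	/* Folio was truncated/replaced by an order-0 one: still advance
	 * by min_nrpages so the index stays aligned to the min order. */
	printf("order-0 folio: advance by %ld\n", max(1L, min_nrpages));
	return 0;
}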
> 
> like you do below?
Below we add a folio of min_order, so if that fails for some reason, we
can unconditionally move by min_nrpages.
