Hey David,

Thanks for taking the time to review!

On Tue, Feb 27, 2024 at 3:30 PM David Hildenbrand <david@xxxxxxxxxx> wrote:
>
> On 27.02.24 08:04, Lance Yang wrote:
> > Previously, in folio_pte_batch(), only the upper boundary of the
> > folio was checked using '>=' for comparison. This led to
> > incorrect behavior when the next PFN exceeded the lower boundary
> > of the folio, especially in corner cases where the next PFN might
> > fall into a different folio.
>
> Which commit does this fix?
>
> The introducing commit (f8d937761d65c87e9987b88ea7beb7bddc333a0e) is
> already in mm-stable, so we would need a Fixes: tag. Unless Ryan's
> changes introduced a problem.
>
> BUT
>
> I don't see what is broken. :)
>
> Can you please give an example/reproducer?

Example 1:

PTE0 is present for large folio1.
PTE1 is present for large folio1.
PTE2 is present for large folio1.
PTE3 is present for large folio1.

folio_nr_pages(folio1) is 4.
folio_nr_pages(folio2) is 4.

pte = *start_ptep = PTE0;
max_nr = folio_nr_pages(folio2);

If folio_pfn(folio1) < folio_pfn(folio2), the return value of
folio_pte_batch(folio2, start_ptep, pte, max_nr) will be 4
(actually, it should be 0).

Example 2:

PTE0 is present for large folio2.
PTE1 is present for large folio1.
PTE2 is present for large folio1.
PTE3 is present for large folio1.

folio_nr_pages(folio1) is 4.
folio_nr_pages(folio2) is 4.

pte = *start_ptep = PTE0;
max_nr = folio_nr_pages(folio1);

If max_nr = 4, the return value of
folio_pte_batch(folio1, start_ptep, pte, max_nr) will be 1
(actually, it should be 0).

folio_pte_batch() will soon be exported, and IMO, these corner cases
may need to be handled.

Thanks,
Lance

>
> We know that the first PTE maps the folio. By incrementing the PFN using
> pte_next_pfn/pte_advance_pfn, we cannot suddenly get a lower PFN.
>
> So how would pte_advance_pfn(folio_start_pfn + X) suddenly give us a PFN
> lower than folio_start_pfn?
>
> Note that we are not really concerned about any kind of
> pte_advance_pfn() overflow that could generate PFN=0. I convinced myself
> that that is something we don't have to worry about.
>
>
> [I also thought about getting rid of the pte_pfn(pte) >= folio_end_pfn
> check and instead limiting end_ptep. But that requires more work before
> the loop and feels more like a micro-optimization.]
>
> >
> > Signed-off-by: Lance Yang <ioworker0@xxxxxxxxx>
> > ---
> >   mm/memory.c | 7 +++++--
> >   1 file changed, 5 insertions(+), 2 deletions(-)
> >
> > diff --git a/mm/memory.c b/mm/memory.c
> > index 642b4f2be523..e5291d1e8c37 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -986,12 +986,15 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
> >               pte_t *start_ptep, pte_t pte, int max_nr, fpb_t flags,
> >               bool *any_writable)
> >   {
> > -     unsigned long folio_end_pfn = folio_pfn(folio) + folio_nr_pages(folio);
> > +     unsigned long folio_start_pfn, folio_end_pfn;
> >       const pte_t *end_ptep = start_ptep + max_nr;
> >       pte_t expected_pte, *ptep;
> >       bool writable;
> >       int nr;
> >
> > +     folio_start_pfn = folio_pfn(folio);
> > +     folio_end_pfn = folio_start_pfn + folio_nr_pages(folio);
> > +
> >       if (any_writable)
> >               *any_writable = false;
> >
> > @@ -1015,7 +1018,7 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
> >                * corner cases the next PFN might fall into a different
> >                * folio.
> >                */
> > -             if (pte_pfn(pte) >= folio_end_pfn)
> > +             if (pte_pfn(pte) >= folio_end_pfn || pte_pfn(pte) < folio_start_pfn)
> >                       break;
> >
> >               if (any_writable)
>
> --
> Cheers,
>
> David / dhildenb
>
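To make the two examples concrete, below is a minimal standalone C sketch
(a userspace model, not the kernel code) of the batching loop. The helper
batch_pfns() and the concrete PFN layouts are hypothetical: PTEs are
reduced to bare PFNs, pte_same() is approximated by a consecutive-PFN
check, and the first entry is counted unconditionally, mirroring
folio_pte_batch()'s assumption that the caller passes a PTE that actually
maps the folio.

/* batch_pfns.c - hypothetical standalone model of the batching loop */
#include <stdbool.h>
#include <stdio.h>

/*
 * Count how many entries of pfns[] would be batched for a folio
 * spanning [folio_start_pfn, folio_end_pfn). The first entry is
 * counted unconditionally, like in folio_pte_batch(); pte_same()
 * is reduced to a consecutive-PFN check.
 */
static int batch_pfns(const unsigned long *pfns, int max_nr,
		      unsigned long folio_start_pfn,
		      unsigned long folio_end_pfn, bool check_lower_bound)
{
	unsigned long expected = pfns[0] + 1;
	int nr;

	for (nr = 1; nr < max_nr; nr++, expected++) {
		if (pfns[nr] != expected)	/* stands in for pte_same() */
			break;
		/* upper-bound check present in the current code */
		if (pfns[nr] >= folio_end_pfn)
			break;
		/* lower-bound check added by the patch */
		if (check_lower_bound && pfns[nr] < folio_start_pfn)
			break;
	}
	return nr;
}

int main(void)
{
	/* example 1: folio1 covers PFNs 0..3, folio2 covers PFNs 4..7;
	 * PTE0..PTE3 all map folio1, but we batch against folio2. */
	unsigned long example1[] = { 0, 1, 2, 3 };
	/* example 2: folio2 covers PFNs 0..3, folio1 covers PFNs 4..7;
	 * PTE0 maps folio2, PTE1..PTE3 map folio1; we batch against folio1. */
	unsigned long example2[] = { 0, 4, 5, 6 };

	printf("example1, upper check only: %d\n",
	       batch_pfns(example1, 4, 4, 8, false));	/* 4 */
	printf("example1, both checks:      %d\n",
	       batch_pfns(example1, 4, 4, 8, true));	/* 1 */
	printf("example2, either variant:   %d\n",
	       batch_pfns(example2, 4, 4, 8, true));	/* 1 */
	return 0;
}

Note that even with the lower-bound check, example 1 yields 1 rather than
the 0 Lance considers ideal, because the first PTE is never re-validated
against the folio; that unconditional trust in the first entry is exactly
the caller contract David's reply relies on.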