On Mon, Sep 12, 2022 at 12:45:59PM -0700, Andrew Morton wrote:
> On Mon, 12 Sep 2022 00:55:08 -0600 Yu Zhao <yuzhao@xxxxxxxxxx> wrote:
> 
> > 
> > The following should work properly. Please take a look. Thanks!
> > 
> > ---
> >  mm/vmscan.c | 12 +++---------
> >  1 file changed, 3 insertions(+), 9 deletions(-)
> > 
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index 11a86d47e85e..b22d3efe3031 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -3776,23 +3776,17 @@ static bool get_next_vma(unsigned long mask, unsigned long size, struct mm_walk
> >  {
> >  	unsigned long start = round_up(*vm_end, size);
> >  	unsigned long end = (start | ~mask) + 1;
> > +	VMA_ITERATOR(vmi, args->mm, start);
> >  
> >  	VM_WARN_ON_ONCE(mask & size);
> >  	VM_WARN_ON_ONCE((start & mask) != (*vm_start & mask));
> >  
> > -	while (args->vma) {
> > -		if (start >= args->vma->vm_end) {
> > -			args->vma = args->vma->vm_next;
> > -			continue;
> > -		}
> > -
> > +	for_each_vma(vmi, args->vma) {
> >  		if (end && end <= args->vma->vm_start)
> >  			return false;
> >  
> > -		if (should_skip_vma(args->vma->vm_start, args->vma->vm_end, args)) {
> > -			args->vma = args->vma->vm_next;
> > +		if (should_skip_vma(args->vma->vm_start, args->vma->vm_end, args))
> >  			continue;
> > -		}
> >  
> >  		*vm_start = max(start, args->vma->vm_start);
> >  		*vm_end = min(end - 1, args->vma->vm_end - 1) + 1;
> 
> What does this apply to?

The above replaces the original patch in mm-unstable.

> It's almost what is in mm-unstable/linux-next
> at present?

Yes, almost.
> static bool get_next_vma(unsigned long mask, unsigned long size, struct mm_walk *args,
> 			 unsigned long *vm_start, unsigned long *vm_end)
> {
> 	unsigned long start = round_up(*vm_end, size);
> 	unsigned long end = (start | ~mask) + 1;
> 	VMA_ITERATOR(vmi, args->mm, start);
> 
> 	VM_WARN_ON_ONCE(mask & size);
> 	VM_WARN_ON_ONCE((start & mask) != (*vm_start & mask));
> 
> 	for_each_vma_range(vmi, args->vma, end) {
> 		if (should_skip_vma(args->vma->vm_start, args->vma->vm_end, args))
> 			continue;
> 
> 		*vm_start = max(start, args->vma->vm_start);
> 		*vm_end = min(end - 1, args->vma->vm_end - 1) + 1;
> 
> 		return true;
> 	}
> 
> 	return false;
> }
> 
> for_each_vma_range versus for_each_vma.

The diff between the original patch and this one, in case you prefer to
fix it atop rather than amend:

diff --git a/mm/vmscan.c b/mm/vmscan.c
index a7c5d15c1618..cadcc3290918 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3776,7 +3776,10 @@ static bool get_next_vma(unsigned long mask, unsigned long size, struct mm_walk
 	VM_WARN_ON_ONCE(mask & size);
 	VM_WARN_ON_ONCE((start & mask) != (*vm_start & mask));
 
-	for_each_vma_range(vmi, args->vma, end) {
+	for_each_vma(vmi, args->vma) {
+		if (end && end <= args->vma->vm_start)
+			return false;
+
 		if (should_skip_vma(args->vma->vm_start, args->vma->vm_end, args))
 			continue;