On Fri, Sep 15, 2023 at 7:44 PM Hugh Dickins <hughd@xxxxxxxxxx> wrote:
>
> On Fri, 15 Sep 2023, Suren Baghdasaryan wrote:
> > On Fri, Sep 15, 2023 at 9:09 AM Suren Baghdasaryan <surenb@xxxxxxxxxx> wrote:
> > >
> > > Thanks for the feedback, Hugh!
> > > Yeah, this positive err handling is kinda weird. If this behavior (do
> > > as much as possible even if we fail eventually) is specific to mbind()
> > > then we could keep walk_page_range() as is and lock the VMAs inside
> > > the loop that calls mbind_range() with a condition that ret is
> > > positive. That would be the simplest solution IMHO. But if we expect
> > > walk_page_range() to always apply requested page_walk_lock policy to
> > > all VMAs even if some mm_walk_ops returns a positive error somewhere
> > > in the middle of the walk then my fix would work for that. So, to me
> > > the important question is how we want walk_page_range() to behave in
> > > these conditions. I think we should answer that first and document
> > > that. Then the fix will be easy.
> >
> > I looked at all the cases where we perform page walk while locking
> > VMAs and mbind() seems to be the only one that would require
> > walk_page_range() to lock all VMAs even for a failed walk.
>
> Yes, I can well believe that.
>
> > So, I suggest this fix instead and I can also document that if
> > walk_page_range() fails it might not apply page_walk_lock policy to
> > the VMAs.
> >
> > diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> > index 42b5567e3773..cbc584e9b6ca 100644
> > --- a/mm/mempolicy.c
> > +++ b/mm/mempolicy.c
> > @@ -1342,6 +1342,9 @@ static long do_mbind(unsigned long start, unsigned long len,
> >                 vma_iter_init(&vmi, mm, start);
> >                 prev = vma_prev(&vmi);
> >                 for_each_vma_range(vmi, vma, end) {
> > +                       /* If queue_pages_range failed then not all VMAs might be locked */
> > +                       if (ret)
> > +                               vma_start_write(vma);
> >                         err = mbind_range(&vmi, vma, &prev, start, end, new);
> >                         if (err)
> >                                 break;
> >
> > If this looks good I'll post the patch. Matthew, Hugh, anyone else?
>
> Yes, I do prefer this, to adding those pos ret mods into the generic
> pagewalk. The "if (ret)" above being just a minor optimization, that
> I would probably not have bothered with (does it even save any atomics?)
> - but I guess it helps as documentation.
>
> I think it's quite likely that mbind() will be changed sooner or later
> not to need this; but it's much the best to fix this vma locking issue
> urgently as above, without depending on any mbind() behavioral discussions.

I posted this patch at
https://lore.kernel.org/all/20230918211608.3580629-1-surenb@xxxxxxxxxx/
to fix the immediate problem.
Thanks!

>
> Thanks,
> Hugh
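
For anyone skimming the thread, here is a rough, self-contained userspace
sketch of the pattern the patch relies on (the toy_vma / toy_walk /
toy_vma_start_write names are illustrative stand-ins, not the real mm code):
a walk that stops early with a positive return has only write-locked the
VMAs it actually visited, so the loop that later modifies the range has to
write-lock the remaining VMAs itself, and only needs to when ret is non-zero.

/*
 * Toy model (hypothetical names, not kernel code) of the locking fixup:
 * the "walk" may bail out with a positive return before it has
 * write-locked every VMA in the range, so the modification loop
 * write-locks whatever the walk did not reach.
 */
#include <stdbool.h>
#include <stdio.h>

#define NR_VMAS 4

struct toy_vma {
	int  id;
	bool write_locked;
};

/* Stand-in for vma_start_write(): idempotent write lock. */
static void toy_vma_start_write(struct toy_vma *vma)
{
	if (!vma->write_locked) {
		vma->write_locked = true;
		printf("vma %d: write-locked\n", vma->id);
	}
}

/*
 * Stand-in for queue_pages_range()/walk_page_range(): locks VMAs as it
 * goes, but stops at @stop_at with a positive return, leaving the
 * remaining VMAs unlocked -- the situation the patch handles.
 */
static int toy_walk(struct toy_vma *vmas, int n, int stop_at)
{
	for (int i = 0; i < n; i++) {
		if (i == stop_at)
			return 1;	/* positive "error": walk incomplete */
		toy_vma_start_write(&vmas[i]);
	}
	return 0;
}

int main(void)
{
	struct toy_vma vmas[NR_VMAS] = {
		{ .id = 0 }, { .id = 1 }, { .id = 2 }, { .id = 3 },
	};
	int ret = toy_walk(vmas, NR_VMAS, 2);	/* stop after two VMAs */

	/* Mirrors the do_mbind() loop in the patch quoted above. */
	for (int i = 0; i < NR_VMAS; i++) {
		/* If the walk failed, not all VMAs might be locked. */
		if (ret)
			toy_vma_start_write(&vmas[i]);
		/* ... mbind_range()-style modification would go here ... */
	}
	return 0;
}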