Re: [PATCH v3] mm: fix race between MADV_FREE reclaim and blkdev direct IO read

On Fri, Feb 4, 2022 at 2:57 AM Yu Zhao <yuzhao@xxxxxxxxxx> wrote:
>
> On Thu, Feb 3, 2022 at 3:17 PM Mauricio Faria de Oliveira
> <mfo@xxxxxxxxxxxxx> wrote:
> >
> > On Wed, Feb 2, 2022 at 6:53 PM Yu Zhao <yuzhao@xxxxxxxxxx> wrote:
[...]
> > > Got it. IIRC, get_user_pages() doesn't imply a write barrier. If so,
> > > there should be a smp_wmb() on the other side:
> >
> > If I understand it correctly, it actually implies a full memory
> > barrier, doesn't it?
> >
> > Because... gup_pte_range() (fast path) calls try_grab_compound_head(),
> > which eventually calls* atomic_add_unless(), an atomic conditional RMW
> > operation with return value, thus fully ordered on success (atomic_t.rst);
> > (on failure gup_pte_range() falls back to the slow path, below.)
> >
> > And follow_page_pte() (slow path) calls try_grab_page(), which also calls
> > into try_grab_compound_head(), as the above.
> >
> > (* on CONFIG_TINY_RCU, it calls just atomic_add(), which isn't ordered,
> > but that option is targeted for UP/!SMP, thus not a problem for this race.)
> >
> > Looking at the implementation of arch_atomic_fetch_add_unless() on
> > more relaxed/weakly ordered archs (arm, powerpc, if I got that right),
> > there are barriers like 'smp_mb()' and 'sync' instruction if 'old != unless',
> > so that seems to be OK.
> >
> > And the set_page_dirty() calls occur after get_user_pages() / that point.
> >
> > Does that make sense?
>
> Yes, it does, thanks. I was thinking along the lines of whether there
> is an actual contract. [...]

Ok, got you.
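
Just to spell out the contract I was relying on, here's a rough userspace
analogy of the pairing (a sketch only, not kernel code; pte_present,
refcount and the function names below are made-up stand-ins for the PTE,
the page refcount, and the GUP fast path / reclaim path): the fully
ordered RMW on the GUP side pairs with the full barrier after the PTE
clear on the reclaim side, so at least one of the two sides must observe
the other's store: either reclaim sees the extra reference, or GUP sees
the cleared PTE and backs off.

/*
 * Userspace sketch of the store-buffering pattern, not kernel code.
 * "pte_present" and "refcount" stand in for the real PTE and page refcount.
 * GUP fast path: fully ordered conditional RMW on the refcount, then
 * re-check the PTE.  Reclaim path: clear the PTE, full barrier, then
 * read the refcount.  With both sides fully ordered, at least one side
 * sees the other's store.
 */
#include <stdatomic.h>
#include <stdio.h>
#include <pthread.h>

static atomic_int pte_present = 1;	/* stand-in for the PTE		  */
static atomic_int refcount    = 1;	/* stand-in for the page refcount */

static void *gup_fast_side(void *arg)
{
	int old;

	(void)arg;

	/* Fully ordered conditional RMW, like atomic_add_unless() succeeding. */
	old = atomic_load(&refcount);
	while (old != 0 &&
	       !atomic_compare_exchange_strong(&refcount, &old, old + 1))
		;
	if (old == 0)
		return NULL;		/* already freed, nothing to pin */

	/* Re-check the "PTE" after taking the pin, as gup_pte_range() does. */
	if (!atomic_load(&pte_present))
		printf("GUP: PTE gone, back off (drop the pin) and retry\n");
	else
		printf("GUP: pinned with the PTE still present\n");
	return NULL;
}

static void *reclaim_side(void *arg)
{
	(void)arg;

	/* Clear the "PTE", then a full barrier (the kernel gets its ordering
	 * around the PTE clear / TLB flush), then check for an extra pin. */
	atomic_store(&pte_present, 0);
	atomic_thread_fence(memory_order_seq_cst);

	if (atomic_load(&refcount) > 1)
		printf("reclaim: extra reference seen, keep the page\n");
	else
		printf("reclaim: no extra reference, safe to discard\n");
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, gup_fast_side, NULL);
	pthread_create(&b, NULL, reclaim_side, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}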

> [...] The reason get_user_pages() currently works as
> a full barrier is not intentional but a side effect of this recent
> cleanup patch:
> commit 54d516b1d6 ("mm/gup: small refactoring: simplify try_grab_page()")
> But I agree your fix works as is.

Thanks for bringing it up!

That commit and its revert [1] (which John mentioned in his reply)
change only try_grab_page(), not try_grab_compound_head(), so they
should affect only the slow path, not the fast path.

So, with either the change or the revert, the slow path should still be
okay: it takes the page table lock, and try_to_unmap_one() does too, so
they shouldn't race. And the spinlock's barriers ensure the values
written under the lock are visible to the other side.
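
To illustrate that last point, a tiny userspace stand-in (again just a
sketch, not kernel code; 'ptl', pte_present and refcount are made-up
names for the page table lock, the PTE and the page refcount): because
both sides run under the same lock, the critical sections can't
interleave, and the lock's acquire/release ordering publishes the
updates from one side to the other.

/* Userspace sketch, not kernel code. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t ptl = PTHREAD_MUTEX_INITIALIZER;
static int pte_present = 1, refcount = 1;

static void *gup_slow_side(void *arg)		/* follow_page_pte() analogue */
{
	(void)arg;
	pthread_mutex_lock(&ptl);
	if (pte_present)
		refcount++;			/* try_grab_page() under the PTL */
	pthread_mutex_unlock(&ptl);
	return NULL;
}

static void *reclaim_side(void *arg)		/* try_to_unmap_one() analogue */
{
	(void)arg;
	pthread_mutex_lock(&ptl);
	pte_present = 0;			/* clear the PTE */
	printf(refcount > 1 ? "pinned, keep it\n" : "not pinned, can discard\n");
	pthread_mutex_unlock(&ptl);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, gup_slow_side, NULL);
	pthread_create(&b, NULL, reclaim_side, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}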

Thanks,

[1] commit c36c04c2e132 ("Revert "mm/gup: small refactoring: simplify
try_grab_page()"")

-- 
Mauricio Faria de Oliveira


