Re: [PATCH v3] mm: fix race between MADV_FREE reclaim and blkdev direct IO read

On Wed, Feb 2, 2022 at 6:53 PM Yu Zhao <yuzhao@xxxxxxxxxx> wrote:
>
> On Wed, Feb 02, 2022 at 06:27:47PM -0300, Mauricio Faria de Oliveira wrote:
> > On Wed, Feb 2, 2022 at 4:56 PM Yu Zhao <yuzhao@xxxxxxxxxx> wrote:
> > >
> > > On Mon, Jan 31, 2022 at 08:02:55PM -0300, Mauricio Faria de Oliveira wrote:
> > > > Problem:
> > > > =======
> > >
> > > Thanks for the update. A couple of quick questions:
> > >
> > > > Userspace might read the zero-page instead of actual data from a
> > > > direct IO read on a block device if the buffers have been called
> > > > madvise(MADV_FREE) on earlier (this is discussed below) due to a
> > > > race between page reclaim on MADV_FREE and blkdev direct IO read.
> > >
> > > 1) would page migration be affected as well?
> >
> > Could you please elaborate on the potential problem you considered?
> >
> > I checked migrate_pages() -> try_to_migrate() holds the page lock,
> > thus shouldn't race with shrink_page_list() -> with try_to_unmap()
> > (where the issue with MADV_FREE is), but maybe I didn't get you
> > correctly.
>
> Could the race exist between DIO and migration? While DIO is writing
> to a page, could migration unmap it and copy the data from this page
> to a new page?
>

Thanks for clarifying. I started looking into this, but since it's unrelated
to MADV_FREE (which doesn't apply to page migration), I'd rather it not
block this patch, if at all possible.  Is that OK with you?


> > > > @@ -1599,7 +1599,30 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
> > > >
> > > >                       /* MADV_FREE page check */
> > > >                       if (!PageSwapBacked(page)) {
> > > > -                             if (!PageDirty(page)) {
> > > > +                             int ref_count, map_count;
> > > > +
> > > > +                             /*
> > > > +                              * Synchronize with gup_pte_range():
> > > > +                              * - clear PTE; barrier; read refcount
> > > > +                              * - inc refcount; barrier; read PTE
> > > > +                              */
> > > > +                             smp_mb();
> > > > +
> > > > +                             ref_count = page_count(page);
> > > > +                             map_count = page_mapcount(page);
> > > > +
> > > > +                             /*
> > > > +                              * Order reads for page refcount and dirty flag;
> > > > +                              * see __remove_mapping().
> > > > +                              */
> > > > +                             smp_rmb();
> > >
> > > 2) why does it need to order against __remove_mapping()? It seems to
> > >    me that here (called from the reclaim path) it can't race with
> > >    __remove_mapping() because both lock the page.
> >
> > I'll improve that comment in v4.  The ordering isn't against __remove_mapping(),
> > but actually because of an issue described in __remove_mapping()'s comments
> > (something else that doesn't hold the page lock, just has a page reference, that
> > may clear the page dirty flag then drop the reference; thus check ref,
> > then dirty).
>
> Got it. IIRC, get_user_pages() doesn't imply a write barrier. If so,
> there should be a smp_wmb() on the other side:

If I understand it correctly, it actually implies a full memory
barrier, doesn't it?

Because... gup_pte_range() (fast path) calls try_grab_compound_head(),
which eventually calls* atomic_add_unless(), an atomic conditional RMW
operation with return value, thus fully ordered on success (atomic_t.rst);
(on failure gup_pte_range() falls back to the slow path, below.)

And follow_page_pte() (slow path) calls try_grab_page(), which also calls
into try_grab_compound_head(), same as the above.

(* on CONFIG_TINY_RCU, it calls just atomic_add(), which isn't ordered,
but that option is targeted at UP/!SMP, so it's not a problem for this race.)

Looking at the implementation of arch_atomic_fetch_add_unless() on
more relaxed/weakly ordered archs (arm, powerpc, if I got that right),
there are barriers like 'smp_mb()' and the 'sync' instruction if 'old != unless',
so that seems to be OK.

And the set_page_dirty() calls occur after get_user_pages(), i.e., after that barrier.
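
In other words, the pairing I'm relying on is roughly this (a simplified
sketch, not the exact call chains; the DIO side is condensed from the
sequence you listed below):

    /* DIO side (simplified) */
    get_user_pages(...);            /* elevates the page refcount via
                                     * try_grab_compound_head() ->
                                     * atomic_add_unless(); fully ordered
                                     * on success, so it acts as smp_mb() */
    /* ... read from disk into the buffer ... */
    set_page_dirty(page);           /* can't be reordered before the
                                     * refcount increment above */
    put_page(page);

    /* Reclaim side (try_to_unmap_one() with this patch, simplified) */
    pteval = ptep_get_and_clear(mm, address, pvmw.pte);
    smp_mb();                       /* pairs with GUP's full barrier */
    ref_count = page_count(page);
    map_count = page_mapcount(page);
    smp_rmb();                      /* read refcount before dirty flag */
    if (ref_count != 1 + map_count || PageDirty(page))
            SetPageSwapBacked(page);    /* keep the page: someone else
                                         * still references or dirtied it */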

Does that make sense?

Thanks!


>
>          * get_user_pages(&page);
>
>         smp_wmb()
>
>          * SetPageDirty(page);
>          * put_page(page);
>
> (__remove_mapping() doesn't need smp_[rw]mb() on either side because
> it relies on page refcnt freeze and retesting.)
>
> Thanks.



-- 
Mauricio Faria de Oliveira


