Re: [PATCH v2] mm: fix race between MADV_FREE reclaim and blkdev direct IO read

On Mon, Jan 10, 2022 at 11:48:13PM -0700, Yu Zhao wrote:
> On Wed, Jan 05, 2022 at 08:34:40PM -0300, Mauricio Faria de Oliveira wrote:
> > diff --git a/mm/rmap.c b/mm/rmap.c
> > index 163ac4e6bcee..8671de473c25 100644
> > --- a/mm/rmap.c
> > +++ b/mm/rmap.c
> > @@ -1570,7 +1570,20 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
> >  
> >  			/* MADV_FREE page check */
> >  			if (!PageSwapBacked(page)) {
> > -				if (!PageDirty(page)) {
> > +				int ref_count = page_ref_count(page);
> > +				int map_count = page_mapcount(page);
> > +
> > +				/*
> > +				 * The only page refs must be from the isolation
> > +				 * (checked by the caller shrink_page_list() too)
> > +				 * and one or more rmap's (dropped by discard:).
> > +				 *
> > +				 * Check the reference count before dirty flag
> > +				 * with memory barrier; see __remove_mapping().
> > +				 */
> > +				smp_rmb();
> > +				if ((ref_count - 1 == map_count) &&
> > +				    !PageDirty(page)) {
> >  					/* Invalidate as we cleared the pte */
> >  					mmu_notifier_invalidate_range(mm,
> >  						address, address + PAGE_SIZE);
> 
> Out of curiosity, how does it work with COW in terms of reordering?
> Specifically, it seems to me get_page() and page_dup_rmap() in
> copy_present_pte() can happen in any order, and if page_dup_rmap()
> is seen first, and direct io is holding a refcnt, this check can still
> pass?
> 

Hi Yu,

I think you're correct. I don't think we'd want a memory barrier
there in page_dup_rmap(), though. Then, how about making gup_fast
aware of FOLL_TOUCH? (The interleaving you describe is sketched
below, for reference.)
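
Here is the interleaving as I understand it; the counts are
illustrative, assuming the page is isolated and direct IO holds one
extra reference, so ref_count == map_count + 2 before the fork and
the check fails, as intended:

	CPU0 (fork: copy_present_pte)        CPU1 (reclaim: try_to_unmap_one)
	/* stores may become visible
	 * out of program order */
	page_dup_rmap(page);   /* map+1 */
	                                     ref_count = page_ref_count(page);
	                                     map_count = page_mapcount(page);
	                                     smp_rmb();
	                                     /* sees the new map_count but
	                                      * the old ref_count, so
	                                      * ref_count - 1 == map_count
	                                      * passes and the DIO reference
	                                      * is missed */
	get_page(page);        /* ref+1 */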

FOLL_TOUCH means the caller is going to write something, so the page
should be dirtied. Currently, get_user_pages() works like that.
However, the problem is get_user_pages_fast(), since it looks like
lockless_pages_from_mm() doesn't support FOLL_TOUCH.

So the idea is: if the gup_flags passed to
internal_get_user_pages_fast() include FOLL_TOUCH, then
gup_{pmd,pte}_range() try to make the page dirty under trylock_page()
(if the trylock fails, they bail out to the slow path via
__gup_longterm_unlocked, which can set the pages dirty).
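
Something like this, just to show the shape; the helper and its name
are hypothetical, not existing kernel code:

#include <linux/mm.h>
#include <linux/pagemap.h>

/*
 * Hypothetical helper for the gup-fast path: with FOLL_TOUCH and
 * FOLL_WRITE, dirty the page under trylock_page(). Returning false
 * tells the caller to abandon the lockless walk so the slow path,
 * which can sleep on the page lock, handles the dirtying.
 */
static bool gup_fast_touch_page(struct page *page, unsigned int flags)
{
	if (!(flags & FOLL_TOUCH))
		return true;

	if ((flags & FOLL_WRITE) && !PageDirty(page)) {
		if (!trylock_page(page))
			return false;	/* fall back to the slow path */
		set_page_dirty(page);
		unlock_page(page);
	}
	return true;
}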

This approach would also solve the other cases where we map userspace
pages into kernel space and then write to them. Since such a write
doesn't go through the process's page table, we lose the dirty bit in
the process's page table and end up with the same problem. That's why
I'd like to approach it this way.
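
As an illustration of that class of bug (a generic sketch, not a
specific call site): the kernel's write goes through a kernel mapping,
so the user PTE's dirty bit is never set and the page has to be
dirtied explicitly:

#include <linux/highmem.h>
#include <linux/mm.h>
#include <linux/string.h>

/*
 * Illustration only: the kernel writes into a user page through its
 * own mapping. The user PTE's dirty bit is never set by this write,
 * so an explicit set_page_dirty() on the struct page is the only
 * record that the contents changed; skip it and MADV_FREE reclaim
 * may discard the data.
 */
static void kernel_write_to_user_page(struct page *page,
				      const void *src, size_t len)
{
	void *kaddr = kmap_local_page(page);

	memcpy(kaddr, src, len);
	kunmap_local(kaddr);
	set_page_dirty(page);
}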

If that doesn't work, the other option to fix this specific case is:
can't we make the pages dirty in advance in the DIO read case?

When I look at the DIO code, it's already doing that in the async
case. Couldn't we do the same thing for the other cases? I guess the
worst case we'd see is more page writeback, since the pages become
dirty unnecessarily.
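
For reference, the pattern would look roughly like this, modeled on
what the async read path already does with bio_set_pages_dirty(); the
function below is a sketch, not an actual fs/direct-io.c call site:

#include <linux/bio.h>

/*
 * Sketch: for a DIO read, the device writes into the user pages, so
 * mark them dirty before the bio is submitted. MADV_FREE reclaim then
 * sees PageDirty() and keeps the pages. fs/direct-io.c already does
 * this for async reads via bio_set_pages_dirty() (rechecked with
 * bio_check_pages_dirty() on completion).
 */
static void dio_submit_read_bio(struct bio *bio)
{
	bio_set_pages_dirty(bio);
	submit_bio(bio);
}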



