Re: Re: [PATCH] mm: fix a race scenario in folio_isolate_lru

(all information is summarized again at the bottom)


 



>On Mon, Mar 18, 2024 at 2:15 PM Zhaoyang Huang
><huangzhaoyang@xxxxxxxxx> wrote:
>>
>> On Mon, Mar 18, 2024 at 11:28 AM Matthew Wilcox <willy@xxxxxxxxxxxxx> wrote:
>> >
>> > > On Mon, Mar 18, 2024 at 01:37:04AM +0000, 黄朝阳 (Zhaoyang Huang) wrote:
>> > > >On Sun, Mar 17, 2024 at 12:07:40PM +0800, Zhaoyang Huang wrote:
>> > > >> Could it be this scenario, where folio comes from pte(thread
>> > > >> 0), local fbatch(thread 1) and page cache(thread 2)
>> > > >> concurrently and proceed intermixed without lock's protection?
>> > > >> Actually, IMO, thread 1 also could see the folio with refcnt==1
>> > > >> since it doesn't care if the page is on the page cache or not.
>> > > >>
>> > > >> madivise_cold_and_pageout does no explicit folio_get thing
>> > > >> since the folio comes from pte which implies it has one refcnt
>> > > >> from pagecache
>> > > >
>> > > >Mmm, no.  It's implicit, but madvise_cold_or_pageout_pte_range()
>> > > >does guarantee that the folio has at least one refcount.
>> > > >
>> > > >Since we get the folio from vm_normal_folio(vma, addr, ptent); we
>> > > >know that there is at least one mapcount on the folio.  refcount is
>always >= mapcount.
>> > > >Since we hold pte_offset_map_lock(), we know that mapcount (and
>> > > >therefore
>> > > >refcount) cannot be decremented until we call pte_unmap_unlock(),
>> > > >which we don't do until we have called folio_isolate_lru().
>> > > >
>> > > >Good try though, took me a few minutes of looking at it to
>> > > >convince myself that it was safe.
>> > > >
>> > > >Something to bear in mind is that if the race you outline is
>> > > >real, failing to hold a refcount on the folio leaves the caller
>> > > >susceptible to the VM_BUG_ON_FOLIO(!folio_ref_count(folio),
>> > > >folio); if the other thread calls folio_put().
>> > > Resend the chart via outlook.
>> > > I think the problem relies on special timing which is rare; I would
>> > > like to list the steps below in timing sequence.
>> > >
>> > > 1. thread 0 calls folio_isolate_lru with refcnt == 1
>> >
>> > (i assume you mean refcnt == 2 here, otherwise none of this makes
>> > sense)
>> >
>> > > 2. thread 1 calls release_pages with refcnt == 2. (IMO, it could be
>> > > 1, as release_pages doesn't care whether the folio is used by page
>> > > cache or fs)
>> > > 3. thread 2 decreases refcnt to 1 by calling filemap_free_folio.
>> > > (as I mentioned in 2, thread 2 is not mandatory here)
>> > > 4. thread 1 calls folio_put_testzero and passes. (lruvec->lock has
>> > > not been taken here)
>> >
>> > But there's already a bug here.
>> >
>> > Rearrange the order of this:
>> >
>> > 2. thread 1 calls release_pages with refcount == 2 (decreasing
>> >    refcount to 1)
>> > 3. thread 2 decreases refcount to 0 by calling filemap_free_folio
>> > 1. thread 0 calls folio_isolate_lru() and hits the BUG().
>> >
>> > > 5. thread 0 clears the folio's PG_lru by calling folio_test_clear_lru.
>> > > The folio_get that comes after it has no meaning there.
>> > > 6. thread 1 fails folio_test_lru and leaves the folio on the LRU.
>> > > 7. thread 1 wrongly adds the folio to pages_to_free, which could break
>> > > the LRU's list and will have the next folio experience an invalid
>> > > list_del
>> > >
>> > > #thread 0(madivise_cold_and_pageout)
>#1(lru_add_drain->fbatch_release_pages)
>#2(read_pages->filemap_remove_folios)
>> > > refcnt == 1(represent page cache)             refcnt==2(another one
>represent LRU)          folio comes from page cache
>> >
>> > This is still illegible.  Try it this way:
>> >
>> > Thread 0        Thread 1        Thread 2
>> > madvise_cold_or_pageout_pte_range
>> >                 lru_add_drain
>> >                 fbatch_release_pages
>> >                                 read_pages
>> >                                 filemap_remove_folio
>> Thread 0        Thread 1        Thread 2
>> madvise_cold_or_pageout_pte_range
>>                 truncate_inode_pages_range
>>                 fbatch_release_pages
>>                                 truncate_inode_pages_range
>>                                 filemap_remove_folio
>>
>> Sorry for the confusion. I have rearranged the timing chart as above
>> according to the real panic's stack trace. Threads 1 & 2 are both in
>> truncate_inode_pages_range (I think thread 2 (read_pages) is not
>> mandatory here, as threads 0 & 1 could rely on the same refcnt == 1).
>> >
>> > Some accuracy in your report would also be appreciated.  There's no
>> > function called madivise_cold_and_pageout, nor is there a function
>> > called filemap_remove_folios().  It's a little detail, but it's
>> > annoying for me to try to find which function you're actually
>> > referring to.  I have to guess, and it puts me in a bad mood.
>> >
>> > At any rate, these three functions cannot do what you're proposing.
>> > In read_pages(), when we call filemap_remove_folio(), the folio in
>> > question will not have the uptodate flag set, so can never have been
>> > put in the page tables, so cannot be found by madvise().
>> >
>> > Also, as I said in my earlier email,
>> > madvise_cold_or_pageout_pte_range()
>> > does guarantee that the refcount on the folio is held and can never
>> > decrease to zero while folio_isolate_lru() is running.  So that's
>> > two ways this scenario cannot happen.
>> The madvise_xxx path comes from my presumption, which has no proof
>> behind it. Whereas, it looks like truncate_inode_pages_range just cares
>> about the page cache refcnt via folio_put_testzero, without noticing any
>> task's VM state. Furthermore, I notice that move_folios_to_lru is safe,
>> as it runs while holding lruvec->lock.
>> >
>BTW, I think we need to protect all folio_test_clear_lru/folio_test_lru
>calls by moving them under lruvec->lock, in functions such as
>__page_cache_release and folio_activate.
>Otherwise, there is always a race window between testing PG_lru and the
>actions that follow it.

To make it clearer, all of the information is summarized below (thread 2 is removed, since it is not mandatory and only makes the scenario more complex). A simplified sketch of the two code paths follows the chart.

#thread 0 (madvise_cold_or_pageout_pte_range)   #thread 1 (truncate_inode_pages_range)
pte_offset_map_lock                             (takes NO lock)
                                                truncate_inode_folio (refcnt == 2)
                                                <decreases the page cache refcnt>
folio_isolate_lru (refcnt == 1)
                                                release_pages (refcnt == 1)
folio_test_clear_lru
<clears the folio's PG_lru>
                                                folio_put_testzero == true
folio_get (for the isolation)
                                                folio_test_lru == false
                                                <no lruvec_del_folio>
                                                list_add(folio->lru, pages_to_free)
                                                **** this folio breaks the LRU list's
                                                     integrity, since it was never
                                                     deleted from it ****
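
For reference, the two paths in the chart map onto the source roughly as follows. This is a simplified sketch of folio_isolate_lru() (mm/vmscan.c) and the relevant fragment of the release_pages() loop (mm/swap.c) around the kernel version under discussion; hugetlb/zone-device/large-folio handling and some locking details are elided, so the exact upstream code may differ.

/* Thread 0: mm/vmscan.c, reached from madvise while the pte lock is held. */
bool folio_isolate_lru(struct folio *folio)
{
	bool ret = false;

	VM_BUG_ON_FOLIO(!folio_ref_count(folio), folio);

	if (folio_test_clear_lru(folio)) {	/* PG_lru is cleared here... */
		struct lruvec *lruvec;

		folio_get(folio);		/* ...and only now is the extra reference taken */
		lruvec = folio_lruvec_lock_irq(folio);
		lruvec_del_folio(lruvec, folio);
		unlock_page_lruvec_irq(lruvec);
		ret = true;
	}

	return ret;
}

/* Thread 1: the relevant fragment of the release_pages() loop, mm/swap.c. */
		if (!folio_put_testzero(folio))		/* refcnt drops 1 -> 0 and passes */
			continue;

		if (folio_test_lru(folio)) {		/* false here: PG_lru was already cleared */
			lruvec = folio_lruvec_relock_irqsave(folio, lruvec,
							     &flags);
			lruvec_del_folio(lruvec, folio);
			__folio_clear_lru_flags(folio);
		}

		/* the folio is still linked into the LRU list at this point */
		list_add(&folio->lru, &pages_to_free);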

0. The folio's refcnt is decreased from 2 to 1 by filemap_remove_folio.
1. Thread 0 calls folio_isolate_lru with refcnt == 1. The folio comes from the task's pte.
2. Thread 1 calls release_pages with refcnt == 1. The folio comes from the address_space.
(refcnt == 1 makes sense for both folio_isolate_lru and release_pages)
3. Thread 0 clears the folio's PG_lru via folio_test_clear_lru.
4. Thread 1 decreases the folio's refcnt from 1 to 0 in folio_put_testzero and gets permission to proceed.
5. Thread 1 fails the folio_test_lru check and therefore does not do list_del(folio).
6. Thread 1 wrongly adds the folio to pages_to_free, which corrupts the LRU list.
7. The next folio processed by thread 1 experiences an invalid list_del when lruvec_del_folio is called.
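
As for the idea raised at the end of the quoted mail (doing the PG_lru test and clear under lruvec->lock), a minimal illustrative sketch of what that could look like for folio_isolate_lru is below. This is only one possible shape of that idea, not the actual patch, and it makes no claim of being the complete fix; the obvious trade-off is that the lock is now taken even when the folio is not on an LRU.

/*
 * Illustrative only: one possible shape of the "test PG_lru under
 * lruvec->lock" idea from the discussion above.  Not the actual patch.
 */
bool folio_isolate_lru(struct folio *folio)
{
	struct lruvec *lruvec;
	bool ret = false;

	VM_BUG_ON_FOLIO(!folio_ref_count(folio), folio);

	/* Take lruvec->lock before touching PG_lru ... */
	lruvec = folio_lruvec_lock_irq(folio);
	if (folio_test_clear_lru(folio)) {
		/*
		 * ... so that clearing the flag, taking the reference and
		 * unlinking the folio happen as one step with respect to
		 * anyone else who tests PG_lru under the same lock.
		 */
		folio_get(folio);
		lruvec_del_folio(lruvec, folio);
		ret = true;
	}
	unlock_page_lruvec_irq(lruvec);

	return ret;
}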



