Re: [PATCH -V4 -mm] mm, swap: Fix race between swapoff and some swap operations

Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx> writes:

> On Wed, Jan 03, 2018 at 08:42:15AM +0800, Huang, Ying wrote:
>> Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx> writes:
>> 
>> > On Tue, Jan 02, 2018 at 12:29:55PM +0100, Jan Kara wrote:
>> >> On Tue 02-01-18 10:21:03, Mel Gorman wrote:
>> >> > On Sat, Dec 23, 2017 at 10:36:53AM +0900, Minchan Kim wrote:
>> >> > > > code path.  It appears that a similar situation is possible for them too.
>> >> > > > 
>> >> > > > The file cache pages will be deleted from the file cache
>> >> > > > address_space before the address_space (embedded in the inode) is
>> >> > > > freed.  But they will be deleted from the LRU list only when their
>> >> > > > refcount drops to zero; please take a look at put_page() and
>> >> > > > release_pages().  Meanwhile, the address_space will be freed after
>> >> > > > the references to all file cache pages are put.  So if someone
>> >> > > > holds a reference to a file cache page for quite a long time, it is
>> >> > > > possible for that page to still be on the LRU list after the
>> >> > > > inode/address_space is freed.
>> >> > > > 
>> >> > > > And I found the inode/address_space is freed with call_rcu().  I
>> >> > > > don't know whether this is related to page_mapping().
>> >> > > > 
>> >> > > > This is just my understanding.
>> >> > > 
>> >> > > Hmm, it smells like a bug in __isolate_lru_page.
>> >> > > 
>> >> > > Ccing Mel:
>> >> > > 
>> >> > > What lock protects the address_space from being destroyed when a
>> >> > > race happens between inode truncation and __isolate_lru_page?
>> >> > > 
>> >> > 
>> >> > I'm just back online and have a lot of catching up to do, so this is
>> >> > a rushed answer and I didn't read the background of this. However,
>> >> > the question is somewhat ambiguous and broad in scope, as I'm not
>> >> > sure which race you refer to. For file cache pages, I wouldn't expect
>> >> > the address_space to be destroyed as long as the inode exists, since
>> >> > the inode is the structure containing the address_space in this case.
>> >> > A page on the LRU being isolated in __isolate_lru_page will have an
>> >> > elevated reference count, which will pin the inode until
>> >> > remove_mapping is called; remove_mapping holds the page lock, and
>> >> > inode truncation examining a page for truncation also only checks
>> >> > page_mapping under the page lock. Very broadly speaking, pages avoid
>> >> > being added back to an inode being freed by checking the I_FREEING
>> >> > state.
>> >> 
>> >> So I'm wondering what prevents the following:
>> >> 
>> >> CPU1						CPU2
>> >> 
>> >> truncate(inode)					__isolate_lru_page()
>> >>   ...
>> >>   truncate_inode_page(mapping, page);
>> >>     delete_from_page_cache(page)
>> >>       spin_lock_irqsave(&mapping->tree_lock, flags);
>> >>         __delete_from_page_cache(page, NULL)
>> >>           page_cache_tree_delete(..)
>> >>             ...					  mapping = page_mapping(page);
>> >>             page->mapping = NULL;
>> >>             ...
>> >>       spin_unlock_irqrestore(&mapping->tree_lock, flags);
>> >>       page_cache_free_page(mapping, page)
>> >>         put_page(page)
>> >>           if (put_page_testzero(page)) -> false
>> >> - inode now has no pages and can be freed, including the embedded address_space
>> >> 
>> >> 						  if (mapping && !mapping->a_ops->migratepage)
>> >> - we've dereferenced a mapping which has potentially already been freed.
>> >> 
>> >
>> > Hmm, possible if unlikely.
>> >
>> > Before delete_from_page_cache, we called truncate_cleanup_page, so the
>> > page is likely to be neither PageDirty nor PageWriteback, and such a
>> > page is skipped past the mapping check by the only caller of
>> > __isolate_lru_page that makes that check. The race is tiny but it does
>> > exist. One way of closing it is to check the mapping under the page
>> > lock, which will prevent races with truncation. The overhead is
>> > minimal as the calling context (compaction) is quite a heavy operation
>> > anyway.
>> >
>> 
>> I think another possible fix is to use call_rcu_sched() to free the
>> inode (and the address_space).  Because __isolate_lru_page() will be
>> called with the LRU spinlock held and IRQs disabled, call_rcu_sched()
>> will wait until the LRU lock has been released and IRQs are enabled
>> again.
>> 
>
> Maybe, but in this particular case, I would prefer to go with something
> more conventional unless there is strong evidence that it's an improvement
> (which I doubt in this case given the cost of migration overall and the
> corner case of migrating a dirty page).

So you prefer the page lock to RCU?  Is there some problem with RCU?
Is it that the object to be protected isn't clear?
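
To make the object clearer, here is roughly what I mean by freeing the
inode with call_rcu_sched().  This is only a rough sketch based on my
reading of fs/inode.c, not a tested patch; the per-filesystem
->destroy_inode paths would need the same treatment:

	/*
	 * Sketch only: i_callback() already frees the inode one RCU
	 * grace period after destroy_inode().  Switching to the sched
	 * flavor would make any region with IRQs disabled (such as
	 * __isolate_lru_page() running under the LRU lock) act as a
	 * read-side critical section, delaying the free of the inode
	 * and its embedded address_space.
	 */
	static void i_callback(struct rcu_head *head)
	{
		struct inode *inode = container_of(head, struct inode, i_rcu);

		kmem_cache_free(inode_cachep, inode);
	}

	static void destroy_inode(struct inode *inode)
	{
		...
		/* was: call_rcu(&inode->i_rcu, i_callback); */
		call_rcu_sched(&inode->i_rcu, i_callback);
	}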

Another way to fix this with RCU is to replace
trylock_page()/unlock_page() with rcu_read_lock()/rcu_read_unlock() in
your fix.
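
I don't have your final patch at hand, so the following is only a rough
sketch of what I mean, against my reading of the PageDirty branch in
__isolate_lru_page().  It relies on the inode (and hence the
address_space) being freed only after an RCU grace period, as fs/inode.c
already does via call_rcu():

	if (PageDirty(page)) {
		struct address_space *mapping;
		bool migrate_dirty;

		/*
		 * Stabilise the mapping with an RCU read-side critical
		 * section instead of taking the page lock with
		 * trylock_page()/unlock_page().
		 */
		rcu_read_lock();
		mapping = page_mapping(page);
		migrate_dirty = !mapping || mapping->a_ops->migratepage;
		rcu_read_unlock();
		if (!migrate_dirty)
			return ret;
	}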

JFYI, please keep your fix if you think that is more appropriate.

Best Regards,
Huang, Ying



