Re: [PATCH] mm: Cleanup - Reorganize the shrink_page_list code into smaller functions

On Wed, Jun 01, 2016 at 11:23:53AM -0700, Tim Chen wrote:
> On Wed, 2016-06-01 at 16:12 +0900, Minchan Kim wrote:
> > 
> > Hi Tim,
> > 
> > Frankly speaking, this reorganization is too limited and does not
> > work for me. It works only for your goal, which is to allocate
> > batched swap slots, I guess. :)
> > 
> > My goal is to make them work with batched page_check_references,
> > batched try_to_unmap, and batched __remove_mapping, where we can avoid
> > frequent mapping->lock acquisition (e.g., anon_vma or i_mmap_lock) when
> > the batched pages share an inode or are anonymous, hoping that such
> > batched locking helps system performance.
> 
> This is also my goal: to group pages that are either under the same
> mapping or anonymous together, so we can reduce i_mmap_lock
> acquisitions.  One piece of logic that's yet to be implemented in your
> patch is the grouping of similar pages, so we only need one i_mmap_lock
> acquisition per group.  Doing this efficiently is non-trivial.

Hmm, my assumption is that pages from the same inode are likely to be
ordered in the LRU, so there is no need to group them. If a successive
page in page_list comes from a different inode, we can drop the lock and
take the new lock for the new inode. Does that sound strange?
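
To show what I mean, here is an untested sketch. lock_mapping() and
unlock_mapping() are made-up stand-ins for the real i_mmap/anon_vma
locking, not existing functions:

static void shrink_run_locked(struct list_head *page_list)
{
	struct address_space *locked = NULL;
	struct page *page;

	list_for_each_entry(page, page_list, lru) {
		/*
		 * Anon pages return NULL here; their anon_vma
		 * locking is omitted from this sketch.
		 */
		struct address_space *mapping = page_mapping(page);

		/* Cycle the lock only when the mapping changes. */
		if (mapping != locked) {
			if (locked)
				unlock_mapping(locked);	/* hypothetical */
			if (mapping)
				lock_mapping(mapping);	/* hypothetical */
			locked = mapping;
		}
		/* ... page_check_references()/try_to_unmap() work ... */
	}
	if (locked)
		unlock_mapping(locked);
}

So a run of consecutive same-inode pages pays for the lock once, and
the worst case degrades to today's page-by-page locking.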

> 
> I punted on the problem somewhat in my patch and elected to defer the
> processing of the anonymous pages to the end, so they are naturally
> grouped without having to traverse the page_list more than once.  So I'm
> batching the anonymous pages, but the file-mapped pages were not grouped.
> 
> In your implementation, you may need to traverse the page_list in two
> passes: the first to categorize and group the pages, and the second to do
> the actual processing.  Then the lock batching can be implemented
> for the pages.  Otherwise the locking is still done page by page in
> your patch, and can only be batched if the next page on page_list happens
> to have the same mapping.  Your idea of using a spl_batch_pages is pretty

Yes, as I said above, I expect pages in the LRU would normally tend to be
ordered per inode. If they're not, yes, we need grouping, but that
overhead would eat into the benefit of lock batching as SWAP_CLUSTER_MAX
gets bigger.
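
BTW, about deferring the anonymous pages to the end: I read that part
of your description as roughly the following. This is my own untested
sketch, not your actual patch:

static void shrink_defer_anon(struct list_head *page_list)
{
	LIST_HEAD(anon_pages);
	struct page *page, *next;

	/*
	 * Single pass: reclaim file-backed pages as usual and
	 * collect anonymous pages on a private list.
	 */
	list_for_each_entry_safe(page, next, page_list, lru) {
		if (PageAnon(page)) {
			list_move(&page->lru, &anon_pages);
			continue;
		}
		/* ... reclaim file-backed page as usual ... */
	}

	/*
	 * Anon pages are now grouped, so swap slot allocation and
	 * anon_vma work can be batched here.
	 */
	list_for_each_entry_safe(page, next, &anon_pages, lru) {
		/* ... batched anon processing ... */
	}

	/* Return any unprocessed pages to the caller's list. */
	list_splice(&anon_pages, page_list);
}

If that reading is right, the file-backed side still takes the lock
page by page, which is what grouping would address.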

> neat.  It may need some enhancement so that it is known whether some
> locks are already held, for lock-batching purposes.
> 
> 
> Thanks.
> 
> Tim
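
For the enhancement you mention, spl_batch_pages could record the lock
it currently holds, so the batch helpers know when re-acquiring is
unnecessary. Purely illustrative; I am guessing at the struct layout,
and this only covers the file-backed (i_mmap) side, not anon_vma:

struct spl_batch_pages {
	struct page *pages[SWAP_CLUSTER_MAX];
	int nr;
	struct address_space *locked_mapping;	/* NULL if no lock held */
};

/* Returns true if the lock had to be cycled. */
static bool spl_batch_relock(struct spl_batch_pages *batch,
			     struct address_space *mapping)
{
	if (batch->locked_mapping == mapping)
		return false;		/* already held, nothing to do */
	if (batch->locked_mapping)
		i_mmap_unlock_read(batch->locked_mapping);
	if (mapping)
		i_mmap_lock_read(mapping);
	batch->locked_mapping = mapping;
	return true;
}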
