On Tue, Jun 07, 2016 at 01:43:29PM -0700, Tim Chen wrote:
> On Tue, 2016-06-07 at 17:21 +0900, Minchan Kim wrote:
> > On Wed, Jun 01, 2016 at 11:23:53AM -0700, Tim Chen wrote:
> > > On Wed, 2016-06-01 at 16:12 +0900, Minchan Kim wrote:
> > > > Hi Tim,
> > > >
> > > > To me, this reorganization is too limited and, frankly speaking,
> > > > not good for me. It works only for your goal, which is to allocate
> > > > batched swap slots, I guess. :)
> > > >
> > > > My goal is to make them work with batched page_check_references,
> > > > batched try_to_unmap, and batched __remove_mapping, where we can
> > > > avoid frequent acquisition of mapping->lock (e.g., anon_vma or
> > > > i_mmap_lock), hoping such batched locking helps system performance
> > > > when the batched pages share the same inode or anon_vma.
> > >
> > > This is also my goal: to group pages that are either under the same
> > > mapping or are anonymous pages together, so we can reduce the
> > > i_mmap_lock acquisitions. One piece of logic not yet implemented in
> > > your patch is the grouping of similar pages together so that only
> > > one i_mmap_lock acquisition is needed. Doing this efficiently is
> > > non-trivial.
> >
> > Hmm, my assumption is that pages of the same inode are likely to be
> > ordered in the LRU, so there is no need to group them. If a successive
> > page in page_list comes from a different inode, we can drop the lock
> > and take a new lock for the new inode. Does that sound strange?
>
> Sounds reasonable. But your process function passed to spl_batch_pages
> may need to be modified to know whether the radix tree lock or the
> swap_info lock has already been taken, as it deals with only one page.
> It may be tricky, as the lock may get acquired and dropped more than
> once in the process function.
>
> Are you planning to update the patch with lock batching?

Hi Tim,

Okay, I will give it a shot.

Thanks.
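[Editor's note: for illustration, below is a minimal user-space model of
the locking pattern Minchan describes above: walk the page list in LRU
order, keep the current mapping's lock held across consecutive pages
that share it, and drop/re-take only when the mapping changes. This is
a sketch of the idea, not the actual patch; the types and helper names
(struct mapping, batch_process_pages, process_one_page) are made up for
the example, with a pthread mutex standing in for i_mmap_rwsem /
anon_vma locking.]

/*
 * User-space model of the lock batching discussed above: pages that
 * arrive consecutively from the same mapping reuse one lock
 * acquisition; the lock is dropped and re-taken only when the mapping
 * changes. All names are illustrative, not from the actual patch.
 */
#include <pthread.h>
#include <stdio.h>

struct mapping {
	pthread_mutex_t lock;	/* stands in for i_mmap_rwsem / anon_vma lock */
	const char *name;
};

struct page {
	struct mapping *mapping;
	int id;
};

/* Per-page work done while the mapping lock is held (unmap, check refs, ...). */
static void process_one_page(struct page *p)
{
	printf("processing page %d under %s lock\n", p->id, p->mapping->name);
}

static void batch_process_pages(struct page *pages, int nr)
{
	struct mapping *held = NULL;
	int i;

	for (i = 0; i < nr; i++) {
		struct mapping *m = pages[i].mapping;

		if (m != held) {
			/* Mapping changed: drop the old lock, take the new one. */
			if (held)
				pthread_mutex_unlock(&held->lock);
			pthread_mutex_lock(&m->lock);
			held = m;
		}
		process_one_page(&pages[i]);
	}
	if (held)
		pthread_mutex_unlock(&held->lock);
}

int main(void)
{
	struct mapping a = { PTHREAD_MUTEX_INITIALIZER, "inode-A" };
	struct mapping b = { PTHREAD_MUTEX_INITIALIZER, "inode-B" };
	/*
	 * LRU order tends to cluster pages of the same inode, so a run
	 * like this pays only two lock acquisitions for five pages.
	 */
	struct page lru[] = {
		{ &a, 0 }, { &a, 1 }, { &a, 2 }, { &b, 3 }, { &b, 4 },
	};

	batch_process_pages(lru, 5);
	return 0;
}

[This structure also speaks to Tim's concern: lock ownership lives in
the batching loop, so the per-page process function never has to know
whether the lock is already held.]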