On 06/03/2013 10:01 PM, Minchan Kim wrote:
>> > +static int __remove_mapping_batch(struct list_head *remove_list,
>> > +				      struct list_head *ret_pages,
>> > +				      struct list_head *free_pages)
>> > +{
>> > +	int nr_reclaimed = 0;
>> > +	struct address_space *mapping;
>> > +	struct page *page;
>> > +	LIST_HEAD(need_free_mapping);
>> > +
>> > +	while (!list_empty(remove_list)) {
...
>> > +		if (!__remove_mapping(mapping, page)) {
>> > +			unlock_page(page);
>> > +			list_add(&page->lru, ret_pages);
>> > +			continue;
>> > +		}
>> > +		list_add(&page->lru, &need_free_mapping);
...
> +	spin_unlock_irq(&mapping->tree_lock);
> +	while (!list_empty(&need_free_mapping)) {
...
> +		list_move(&page->list, free_pages);
> +		mapping_release_page(mapping, page);
> +	}
> Why do we need new lru list instead of using @free_pages?

I actually tried using @free_pages at first.  The problem is that we
need to call mapping_release_page() without the radix tree lock held,
so we cannot do it in the first while() loop.

'free_pages' is a list created up in shrink_page_list().  There can be
several calls to __remove_mapping_batch() for each call to
shrink_page_list().  'need_free_mapping' lets us temporarily
distinguish the pages that still need mapping_release_page()/unlock_page()
called on them from the ones on 'free_pages', which have already had
that done.

We could theoretically delay _all_ of the
mapping_release_page()/unlock_page() operations until the _entire_
shrink_page_list() operation is done, but doing it per-batch really
helps with lock_page() latency.

Does that make sense?
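
To make the ordering concrete, here is a rough sketch of the two-phase
shape we are talking about.  It is not the actual patch: the page
lookup and lock-taking that were trimmed out ("...") in the hunks above
are filled in as assumptions, and it just reuses the helper names that
appear in the quoted code (lru_to_page(), __remove_mapping(),
mapping_release_page()).

/*
 * Sketch only: the assumption here is that every page on remove_list
 * shares the same mapping and arrives locked.
 */
static int __remove_mapping_batch(struct list_head *remove_list,
				  struct list_head *ret_pages,
				  struct list_head *free_pages)
{
	int nr_reclaimed = 0;
	struct address_space *mapping;
	struct page *page;
	LIST_HEAD(need_free_mapping);

	if (list_empty(remove_list))
		return 0;

	mapping = lru_to_page(remove_list)->mapping;
	spin_lock_irq(&mapping->tree_lock);

	/* Phase 1: detach pages from the mapping under tree_lock. */
	while (!list_empty(remove_list)) {
		page = lru_to_page(remove_list);
		list_del(&page->lru);

		if (!__remove_mapping(mapping, page)) {
			/* Still in use; hand it back to the caller. */
			unlock_page(page);
			list_add(&page->lru, ret_pages);
			continue;
		}
		/* Defer the cleanup that must not run under tree_lock. */
		list_add(&page->lru, &need_free_mapping);
	}
	spin_unlock_irq(&mapping->tree_lock);

	/*
	 * Phase 2: tree_lock is dropped, so do the per-page cleanup and
	 * move the pages onto shrink_page_list()'s free_pages list.
	 */
	while (!list_empty(&need_free_mapping)) {
		page = lru_to_page(&need_free_mapping);
		list_move(&page->lru, free_pages);
		mapping_release_page(mapping, page);
		unlock_page(page);
		nr_reclaimed++;
	}
	return nr_reclaimed;
}

The only job of 'need_free_mapping' is to carry pages across the
spin_unlock_irq() so that the mapping_release_page()/unlock_page() work
happens outside tree_lock but still before we return to
shrink_page_list(), rather than being deferred to the end of the whole
scan.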