Normally, file I/O for reclaim is asynchronous: when page writeback
completes, the reclaimed page is rotated to the LRU tail so that it can
be reclaimed quickly on the next pass. But this adds unnecessary CPU
overhead, and the extra iterations at higher reclaim priority can end up
reclaiming many more pages than needed.

This patch frees paged-out pages instantly when their I/O completes,
instead of rotating them back to the LRU's tail, so that we can get out
of the reclaim loop as soon as possible and avoid the unnecessary CPU
overhead of moving them.

Signed-off-by: Minchan Kim <minchan@xxxxxxxxxx>
---
 mm/filemap.c |  6 +++---
 mm/swap.c    | 14 +++++++++++++-
 2 files changed, 16 insertions(+), 4 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 7905fe7..8e2017b 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -618,12 +618,12 @@ EXPORT_SYMBOL(unlock_page);
  */
 void end_page_writeback(struct page *page)
 {
-	if (TestClearPageReclaim(page))
-		rotate_reclaimable_page(page);
-
 	if (!test_clear_page_writeback(page))
 		BUG();
 
+	if (TestClearPageReclaim(page))
+		rotate_reclaimable_page(page);
+
 	smp_mb__after_clear_bit();
 	wake_up_page(page, PG_writeback);
 }
diff --git a/mm/swap.c b/mm/swap.c
index dfd7d71..87f21632 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -324,7 +324,19 @@ static void pagevec_move_tail_fn(struct page *page, struct lruvec *lruvec,
 	int *pgmoved = arg;
 
 	if (PageLRU(page) && !PageActive(page) && !PageUnevictable(page)) {
-		enum lru_list lru = page_lru_base_type(page);
+		enum lru_list lru;
+
+		if (!trylock_page(page))
+			goto move_tail;
+
+		if (!remove_mapping(page_mapping(page), page, true)) {
+			unlock_page(page);
+			goto move_tail;
+		}
+		unlock_page(page);
+		return;
+move_tail:
+		lru = page_lru_base_type(page);
 		list_move_tail(&page->lru, &lruvec->lists[lru]);
 		(*pgmoved)++;
 	}
-- 
1.8.2.1
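
For reference, a rough before/after call flow of the writeback
completion path (illustrative sketch only: shrink_page_list() details
are elided, and the three-argument remove_mapping() used above is
assumed to come from elsewhere in this series):

Before this patch:

	pageout()			/* reclaim starts async writeback
					   and sets PG_reclaim */
	end_page_writeback()
	  rotate_reclaimable_page()	/* page moved to LRU tail */
	...				/* page freed only on the next
					   reclaim pass */

After this patch:

	pageout()			/* as before */
	end_page_writeback()
	  rotate_reclaimable_page()
	    pagevec_move_tail_fn()
	      trylock_page() + remove_mapping()
					/* page freed immediately; if the
					   trylock or removal fails, fall
					   back to LRU-tail rotation */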