On Wed, Oct 30, 2019 at 09:52:39AM -0700, Minchan Kim wrote:
> @@ -1468,7 +1468,7 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
> 				drain_allow = false;
> 			}
>
> -			if (!isolate_lru_page(head)) {
> +			if (!isolate_lru_page(head, false)) {
> 				list_add_tail(&head->lru, &cma_page_list);
> 				mod_node_page_state(page_pgdat(head),
> 						    NR_ISOLATED_ANON +

It's not clear what that argument means at the callsite, and every
caller needs to pass it to support one niche usecase. Let's not do
that. I think there are better options.

Personally, I think it's a good idea to keep the sanity check in
shrink_page_list() because the mlock LRU handling is quite tricky.

madvise() is really the odd one out here because it isolates LRU
pages through page tables and then sends them through the regular
reclaim path, so IMO it should be the madvise code proper that
handles the exceptional situation.

Why not just this? Maybe with a comment that points out that we're
coming from the page tables instead of a specific LRU list, and so
need to filter out the unevictable LRU pages from our end.

diff --git a/mm/madvise.c b/mm/madvise.c
index 99dd06fecfa9..63e130800570 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -363,8 +363,12 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 		ClearPageReferenced(page);
 		test_and_clear_page_young(page);
 		if (pageout) {
-			if (!isolate_lru_page(page))
-				list_add(&page->lru, &page_list);
+			if (!isolate_lru_page(page)) {
+				if (PageUnevictable(page))
+					putback_lru_page(page);
+				else
+					list_add(&page->lru, &page_list);
+			}
 		} else
 			deactivate_page(page);
 huge_unlock:
@@ -441,8 +445,12 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 		ClearPageReferenced(page);
 		test_and_clear_page_young(page);
 		if (pageout) {
-			if (!isolate_lru_page(page))
-				list_add(&page->lru, &page_list);
+			if (!isolate_lru_page(page)) {
+				if (PageUnevictable(page))
+					putback_lru_page(page);
+				else
+					list_add(&page->lru, &page_list);
+			}
 		} else
 			deactivate_page(page);
 	}
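
For illustration only, here is one way the suggested comment could
read in madvise_cold_or_pageout_pte_range(); the exact wording and
placement are just a sketch on top of the diff above, not part of the
proposed patch:

	if (pageout) {
		if (!isolate_lru_page(page)) {
			/*
			 * We got here through the page tables, not a
			 * specific LRU list, so the page may be
			 * unevictable (e.g. mlocked). Don't feed such
			 * pages to the reclaim path; put them back on
			 * their LRU instead.
			 */
			if (PageUnevictable(page))
				putback_lru_page(page);
			else
				list_add(&page->lru, &page_list);
		}
	} else
		deactivate_page(page);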