On 12/29/2015 09:57 PM, Kirill A. Shutemov wrote:
> On Mon, Dec 28, 2015 at 03:30:26PM -0800, Andrew Morton wrote:
> > Fair enough.
> >
> > mlocked pages are rare and lru_add_drain() isn't free. We could easily
> > and cheaply make page_remove_rmap() return "bool was_mlocked" (or,
> > better, "bool might_be_in_lru_cache") to skip this overhead.
>
> Propagating it back is painful. What about this instead:
Looks good.
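
For comparison, the return-value idea might have looked something like
the sketch below (a hypothetical helper, not anything in the tree, and
it assumes the two-argument page_remove_rmap() from this series); the
painful part Kirill mentions is that every caller between the rmap walk
and split_huge_page_to_list() would have to hand the flag back up:

#include <linux/page-flags.h>
#include <linux/rmap.h>

/*
 * Hypothetical wrapper, for illustration only: report whether the
 * page was mlocked so callers can skip lru_add_drain() when the page
 * cannot be sitting on a per-CPU pagevec.
 */
static bool page_remove_rmap_was_mlocked(struct page *page, bool compound)
{
        /* Sample the flag before page_remove_rmap() can clear it. */
        bool was_mlocked = PageMlocked(page);

        page_remove_rmap(page, compound);
        return was_mlocked;
}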
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index ecb4ed1a821a..edfa53eda9ca 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -3385,6 +3385,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
>  	struct page *head = compound_head(page);
>  	struct anon_vma *anon_vma;
>  	int count, mapcount, ret;
> +	bool mlocked;
>  
>  	VM_BUG_ON_PAGE(is_huge_zero_page(page), page);
>  	VM_BUG_ON_PAGE(!PageAnon(page), page);
> @@ -3415,11 +3416,13 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
>  		goto out_unlock;
>  	}
>  
> +	mlocked = PageMlocked(page);
>  	freeze_page(anon_vma, head);
>  	VM_BUG_ON_PAGE(compound_mapcount(head), head);
>  
>  	/* Make sure the page is not on per-CPU pagevec as it takes pin */
> -	lru_add_drain();
> +	if (mlocked)
> +		lru_add_drain();
>  
>  	/* Prevent deferred_split_scan() touching ->_count */
>  	spin_lock(&split_queue_lock);
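
If I read the ordering right, PageMlocked() has to be sampled before
freeze_page(): unmapping ends in page_remove_rmap(), which clears the
flag via clear_page_mlock() and can leave the page on a per-CPU
pagevec holding an extra pin. Sampling it first means the drain only
runs when such a pin is possible, and the common non-mlocked case
skips lru_add_drain() entirely.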