On Thu, 24 Dec 2015 14:51:23 +0300 "Kirill A. Shutemov" <kirill.shutemov@xxxxxxxxxxxxxxx> wrote:

> During freeze_page(), we remove the page from rmap. It munlocks the page
> if it was mlocked. clear_page_mlock() uses the lru cache, which temporarily
> pins the page.
>
> Let's drain the lru cache before checking the page's count vs. mapcount.
> With this change, a mlocked page splits on the first attempt, provided it
> is not pinned by somebody else.
>
> Signed-off-by: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
> ---
>  mm/huge_memory.c | 3 +++
>  1 file changed, 3 insertions(+)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 1a988d9b86ef..4c1c292b7ddd 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -3417,6 +3417,9 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
>  	freeze_page(anon_vma, head);
>  	VM_BUG_ON_PAGE(compound_mapcount(head), head);
>
> +	/* Make sure the page is not on a per-CPU pagevec as it takes a pin */
> +	lru_add_drain();
> +
>  	/* Prevent deferred_split_scan() touching ->_count */
>  	spin_lock(&split_queue_lock);
>  	count = page_count(head);

Fair enough.  mlocked pages are rare and lru_add_drain() isn't free.  We
could easily and cheaply make page_remove_rmap() return "bool was_mlocked"
(or, better, "bool might_be_in_lru_cache") to skip this overhead.