Re: [PATCH 4/4] thp: increase split_huge_page() success rate

On Mon, Dec 28, 2015 at 03:30:26PM -0800, Andrew Morton wrote:
> On Thu, 24 Dec 2015 14:51:23 +0300 "Kirill A. Shutemov" <kirill.shutemov@xxxxxxxxxxxxxxx> wrote:
> 
> > During freeze_page(), we remove the page from rmap, which munlocks the
> > page if it was mlocked. clear_page_mlock() uses the lru cache, which
> > temporarily pins the page.
> > 
> > Let's drain the lru cache before checking the page's count vs. mapcount.
> > With this change, a mlocked page splits on the first attempt, unless it
> > is pinned by somebody else.
> > 
> > Signed-off-by: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
> > ---
> >  mm/huge_memory.c | 3 +++
> >  1 file changed, 3 insertions(+)
> > 
> > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > index 1a988d9b86ef..4c1c292b7ddd 100644
> > --- a/mm/huge_memory.c
> > +++ b/mm/huge_memory.c
> > @@ -3417,6 +3417,9 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
> >  	freeze_page(anon_vma, head);
> >  	VM_BUG_ON_PAGE(compound_mapcount(head), head);
> >  
> > +	/* Make sure the page is not on per-CPU pagevec as it takes pin */
> > +	lru_add_drain();
> > +
> >  	/* Prevent deferred_split_scan() touching ->_count */
> >  	spin_lock(&split_queue_lock);
> >  	count = page_count(head);
> 
> Fair enough.
> 
> mlocked pages are rare and lru_add_drain() isn't free.  We could easily
> and cheaply make page_remove_rmap() return "bool was_mlocked" (or,
> better, "bool might_be_in_lru_cache") to skip this overhead.

Propagating it back is painful. What about this instead:

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index ecb4ed1a821a..edfa53eda9ca 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3385,6 +3385,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 	struct page *head = compound_head(page);
 	struct anon_vma *anon_vma;
 	int count, mapcount, ret;
+	bool mlocked;
 
 	VM_BUG_ON_PAGE(is_huge_zero_page(page), page);
 	VM_BUG_ON_PAGE(!PageAnon(page), page);
@@ -3415,11 +3416,13 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 		goto out_unlock;
 	}
 
+	mlocked = PageMlocked(page);
 	freeze_page(anon_vma, head);
 	VM_BUG_ON_PAGE(compound_mapcount(head), head);
 
 	/* Make sure the page is not on per-CPU pagevec as it takes pin */
-	lru_add_drain();
+	if (mlocked)
+		lru_add_drain();
 
 	/* Prevent deferred_split_scan() touching ->_count */
 	spin_lock(&split_queue_lock);
-- 
 Kirill A. Shutemov
