[merged] mm-avoid-marking-swap-cached-page-as-lazyfree.patch removed from -mm tree

The patch titled
     Subject: mm: avoid marking swap cached page as lazyfree
has been removed from the -mm tree.  Its filename was
     mm-avoid-marking-swap-cached-page-as-lazyfree.patch

This patch was dropped because it was merged into mainline or a subsystem tree

------------------------------------------------------
From: Shaohua Li <shli@xxxxxx>
Subject: mm: avoid marking swap cached page as lazyfree

MADV_FREE clears the pte dirty bit and then marks the page lazyfree
(clears SwapBacked).  There is no lock to prevent the page from being
added to swap cache between these two steps by page reclaim.  Page
reclaim could add the page to swap cache and unmap the page.  After page
reclaim, the page is added back to the LRU; at that point we may start
draining the per-cpu pagevec and mark the page lazyfree.  So the page can
end up with SwapBacked cleared and PG_swapcache set.  On the next refault
of that virtual address, do_swap_page can find the page in swap cache,
but PageSwapCache() returns false for it because SwapBacked isn't set, so
do_swap_page bails out and does nothing.  The task keeps running into the
fault handler.
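
For reference, a minimal sketch of why do_swap_page gives up on such a
page.  The snippets below are paraphrased from that era's
include/linux/page-flags.h and mm/memory.c rather than quoted verbatim,
so treat them as an illustration:

	/*
	 * PageSwapCache() requires PG_swapbacked in addition to
	 * PG_swapcache, so a page that was marked lazyfree while still
	 * sitting in swap cache reports "not in swap cache":
	 */
	static __always_inline int PageSwapCache(struct page *page)
	{
		return PageSwapBacked(page) &&
		       test_bit(PG_swapcache, &page->flags);
	}

	/*
	 * do_swap_page() looks the page up via the swap cache and then
	 * sanity-checks it, roughly:
	 *
	 *	if (unlikely(!PageSwapCache(page) ||
	 *		     page_private(page) != entry.val))
	 *		goto out_page;
	 *
	 * With SwapBacked cleared that check fails on every refault, so
	 * no pte is installed and the task faults on the same address
	 * indefinitely.
	 */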

Fixes: 802a3a92ad7a ("mm: reclaim MADV_FREE pages")
Link: http://lkml.kernel.org/r/6537ef3814398c0073630b03f176263bc81f0902.1506446061.git.shli@xxxxxx
Signed-off-by: Shaohua Li <shli@xxxxxx>
Reported-by: Artem Savkov <asavkov@xxxxxxxxxx>
Tested-by: Artem Savkov <asavkov@xxxxxxxxxx>
Reviewed-by: Rik van Riel <riel@xxxxxxxxxx>
Acked-by: Johannes Weiner <hannes@xxxxxxxxxxx>
Acked-by: Michal Hocko <mhocko@xxxxxxxx>
Acked-by: Minchan Kim <minchan@xxxxxxxxxx>
Cc: Hillf Danton <hdanton@xxxxxxxx>
Cc: Hugh Dickins <hughd@xxxxxxxxxx>
Cc: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
Cc: <stable@xxxxxxxxxxxxxxx>	[4.12+]
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/swap.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff -puN mm/swap.c~mm-avoid-marking-swap-cached-page-as-lazyfree mm/swap.c
--- a/mm/swap.c~mm-avoid-marking-swap-cached-page-as-lazyfree
+++ a/mm/swap.c
@@ -575,7 +575,7 @@ static void lru_lazyfree_fn(struct page
 			    void *arg)
 {
 	if (PageLRU(page) && PageAnon(page) && PageSwapBacked(page) &&
-	    !PageUnevictable(page)) {
+	    !PageSwapCache(page) && !PageUnevictable(page)) {
 		bool active = PageActive(page);
 
 		del_page_from_lru_list(page, lruvec,
@@ -665,7 +665,7 @@ void deactivate_file_page(struct page *p
 void mark_page_lazyfree(struct page *page)
 {
 	if (PageLRU(page) && PageAnon(page) && PageSwapBacked(page) &&
-	    !PageUnevictable(page)) {
+	    !PageSwapCache(page) && !PageUnevictable(page)) {
 		struct pagevec *pvec = &get_cpu_var(lru_lazyfree_pvecs);
 
 		get_page(page);
_

Patches currently in -mm which might be from shli@xxxxxx are
