Patch "mm/compaction: do page isolation first in compaction" has been added to the 5.10-stable tree

This is a note to let you know that I've just added the patch titled

    mm/compaction: do page isolation first in compaction

to the 5.10-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     mm-compaction-do-page-isolation-first-in-compaction.patch
and it can be found in the queue-5.10 subdirectory.

If you, or anyone else, feel it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit ef33d369381db1a4df5a9f474b014cc4e4664606
Author: Alex Shi <alexs@xxxxxxxxxx>
Date:   Tue Dec 15 12:34:20 2020 -0800

    mm/compaction: do page isolation first in compaction
    
    [ Upstream commit 9df41314390b81a541ca6e84c8340bad0959e4b5 ]
    
    Currently, compaction takes the lru_lock and then does page isolation,
    which works fine with pgdat->lru_lock, since any page isolation would
    compete for the lru_lock.  If we want to change to a per-memcg lru_lock,
    we have to isolate the page before taking the lru_lock; isolation then
    blocks the page's memcg change, which also relies on page isolation.
    That allows us to safely use the per-memcg lru_lock later.
    
    The new page isolation uses the previously introduced TestClearPageLRU()
    plus pgdat lru locking, which will be changed to the memcg lru lock later.
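    
    A condensed sketch of the resulting order in isolate_migratepages_block()
    (drawn from the mm/compaction.c hunk below; compound-page handling and
    failure bookkeeping are omitted):
    
        /* Pin the page first so it cannot be freed under us. */
        if (unlikely(!get_page_unless_zero(page)))
                goto isolate_fail;
    
        /* Check the isolation mode without touching PageLRU yet. */
        if (__isolate_lru_page_prepare(page, isolate_mode) != 0)
                goto isolate_fail_put;
    
        /* Claim the page: only one isolator can clear PageLRU. */
        if (!TestClearPageLRU(page))
                goto isolate_fail_put;
    
        /* Only now take the lru_lock (per-memcg in a later patch). */
        if (!locked)
                locked = compact_lock_irqsave(&pgdat->lru_lock, &flags, cc);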
    
    Hugh Dickins <hughd@xxxxxxxxxx> fixed the following bugs in an early
    version of this patch:
    
    Fix lots of crashes under compaction load: isolate_migratepages_block()
    must clean up appropriately when rejecting a page, setting PageLRU again
    if it had been cleared; and a put_page() after get_page_unless_zero()
    cannot safely be done while holding locked_lruvec - it may turn out to be
    the final put_page(), which will take an lruvec lock when PageLRU.
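    
    Condensed from the rejection paths added in the hunk below: when a page
    is rejected after PageLRU has already been cleared, the flag is restored,
    and the lru_lock is always dropped before the reference is released:
    
        /* e.g. rejecting an unexpected compound page under the lock */
        SetPageLRU(page);               /* undo TestClearPageLRU() */
        goto isolate_fail_put;
    
    isolate_fail_put:
        /* put_page() may itself take an lru_lock, so unlock first. */
        if (locked) {
                spin_unlock_irqrestore(&pgdat->lru_lock, flags);
                locked = false;
        }
        put_page(page);                 /* drop the extra reference */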
    
    And move __isolate_lru_page_prepare back after get_page_unless_zero() to
    make trylock_page() safe: trylock_page() is not safe to use at that
    point, because setting PG_locked can race with the page being freed or
    allocated ("Bad page"), and can also erase flags being set by one of
    those "sole owners" of a freshly allocated page who use non-atomic
    __SetPageFlag().
    
    Link: https://lkml.kernel.org/r/1604566549-62481-16-git-send-email-alex.shi@xxxxxxxxxxxxxxxxx
    Suggested-by: Johannes Weiner <hannes@xxxxxxxxxxx>
    Signed-off-by: Alex Shi <alex.shi@xxxxxxxxxxxxxxxxx>
    Acked-by: Hugh Dickins <hughd@xxxxxxxxxx>
    Acked-by: Johannes Weiner <hannes@xxxxxxxxxxx>
    Acked-by: Vlastimil Babka <vbabka@xxxxxxx>
    Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
    Cc: Alexander Duyck <alexander.duyck@xxxxxxxxx>
    Cc: Andrea Arcangeli <aarcange@xxxxxxxxxx>
    Cc: Andrey Ryabinin <aryabinin@xxxxxxxxxxxxx>
    Cc: "Chen, Rong A" <rong.a.chen@xxxxxxxxx>
    Cc: Daniel Jordan <daniel.m.jordan@xxxxxxxxxx>
    Cc: "Huang, Ying" <ying.huang@xxxxxxxxx>
    Cc: Jann Horn <jannh@xxxxxxxxxx>
    Cc: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
    Cc: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
    Cc: Kirill A. Shutemov <kirill@xxxxxxxxxxxxx>
    Cc: Konstantin Khlebnikov <khlebnikov@xxxxxxxxxxxxxx>
    Cc: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
    Cc: Michal Hocko <mhocko@xxxxxxxxxx>
    Cc: Michal Hocko <mhocko@xxxxxxxx>
    Cc: Mika Penttilä <mika.penttila@xxxxxxxxxxxx>
    Cc: Minchan Kim <minchan@xxxxxxxxxx>
    Cc: Shakeel Butt <shakeelb@xxxxxxxxxx>
    Cc: Tejun Heo <tj@xxxxxxxxxx>
    Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
    Cc: Vladimir Davydov <vdavydov.dev@xxxxxxxxx>
    Cc: Wei Yang <richard.weiyang@xxxxxxxxx>
    Cc: Yang Shi <yang.shi@xxxxxxxxxxxxxxxxx>
    Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
    Signed-off-by: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
    Stable-dep-of: 829ae0f81ce0 ("mm: migrate: fix THP's mapcount on isolation")
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/include/linux/swap.h b/include/linux/swap.h
index fbc6805358da..3577d3a6ec37 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -358,7 +358,7 @@ extern void lru_cache_add_inactive_or_unevictable(struct page *page,
 extern unsigned long zone_reclaimable_pages(struct zone *zone);
 extern unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
 					gfp_t gfp_mask, nodemask_t *mask);
-extern int __isolate_lru_page(struct page *page, isolate_mode_t mode);
+extern int __isolate_lru_page_prepare(struct page *page, isolate_mode_t mode);
 extern unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *memcg,
 						  unsigned long nr_pages,
 						  gfp_t gfp_mask,
diff --git a/mm/compaction.c b/mm/compaction.c
index 8dfbe86bd74f..ba3e907f03b7 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -890,6 +890,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		if (!valid_page && IS_ALIGNED(low_pfn, pageblock_nr_pages)) {
 			if (!cc->ignore_skip_hint && get_pageblock_skip(page)) {
 				low_pfn = end_pfn;
+				page = NULL;
 				goto isolate_abort;
 			}
 			valid_page = page;
@@ -971,6 +972,21 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		if (!(cc->gfp_mask & __GFP_FS) && page_mapping(page))
 			goto isolate_fail;
 
+		/*
+		 * Be careful not to clear PageLRU until after we're
+		 * sure the page is not being freed elsewhere -- the
+		 * page release code relies on it.
+		 */
+		if (unlikely(!get_page_unless_zero(page)))
+			goto isolate_fail;
+
+		if (__isolate_lru_page_prepare(page, isolate_mode) != 0)
+			goto isolate_fail_put;
+
+		/* Try isolate the page */
+		if (!TestClearPageLRU(page))
+			goto isolate_fail_put;
+
 		/* If we already hold the lock, we can skip some rechecking */
 		if (!locked) {
 			locked = compact_lock_irqsave(&pgdat->lru_lock,
@@ -983,10 +999,6 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 					goto isolate_abort;
 			}
 
-			/* Recheck PageLRU and PageCompound under lock */
-			if (!PageLRU(page))
-				goto isolate_fail;
-
 			/*
 			 * Page become compound since the non-locked check,
 			 * and it's on LRU. It can only be a THP so the order
@@ -994,16 +1006,13 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 			 */
 			if (unlikely(PageCompound(page) && !cc->alloc_contig)) {
 				low_pfn += compound_nr(page) - 1;
-				goto isolate_fail;
+				SetPageLRU(page);
+				goto isolate_fail_put;
 			}
 		}
 
 		lruvec = mem_cgroup_page_lruvec(page, pgdat);
 
-		/* Try isolate the page */
-		if (__isolate_lru_page(page, isolate_mode) != 0)
-			goto isolate_fail;
-
 		/* The whole page is taken off the LRU; skip the tail pages. */
 		if (PageCompound(page))
 			low_pfn += compound_nr(page) - 1;
@@ -1032,6 +1041,15 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		}
 
 		continue;
+
+isolate_fail_put:
+		/* Avoid potential deadlock in freeing page under lru_lock */
+		if (locked) {
+			spin_unlock_irqrestore(&pgdat->lru_lock, flags);
+			locked = false;
+		}
+		put_page(page);
+
 isolate_fail:
 		if (!skip_on_failure)
 			continue;
@@ -1068,9 +1086,15 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 	if (unlikely(low_pfn > end_pfn))
 		low_pfn = end_pfn;
 
+	page = NULL;
+
 isolate_abort:
 	if (locked)
 		spin_unlock_irqrestore(&pgdat->lru_lock, flags);
+	if (page) {
+		SetPageLRU(page);
+		put_page(page);
+	}
 
 	/*
 	 * Updated the cached scanner pfn once the pageblock has been scanned
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 8d62eedfc794..5ada402c8d95 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1545,7 +1545,7 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
  *
  * returns 0 on success, -ve errno on failure.
  */
-int __isolate_lru_page(struct page *page, isolate_mode_t mode)
+int __isolate_lru_page_prepare(struct page *page, isolate_mode_t mode)
 {
 	int ret = -EBUSY;
 
@@ -1597,22 +1597,9 @@ int __isolate_lru_page(struct page *page, isolate_mode_t mode)
 	if ((mode & ISOLATE_UNMAPPED) && page_mapped(page))
 		return ret;
 
-	if (likely(get_page_unless_zero(page))) {
-		/*
-		 * Be careful not to clear PageLRU until after we're
-		 * sure the page is not being freed elsewhere -- the
-		 * page release code relies on it.
-		 */
-		if (TestClearPageLRU(page))
-			ret = 0;
-		else
-			put_page(page);
-	}
-
-	return ret;
+	return 0;
 }
 
-
 /*
  * Update LRU sizes after isolating pages. The LRU size updates must
  * be complete before mem_cgroup_update_lru_size due to a sanity check.
@@ -1692,20 +1679,34 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
 		 * only when the page is being freed somewhere else.
 		 */
 		scan += nr_pages;
-		switch (__isolate_lru_page(page, mode)) {
+		switch (__isolate_lru_page_prepare(page, mode)) {
 		case 0:
+			/*
+			 * Be careful not to clear PageLRU until after we're
+			 * sure the page is not being freed elsewhere -- the
+			 * page release code relies on it.
+			 */
+			if (unlikely(!get_page_unless_zero(page)))
+				goto busy;
+
+			if (!TestClearPageLRU(page)) {
+				/*
+				 * This page may in other isolation path,
+				 * but we still hold lru_lock.
+				 */
+				put_page(page);
+				goto busy;
+			}
+
 			nr_taken += nr_pages;
 			nr_zone_taken[page_zonenum(page)] += nr_pages;
 			list_move(&page->lru, dst);
 			break;
 
-		case -EBUSY:
+		default:
+busy:
 			/* else it is being freed elsewhere */
 			list_move(&page->lru, src);
-			continue;
-
-		default:
-			BUG();
 		}
 	}
 


