[patch 11/21] Revert "mm: numa: defer TLB flush for THP migration as long as possible"

From: Nadav Amit <namit@xxxxxxxxxx>
Subject: Revert "mm: numa: defer TLB flush for THP migration as long as possible"

While deferring TLB flushes is a good practice, the reverted patch caused
pending TLB flushes to be checked while the page-table lock is not taken.
As a result, on architectures with a weak memory model (e.g., PPC), Linux
may miss a memory barrier, miss the fact that TLB flushes are pending, and
(in theory) cause memory corruption.

Since the alternative of using smp_mb__after_unlock_lock() was considered
somewhat open-coded, and the performance impact of flushing earlier is
expected to be small, the original patch is reverted.

This reverts commit b0943d61b8fa4201 ("mm: numa: defer TLB flush for THP
migration as long as possible").

Link: http://lkml.kernel.org/r/20170802000818.4760-4-namit@xxxxxxxxxx
Signed-off-by: Nadav Amit <namit@xxxxxxxxxx>
Suggested-by: Mel Gorman <mgorman@xxxxxxx>
Acked-by: Mel Gorman <mgorman@xxxxxxx>
Acked-by: Rik van Riel <riel@xxxxxxxxxx>
Cc: Minchan Kim <minchan@xxxxxxxxxx>
Cc: Sergey Senozhatsky <sergey.senozhatsky@xxxxxxxxx>
Cc: Andy Lutomirski <luto@xxxxxxxxxx>
Cc: "David S. Miller" <davem@xxxxxxxxxxxxx>
Cc: Andrea Arcangeli <aarcange@xxxxxxxxxx>
Cc: Heiko Carstens <heiko.carstens@xxxxxxxxxx>
Cc: Hugh Dickins <hughd@xxxxxxxxxx>
Cc: Ingo Molnar <mingo@xxxxxxxxxx>
Cc: Jeff Dike <jdike@xxxxxxxxxxx>
Cc: Martin Schwidefsky <schwidefsky@xxxxxxxxxx>
Cc: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
Cc: Nadav Amit <nadav.amit@xxxxxxxxx>
Cc: Russell King <linux@xxxxxxxxxxxxxxx>
Cc: Tony Luck <tony.luck@xxxxxxxxx>
Cc: Yoshinori Sato <ysato@xxxxxxxxxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/huge_memory.c |    7 +++++++
 mm/migrate.c     |    6 ------
 2 files changed, 7 insertions(+), 6 deletions(-)

diff -puN mm/huge_memory.c~revert-mm-numa-defer-tlb-flush-for-thp-migration-as-long-as-possible mm/huge_memory.c
--- a/mm/huge_memory.c~revert-mm-numa-defer-tlb-flush-for-thp-migration-as-long-as-possible
+++ a/mm/huge_memory.c
@@ -1496,6 +1496,13 @@ int do_huge_pmd_numa_page(struct vm_faul
 	}
 
 	/*
+	 * The page_table_lock above provides a memory barrier
+	 * with change_protection_range.
+	 */
+	if (mm_tlb_flush_pending(vma->vm_mm))
+		flush_tlb_range(vma, haddr, haddr + HPAGE_PMD_SIZE);
+
+	/*
 	 * Migrate the THP to the requested node, returns with page unlocked
 	 * and access rights restored.
 	 */
diff -puN mm/migrate.c~revert-mm-numa-defer-tlb-flush-for-thp-migration-as-long-as-possible mm/migrate.c
--- a/mm/migrate.c~revert-mm-numa-defer-tlb-flush-for-thp-migration-as-long-as-possible
+++ a/mm/migrate.c
@@ -1937,12 +1937,6 @@ int migrate_misplaced_transhuge_page(str
 		put_page(new_page);
 		goto out_fail;
 	}
-	/*
-	 * We are not sure a pending tlb flush here is for a huge page
-	 * mapping or not. Hence use the tlb range variant
-	 */
-	if (mm_tlb_flush_pending(mm))
-		flush_tlb_range(vma, mmun_start, mmun_end);
 
 	/* Prepare a page as a migration target */
 	__SetPageLocked(new_page);
_