[RFC PATCH 3/4] mm: zap_pte_range optimise fullmm handling for dirty shared pages

Dirty shared (file-backed) pages do not need to be flushed under the
page table lock in the fullmm case: the whole address space is being
torn down, so there will be no subsequent access through the TLBs and
no thread can write to the page via a stale entry after it has been
marked clean.
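
As a purely illustrative aid (not part of the patch), the stand-alone
sketch below models the decision the hunk changes. The names
model_gather and needs_locked_flush() are hypothetical stand-ins for
struct mmu_gather and the inline logic in zap_pte_range(); the fullmm
flag models the full address-space teardown path (e.g. exit_mmap()),
where any remaining flush is left to the final TLB teardown.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for the relevant bit of struct mmu_gather. */
struct model_gather {
	bool fullmm;	/* set when the whole mm is being torn down */
};

/*
 * A dirty, non-anonymous (shared, file-backed) PTE normally forces a
 * TLB flush before the page table lock is dropped, so that
 * page_mkclean() cannot observe a clean page while another thread is
 * still writing through a stale TLB entry.  With fullmm there is no
 * later access through the TLBs, so the locked flush can be skipped.
 */
static bool needs_locked_flush(const struct model_gather *tlb,
			       bool pte_dirty, bool page_anon)
{
	if (page_anon || !pte_dirty)
		return false;
	return !tlb->fullmm;
}

int main(void)
{
	struct model_gather teardown = { .fullmm = true };
	struct model_gather partial_unmap = { .fullmm = false };

	/* exit_mmap()-style teardown: locked flush not needed (prints 0). */
	printf("fullmm zap:  %d\n",
	       needs_locked_flush(&teardown, true, false));
	/* munmap()-style zap: locked flush still required (prints 1). */
	printf("partial zap: %d\n",
	       needs_locked_flush(&partial_unmap, true, false));
	return 0;
}

Built with a plain C compiler, this prints 0 for the fullmm case and 1
for the partial unmap case, mirroring the condition added in the hunk
below.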
---
 mm/memory.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 1161ed3f1d0b..490689909186 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1322,8 +1322,18 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 
 			if (!PageAnon(page)) {
 				if (pte_dirty(ptent)) {
-					force_flush = 1;
-					locked_flush = 1;
+					/*
+					 * Page must be flushed from TLBs
+					 * before releasing PTL to synchronize
+					 * with page_mkclean and avoid another
+					 * thread writing to the page through
+					 * the old TLB after it was marked
+					 * clean.
+					 */
+					if (!tlb->fullmm) {
+						force_flush = 1;
+						locked_flush = 1;
+					}
 					set_page_dirty(page);
 				}
 				if (pte_young(ptent) &&
-- 
2.17.0



