[PATCH v2] filemap: avoid unnecessary major faults in filemap_fault()

From: ZhangPeng <zhangpeng362@xxxxxxxxxx>

A major fault occurred when using mlockall(MCL_CURRENT | MCL_FUTURE)
in an application, leading to an unexpected performance issue[1].

This is caused by the PTE being temporarily cleared during a
read+clear/modify/write update of the PTE, e.g., in
do_numa_page()/change_pte_range().
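
To make the window concrete, the following is a minimal, self-contained
userspace model of such a read+clear/modify/write update (illustrative
only, not the kernel code; the fake_pte variable and the
update_protection() helper are invented for this sketch). A lockless
reader that samples the entry between the clear and the write-back sees
it as none even though the mapping is still valid:

/* Illustrative model only -- not kernel code. */
#include <stdatomic.h>
#include <stdio.h>

static _Atomic unsigned long fake_pte = 0x1000 | 0x1;	/* "present" entry */

static void update_protection(unsigned long new_flags)
{
	/* read + clear: the entry is temporarily none ... */
	unsigned long old = atomic_exchange(&fake_pte, 0);
	/* ... modify: a lockless reader sampling here sees 0 ... */
	unsigned long new = (old & ~0xfffUL) | new_flags;
	/* ... write back: the entry becomes valid again */
	atomic_store(&fake_pte, new);
}

int main(void)
{
	update_protection(0x5);
	printf("entry after update: %#lx\n",
	       (unsigned long)atomic_load(&fake_pte));
	return 0;
}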

For the data segment of a user-mode program, the global variable area
is a private file mapping. After the page cache is loaded, a private
anonymous page is generated once COW is triggered. mlockall() can lock
the COW pages (anonymous pages), but the original file pages cannot be
locked and may be reclaimed. If the global variable (private anon page)
is accessed while vmf->pte is zeroed during a NUMA fault, a file page
fault will be triggered.

At that point, the original private file page may already have been
reclaimed. If the page cache is not available, a major fault will be
triggered and the file will be read, causing additional overhead.
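
As a concrete illustration of the workload described above (a minimal
sketch of the assumed pattern, not the actual application from [1]), a
program like the following is affected: the global variable starts out
backed by a private file mapping, the first write COWs it into an
mlocked anonymous page, and only the original page cache page remains
reclaimable:

#include <sys/mman.h>
#include <stdio.h>
#include <stdlib.h>

static long counter = 42;	/* data segment: private file-backed mapping */

int main(void)
{
	/* Lock current and future mappings. */
	if (mlockall(MCL_CURRENT | MCL_FUTURE)) {
		perror("mlockall");
		exit(EXIT_FAILURE);
	}

	/* The first write triggers COW; the anonymous page is now
	 * locked, but the original page cache page is not and may be
	 * reclaimed under memory pressure. */
	counter++;

	/* Later accesses are expected to never take a major fault. */
	printf("counter = %ld\n", counter);
	return 0;
}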

Fix this by rechecking the PTE without acquiring PTL in filemap_fault()
before triggering a major fault.

File and anonymous page read and write page fault performance was
tested on ext4, tmpfs and ramdisk using will-it-scale[2] on an x86
physical machine. The data is the average change compared with the
mainline after the patch is applied. The results are within the range
of fluctuation and show no obvious difference. The test results are as
follows:
                         processes processes_idle threads threads_idle
ext4    private file write: -1.14%  -0.08%         -1.87%   0.13%
ext4    shared  file write:  0.14%  -0.53%          2.88%  -0.77%
ext4    private file  read:  0.03%  -0.65%         -0.51%  -0.08%
tmpfs   private file write: -0.34%  -0.11%          0.20%   0.15%
tmpfs   shared  file write:  0.96%   0.10%          2.78%  -0.34%
ramdisk private file write: -1.21%  -0.21%         -1.12%   0.11%
ramdisk private file  read:  0.00%  -0.68%         -0.33%  -0.02%

[1] https://lore.kernel.org/linux-mm/9e62fd9a-bee0-52bf-50a7-498fa17434ee@xxxxxxxxxx/
[2] https://github.com/antonblanchard/will-it-scale/

Suggested-by: "Huang, Ying" <ying.huang@xxxxxxxxx>
Suggested-by: Yin Fengwei <fengwei.yin@xxxxxxxxx>
Signed-off-by: ZhangPeng <zhangpeng362@xxxxxxxxxx>
Signed-off-by: Kefeng Wang <wangkefeng.wang@xxxxxxxxxx>
---
v1->v2:
- Add more test results per Huang, Ying
- Add more comments before checking the PTE per Huang, Ying, David
  Hildenbrand and Yin Fengwei
- Change pte_offset_map_nolock to pte_offset_map as the PTL won't be
  used

RFC->v1:
- Add error handling when ptep == NULL per Huang, Ying and Matthew
  Wilcox
- Check the PTE without acquiring PTL in filemap_fault(), suggested by
  Huang, Ying and Yin Fengwei
- Add pmd_none() check before PTE map
- Update commit message and add performance test information

 mm/filemap.c | 34 ++++++++++++++++++++++++++++++++++
 1 file changed, 34 insertions(+)

diff --git a/mm/filemap.c b/mm/filemap.c
index 142864338ca4..a2c1a98bc771 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3238,6 +3238,40 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
 			mapping_locked = true;
 		}
 	} else {
+		if (!pmd_none(*vmf->pmd)) {
+			pte_t *ptep;
+
+			ptep = pte_offset_map(vmf->pmd, vmf->address);
+			if (unlikely(!ptep))
+				return VM_FAULT_NOPAGE;
+			/*
+			 * Recheck PTE as the PTE can be cleared temporarily
+			 * during a read+clear/modify/write update of the PTE,
+			 * e.g., do_numa_page()/change_pte_range(). This will
+			 * trigger a major fault, even if we use mlockall,
+			 * which may affect performance.
+			 * We don't hold PTL here as acquiring PTL hurts
+			 * performance. So the check is still racy, but
+			 * the race window seems small enough.
+			 *
+			 * If we lose the race during the check, the page
+			 * fault will be triggered. But the page table lock
+			 * still ensures correctness:
+			 * - If the page cache is not reclaimed, the page
+			 *   fault will work as if the fault was served
+			 *   already and bail out.
+			 * - If the page cache is reclaimed, the major fault
+			 *   will be triggered, the page cache is filled, and
+			 *   the page fault will also work as if the fault
+			 *   was served already and bail out.
+			 */
+			if (unlikely(!pte_none(ptep_get_lockless(ptep))))
+				ret = VM_FAULT_NOPAGE;
+			pte_unmap(ptep);
+			if (unlikely(ret))
+				return ret;
+		}
+
 		/* No page in the page cache at all */
 		count_vm_event(PGMAJFAULT);
 		count_memcg_event_mm(vmf->vma->vm_mm, PGMAJFAULT);
-- 
2.25.1
