On 2023/11/24 16:04, Huang, Ying wrote:
"zhangpeng (AS)" <zhangpeng362@xxxxxxxxxx> writes:
On 2023/11/24 12:26, Huang, Ying wrote:
"Huang, Ying" <ying.huang@xxxxxxxxx> writes:
"zhangpeng (AS)" <zhangpeng362@xxxxxxxxxx> writes:
On 2023/11/23 13:26, Yin Fengwei wrote:
On 11/23/23 12:12, zhangpeng (AS) wrote:
On 2023/11/23 9:09, Yin Fengwei wrote:
Hi Peng,
On 11/22/23 22:00, Peng Zhang wrote:
From: ZhangPeng <zhangpeng362@xxxxxxxxxx>
A major fault can occur when an application uses mlockall(MCL_CURRENT | MCL_FUTURE),
which leads to an unexpected performance issue[1]. This is caused by the pte being
temporarily cleared during a read/modify/write update of the pte, e.g. in
do_numa_page()/change_pte_range().
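Simplified, the update in that window looks roughly like this (with the generic
ptep_modify_prot_start()/ptep_modify_prot_commit(), i.e. ptep_get_and_clear()
followed by set_pte_at(); a sketch for illustration, not the exact upstream code):

	/* ptl is held by the updater for the whole window */
	old_pte = ptep_modify_prot_start(vma, addr, ptep);	/* pte is now none */
	new_pte = pte_modify(old_pte, vma->vm_page_prot);
	/* a lockless reader sees pte_none() in this window */
	ptep_modify_prot_commit(vma, addr, ptep, old_pte, new_pte);	/* valid pte restored */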
For the data segment of a user-mode program, the global variable area is a private
mapping. After the page cache is loaded, a private anonymous page is generated once
COW is triggered. mlockall() can lock the COW pages (anonymous pages), but the
original file pages cannot be locked and may be reclaimed. If the global variable
(private anon page) is accessed while vmf->pte is temporarily zeroed by the NUMA
fault path, a file page fault is triggered. At that point the original file page
may already have been reclaimed. If the page cache is not available, a major fault
is triggered and the file is read again, causing additional overhead.
Fix this by rechecking the pte, with the ptl held, in filemap_fault() before
triggering a major fault.
[1] https://lore.kernel.org/linux-mm/9e62fd9a-bee0-52bf-50a7-498fa17434ee@xxxxxxxxxx/
Signed-off-by: ZhangPeng <zhangpeng362@xxxxxxxxxx>
Signed-off-by: Kefeng Wang <wangkefeng.wang@xxxxxxxxxx>
---
mm/filemap.c | 14 ++++++++++++++
1 file changed, 14 insertions(+)
diff --git a/mm/filemap.c b/mm/filemap.c
index 71f00539ac00..bb5e6a2790dc 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3226,6 +3226,20 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
mapping_locked = true;
}
} else {
+ pte_t *ptep = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
+ vmf->address, &vmf->ptl);
+ if (ptep) {
+ /*
+ * Recheck pte with ptl locked as the pte can be cleared
+ * temporarily during a read/modify/write update.
+ */
+ if (unlikely(!pte_none(ptep_get(ptep))))
+ ret = VM_FAULT_NOPAGE;
+ pte_unmap_unlock(ptep, vmf->ptl);
+ if (unlikely(ret))
+ return ret;
+ }
I am curious. Did you try not taking the PTL here and just checking whether the PTE is not none?
Thank you for your reply.
If we don't take PTL, the current use case won't trigger this issue either.
Is this verified by testing or just in theory?
If we add a delay between ptep_modify_prot_start() and ptep_modify_prot_commit(),
this issue can also be triggered. Without the delay, we haven't reproduced this
problem so far.
In most cases, if we don't take the PTL, this issue won't be triggered. However,
there is still a possibility of triggering it. The corner case is that task 2
triggers a page fault while task 1 is between ptep_modify_prot_start() and
ptep_modify_prot_commit() in do_numa_page(). Furthermore, without taking the PTL,
task 2 passes the PTE check (the PTE is still none) before task 1 writes the PTE
back in ptep_modify_prot_commit().
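To make the interleaving concrete, my understanding of the race window is roughly
(an illustration, not a verified trace):

  task 1 (do_numa_page, ptl held)       task 2 (filemap_fault, no ptl)
  -------------------------------       ------------------------------
  ptep_modify_prot_start()
    -> pte temporarily cleared
                                        recheck pte without ptl: still none
                                        -> proceed; page cache already reclaimed
                                        -> major fault, file is read again
  ptep_modify_prot_commit()
    -> valid pte written back

With the ptl taken, task 2 instead blocks until task 1 has committed, then sees a
valid pte and returns VM_FAULT_NOPAGE.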
There are very few operations between ptep_modify_prot_start() and
ptep_modify_prot_commit(), while the code path from the page fault to this check is
long. My understanding is that it's very likely the PTE is not none by the time we
do the PTE check here without holding the PTL (this is my theory. :)).
Yes, there is a high probability that this issue won't occur even without taking the PTL.
On the other side, acquiring/releasing the PTL may have a performance impact. It may
not be a big deal because of the IO operations in this code path, but it's better to
collect some performance data IMHO.
We tested the performance of file private mapping page faults (page_fault2.c of
will-it-scale [1]) and file shared mapping page faults (page_fault3.c of
will-it-scale). The difference in performance (in operations per second) before and
after the patch is applied is about 0.7% on an x86 physical machine.
Is it an improvement or a reduction?
And I think that you need to test ramdisk cases too, to verify whether this will
cause a performance regression and how much.
Yes, I will.
In addition, are there any recommended ramdisk test cases? 😁
I think that you can start with the will-it-scale test case you used before. And you
can try some workload with a large number of major faults, like file reads with
mmap().
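A minimal sketch of such a workload (untested; the file path is a placeholder and a
4KiB page size is assumed; drop the page cache first, e.g. with
'echo 3 > /proc/sys/vm/drop_caches', so that the faults are major):

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	const char *path = argc > 1 ? argv[1] : "testfile";
	struct stat st;
	unsigned long sum = 0;
	int fd = open(path, O_RDONLY);

	if (fd < 0 || fstat(fd, &st) < 0) {
		perror(path);
		return 1;
	}

	char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/*
	 * Touch one byte per (assumed 4KiB) page; with a cold page cache this
	 * triggers faults that read the file from disk (counted as major
	 * faults when the page is not yet in the page cache).
	 */
	for (off_t off = 0; off < st.st_size; off += 4096)
		sum += (unsigned char)p[off];

	printf("checksum: %lu\n", sum);
	munmap(p, st.st_size);
	close(fd);
	return 0;
}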
I used will-it-scale to test page faults on ext4 and tmpfs files. The data is the
average change relative to the mainline after the patch is applied. The test results
are within the range of fluctuation, and there is no obvious difference. The test
results are as follows:
                           processes  processes_idle  threads  threads_idle
ext4 private file write:    -0.51%       0.08%        -0.03%     -0.04%
ext4 shared file write:      0.135%     -0.531%        2.883%    -0.772%
tmpfs private file write:   -0.344%     -0.110%        0.200%     0.145%
tmpfs shared file write:     0.958%      0.101%        2.781%    -0.337%
tmpfs private file read:    -0.16%       0.00%         -0.12%     0.41%
[1] https://github.com/antonblanchard/will-it-scale/tree/master
+
/* No page in the page cache at all */
count_vm_event(PGMAJFAULT);
count_memcg_event_mm(vmf->vma->vm_mm, PGMAJFAULT);
--
Best Regards,
Peng