On 11/23/23 15:57, zhangpeng (AS) wrote:
> On 2023/11/23 13:26, Yin Fengwei wrote:
>
>> On 11/23/23 12:12, zhangpeng (AS) wrote:
>>> On 2023/11/23 9:09, Yin Fengwei wrote:
>>>
>>>> Hi Peng,
>>>>
>>>> On 11/22/23 22:00, Peng Zhang wrote:
>>>>> From: ZhangPeng <zhangpeng362@xxxxxxxxxx>
>>>>>
>>>>> A major fault occurred when using mlockall(MCL_CURRENT | MCL_FUTURE)
>>>>> in an application, leading to an unexpected performance issue [1].
>>>>>
>>>>> This is caused by the pte being temporarily cleared during a
>>>>> read/modify/write update of the pte, e.g. in
>>>>> do_numa_page()/change_pte_range().
>>>>>
>>>>> For the data segment of a user-mode program, the global variable area
>>>>> is a private mapping. After the pagecache is loaded, a private
>>>>> anonymous page is generated once COW is triggered. mlockall() can lock
>>>>> COW pages (anonymous pages), but the original file pages cannot be
>>>>> locked and may be reclaimed. If the global variable (private anon
>>>>> page) is accessed while vmf->pte is zeroed during a NUMA fault, a file
>>>>> page fault will be triggered.
>>>>>
>>>>> At this point, the original private file page may already have been
>>>>> reclaimed. If the page cache is not available, a major fault will be
>>>>> triggered and the file will be read, causing additional overhead.
>>>>>
>>>>> Fix this by rechecking the pte under the ptl in filemap_fault() before
>>>>> triggering a major fault.
>>>>>
>>>>> [1] https://lore.kernel.org/linux-mm/9e62fd9a-bee0-52bf-50a7-498fa17434ee@xxxxxxxxxx/
>>>>>
>>>>> Signed-off-by: ZhangPeng <zhangpeng362@xxxxxxxxxx>
>>>>> Signed-off-by: Kefeng Wang <wangkefeng.wang@xxxxxxxxxx>
>>>>> ---
>>>>>  mm/filemap.c | 14 ++++++++++++++
>>>>>  1 file changed, 14 insertions(+)
>>>>>
>>>>> diff --git a/mm/filemap.c b/mm/filemap.c
>>>>> index 71f00539ac00..bb5e6a2790dc 100644
>>>>> --- a/mm/filemap.c
>>>>> +++ b/mm/filemap.c
>>>>> @@ -3226,6 +3226,20 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
>>>>>  			mapping_locked = true;
>>>>>  		}
>>>>>  	} else {
>>>>> +		pte_t *ptep = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
>>>>> +						  vmf->address, &vmf->ptl);
>>>>> +		if (ptep) {
>>>>> +			/*
>>>>> +			 * Recheck pte with ptl locked as the pte can be cleared
>>>>> +			 * temporarily during a read/modify/write update.
>>>>> +			 */
>>>>> +			if (unlikely(!pte_none(ptep_get(ptep))))
>>>>> +				ret = VM_FAULT_NOPAGE;
>>>>> +			pte_unmap_unlock(ptep, vmf->ptl);
>>>>> +			if (unlikely(ret))
>>>>> +				return ret;
>>>>> +		}
>>>> I am curious. Did you try not taking the PTL here and just checking
>>>> whether the PTE is not NONE?
>>> Thank you for your reply.
>>>
>>> If we don't take the PTL, the current use case won't trigger this issue
>>> either.
>> Is this verified by testing or just in theory?
>
> If we add a delay between ptep_modify_prot_start() and
> ptep_modify_prot_commit(), this issue also triggers. Without the delay, we
> haven't reproduced this problem so far.

Thanks for the testing.

>
>>> In most cases, if we don't take the PTL, this issue won't be triggered.
>>> However, there is still a possibility of triggering it. The corner case
>>> is that task 2 triggers a page fault while task 1 is between
>>> ptep_modify_prot_start() and ptep_modify_prot_commit() in do_numa_page().
>>> Furthermore, task 2 passes the check of whether the PTE is not NONE
>>> before task 1 restores the PTE in ptep_modify_prot_commit(), because the
>>> check is done without taking the PTL.
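To make the corner case above concrete, a condensed sketch of the window,
based on the mainline do_numa_page() flow in mm/memory.c. The helpers shown
are the real kernel APIs; the exact statements and the interleaving
annotations are illustrative only, not a patch:

	spin_lock(vmf->ptl);
	/* ... */
	/* ptep_modify_prot_start() clears the PTE on most architectures */
	old_pte = ptep_modify_prot_start(vma, vmf->address, vmf->pte);
	pte = pte_modify(old_pte, vma->vm_page_prot);
	/*
	 * Window: if task 2 touches the page here, it takes a hardware
	 * fault, handle_pte_fault() samples the PTE without the PTL,
	 * sees pte_none(), and falls through to do_fault() ->
	 * filemap_fault(). If the original file folio was already
	 * reclaimed, that becomes a major fault with synchronous I/O,
	 * even though task 1 restores a valid PTE just below.
	 */
	ptep_modify_prot_commit(vma, vmf->address, vmf->pte, old_pte, pte);
	pte_unmap_unlock(vmf->pte, vmf->ptl);

With the proposed fix, task 2 rechecks the PTE under the PTL inside
filemap_fault() and returns VM_FAULT_NOPAGE instead of starting I/O.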
>> There are very few operations between ptep_modify_prot_start() and
>> ptep_modify_prot_commit(), while the code path from a page fault to this
>> check is long. My understanding is that it's very likely the PTE is not
>> NONE when the PTE check is done here without holding the PTL (this is my
>> theory. :)).
>
> Yes, there is a high probability that this issue won't occur without
> taking the PTL.
>
>> On the other hand, acquiring/releasing the PTL may have a performance
>> impact. It may not be a big deal because of the IO operations in this
>> code path, but it's better to collect some performance data IMHO.
>
> We tested the performance of file private mapping page faults
> (page_fault2.c of will-it-scale [1]) and file shared mapping page faults
> (page_fault3.c of will-it-scale). The difference in performance (in
> operations per second) before and after the patch is about 0.7% on an x86
> physical machine.
>
> [1] https://github.com/antonblanchard/will-it-scale/tree/master

Maybe include this performance-related information in the commit message?

For the code change, looks good to me.
Reviewed-by: Yin Fengwei <fengwei.yin@xxxxxxxxx>


Regards
Yin, Fengwei

>
>>
>> Regards
>> Yin, Fengwei
>>
>>>> Regards
>>>> Yin, Fengwei
>>>>
>>>>> +
>>>>>  		/* No page in the page cache at all */
>>>>>  		count_vm_event(PGMAJFAULT);
>>>>>  		count_memcg_event_mm(vmf->vma->vm_mm, PGMAJFAULT);
>
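For reference, the delay-based reproduction mentioned earlier in the thread
would look roughly like the following. The mdelay() placement and value are
assumptions for testing only, not part of the proposed patch:

	/*
	 * Debugging aid only (assumed reproducer, not in the patch):
	 * widen the transient-clear window in do_numa_page() so a
	 * concurrent fault reliably observes the cleared PTE.
	 */
	old_pte = ptep_modify_prot_start(vma, vmf->address, vmf->pte);
	pte = pte_modify(old_pte, vma->vm_page_prot);
	mdelay(100);	/* arbitrary; holds the PTE cleared under the ptl */
	ptep_modify_prot_commit(vma, vmf->address, vmf->pte, old_pte, pte);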