Yu mentioned at [1] that mlock() can't be applied to large folios. I went
through the related code and here is my understanding:

- For RLIMIT_MEMLOCK, there is no problem. The RLIMIT_MEMLOCK accounting
  does not depend on the underlying pages, so whether the underlying pages
  are mlocked or munlocked does not affect the RLIMIT_MEMLOCK accounting,
  which is always correct.

- For keeping the pages in RAM, there is no problem either. At least,
  during try_to_unmap_one(), once the VMA is found to have VM_LOCKED set
  in vm_flags, the folio is kept whether the folio itself is mlocked or
  not. So mlock works functionally for large folios. But it is not
  optimal, because page reclaim still needs to scan these large folios
  and may split them.

This series classifies large folios under mlock into two types:
  - the large folio is fully within a VM_LOCKED VMA range
  - the large folio crosses a VM_LOCKED VMA boundary

For the first type, we mlock the large folio so page reclaim will skip
it. For the second type, we don't mlock the large folio; it is allowed to
be picked by page reclaim and split, so the pages that are not in the
VM_LOCKED VMA range can be reclaimed/released. (A simplified sketch of
the containment check follows the diffstat below.)

Patch 1 introduces APIs to check whether a large folio is within a VMA
range.
Patch 2 makes page reclaim/mlock_vma_folio()/munlock_vma_folio() support
large folio mlock/munlock.
Patch 3 makes the mlock()/munlock() syscalls support large folios.

Testing done:
  - kernel selftests. No extra failures introduced.

RFC v2 was posted here [2].

Yu also mentioned during the RFC v2 discussion [3] a race which can leave
a folio unevictable after munlock. We decided that race does not block
this series because:
  - that race was not introduced by this series
  - we have a looks-ok fix for it, which needs to wait for the
    mlock_count fix as Yosry Ahmed suggested [4]

ChangeLog from RFC v2:
  - Removed RFC.
  - Dropped the folio_is_large() check as suggested by both Yu and Hugh.
  - Besides the address/pgoff check, also check the page table entries
    when checking whether the folio is in the range. This handles the
    mremap case where the address/pgoff is in range but the folio can't
    be identified as in range.
  - Fixed an issue in page_add_anon_rmap() and page_add_file_rmap()
    introduced by RFC v2: these two functions can be called multiple
    times against one folio while page_remove_rmap() may not be called
    the same number of times, which can leave mlock_count imbalanced.
    Fix it by skipping mlock of large folios in these two functions.

[1] https://lore.kernel.org/linux-mm/CAOUHufbtNPkdktjt_5qM45GegVO-rCFOMkSh0HQminQ12zsV8Q@xxxxxxxxxxxxxx/
[2] https://lore.kernel.org/linux-mm/20230712060144.3006358-1-fengwei.yin@xxxxxxxxx/
[3] https://lore.kernel.org/linux-mm/CAOUHufZ6=9P_=CAOQyw0xw-3q707q-1FVV09dBNDC-hpcpj2Pg@xxxxxxxxxxxxxx/
[4] https://lore.kernel.org/linux-mm/CAJD7tkZJFG=7xs=9otc5CKs6odWu48daUuZP9Wd9Z-sZF07hXg@xxxxxxxxxxxxxx/

Yin Fengwei (3):
  mm: add functions folio_in_range() and folio_within_vma()
  mm: handle large folio when large folio in VM_LOCKED VMA range
  mm: mlock: update mlock_pte_range to handle large folio

 mm/internal.h | 87 +++++++++++++++++++++++++++++++++++++++++++++------
 mm/mlock.c    | 57 +++++++++++++++++++++++++++++++--
 mm/rmap.c     | 27 +++++++++++-----
 3 files changed, 153 insertions(+), 18 deletions(-)

--
2.39.2
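
For reference, here is a minimal, userspace-compilable sketch of the
containment test that patch 1's folio_in_range()/folio_within_vma()
helpers are built around. The function name, simplified parameters, and
constants are illustrative only; the real helpers in mm/internal.h work
on struct folio/vm_area_struct and also consult the page table entries
to cover the mremap case described above.

	/*
	 * Sketch of the "is the whole large folio inside the locked
	 * range" check. Illustrative only, not the kernel code.
	 */
	#include <stdbool.h>
	#include <stdio.h>

	#define PAGE_SHIFT	12	/* assume 4KB pages */

	static bool folio_in_range_sketch(unsigned long folio_start,
					  unsigned long nr_pages,
					  unsigned long range_start,
					  unsigned long range_end)
	{
		unsigned long folio_end = folio_start + (nr_pages << PAGE_SHIFT);

		/* The whole folio must sit inside [range_start, range_end). */
		return folio_start >= range_start && folio_end <= range_end;
	}

	int main(void)
	{
		/* A 16-page (64KB) folio mapped at 0x10000. */

		/* Type 1: fully inside VMA [0x10000, 0x30000) -> mlock it. */
		printf("fully inside VMA:     %d\n",
		       folio_in_range_sketch(0x10000, 16, 0x10000, 0x30000));

		/*
		 * Type 2: VMA ends at 0x18000, folio crosses the boundary ->
		 * don't mlock; reclaim may split it and free the pages that
		 * fall outside the VM_LOCKED range.
		 */
		printf("crosses VMA boundary: %d\n",
		       folio_in_range_sketch(0x10000, 16, 0x10000, 0x18000));

		return 0;
	}

The two printf() calls correspond to the two folio types above: the first
returns 1 (mlock the folio), the second returns 0 (leave it for reclaim).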