Changes in v1:
- replace [RFC PATCH 1/7] with a separate series (already merged into
  mm-unstable): https://lore.kernel.org/lkml/cover.1727332572.git.zhengqi.arch@xxxxxxxxxxxxx/
  (suggested by David Hildenbrand)
- squash [RFC PATCH 2/7] into [RFC PATCH 4/7] (suggested by David Hildenbrand)
- change to scan and reclaim empty user PTE pages in zap_pte_range()
  (suggested by David Hildenbrand)
- send a separate RFC patch to track the TLB flushing issue, and remove that
  part ([RFC PATCH 3/7] and [RFC PATCH 6/7]) from this series, link:
  https://lore.kernel.org/lkml/20240815120715.14516-1-zhengqi.arch@xxxxxxxxxxxxx/
- add [PATCH v1 1/7] to this series
- drop the RFC tag
- rebase onto next-20241011

Changes in RFC v2:
- fix compilation errors in [RFC PATCH 5/7] and [RFC PATCH 7/7] reported by
  the kernel test robot
- use pte_offset_map_nolock() + pmd_same() instead of check_pmd_still_valid()
  in retract_page_tables() (in [RFC PATCH 4/7])
- rebase onto next-20240805

Hi all,

Previously, we tried to use a completely asynchronous method to reclaim empty
user PTE pages [1]. After discussing with David Hildenbrand, we decided to
implement synchronous reclamation in the madvise(MADV_DONTNEED) case as the
first step.

So this series aims to synchronously free empty PTE pages in the
madvise(MADV_DONTNEED) case. We will detect and free empty PTE pages in
zap_pte_range(), and will add zap_details.reclaim_pt to exclude cases other
than madvise(MADV_DONTNEED). The detection step is sketched below.
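As a rough illustration of that detection step (the helper name and exact
placement are mine, not necessarily what the series implements in
mm/pt_reclaim.c): after the PTEs covered by the madvise(MADV_DONTNEED) range
have been zapped, a PTE page is reclaimable only if every one of its
PTRS_PER_PTE slots is pte_none():

/*
 * Illustrative sketch only, in zap_pte_range() context where the usual
 * pgtable helpers (ptep_get(), pte_none(), PTRS_PER_PTE) are available:
 * scan all PTE slots of a PTE page and report whether the page table has
 * become completely empty.
 */
static bool pte_page_is_empty(pte_t *start_pte)
{
	int i;

	for (i = 0; i < PTRS_PER_PTE; i++) {
		if (!pte_none(ptep_get(start_pte + i)))
			return false;
	}
	return true;
}

If the page turns out to be empty, the freeing itself is handed off to
mmu_gather, as described next.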
In zap_pte_range(), mmu_gather is used to perform batched TLB flushing and
page freeing operations. Therefore, if we want to free the empty PTE page in
this path, the most natural way is to add it to mmu_gather as well.

Now, if CONFIG_MMU_GATHER_RCU_TABLE_FREE is selected, mmu_gather frees page
table pages by semi RCU:

 - batch table freeing: asynchronous free by RCU
 - single table freeing: IPI + synchronous free

But this is not enough to free the empty PTE page table pages in paths other
than the munmap and exit_mmap paths, because the IPI cannot be synchronized
with rcu_read_lock() in pte_offset_map{_lock}(). So we should let single
table freeing also be done via RCU, like batch table freeing.

As a first step, we support this feature on x86_64 and select the newly
introduced CONFIG_ARCH_SUPPORTS_PT_RECLAIM.

For other cases such as madvise(MADV_FREE), scanning and freeing empty PTE
pages asynchronously can be considered in the future.

This series is based on next-20241011 (which contains the series [2]).

Comments and suggestions are welcome!

Thanks,
Qi

[1]. https://lore.kernel.org/lkml/cover.1718267194.git.zhengqi.arch@xxxxxxxxxxxxx/
[2]. https://lore.kernel.org/lkml/cover.1727332572.git.zhengqi.arch@xxxxxxxxxxxxx/

Qi Zheng (7):
  mm: khugepaged: retract_page_tables() use pte_offset_map_lock()
  mm: make zap_pte_range() handle full within-PMD range
  mm: zap_install_uffd_wp_if_needed: return whether uffd-wp pte has been
    re-installed
  mm: zap_present_ptes: return whether the PTE page is unreclaimable
  mm: pgtable: try to reclaim empty PTE page in madvise(MADV_DONTNEED)
  x86: mm: free page table pages by RCU instead of semi RCU
  x86: select ARCH_SUPPORTS_PT_RECLAIM if X86_64

 arch/x86/Kconfig           |  1 +
 arch/x86/include/asm/tlb.h | 19 ++++++++
 arch/x86/kernel/paravirt.c |  7 +++
 arch/x86/mm/pgtable.c      | 10 +++-
 include/linux/mm.h         |  1 +
 include/linux/mm_inline.h  | 11 +++--
 mm/Kconfig                 | 14 ++++++
 mm/Makefile                |  1 +
 mm/internal.h              | 29 ++++++++++++
 mm/khugepaged.c            |  9 +++-
 mm/madvise.c               |  4 +-
 mm/memory.c                | 95 +++++++++++++++++++++++++++++---------
 mm/mmu_gather.c            |  9 +++-
 mm/pt_reclaim.c            | 68 +++++++++++++++++++++++++++
 14 files changed, 248 insertions(+), 30 deletions(-)
 create mode 100644 mm/pt_reclaim.c

-- 
2.20.1