The patch titled
     Subject: mm/huge_memory: only split PMD mapping when necessary in unmap_folio()
has been added to the -mm mm-unstable branch.  Its filename is
     mm-huge_memory-only-split-pmd-mapping-when-necessary-in-unmap_folio.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-huge_memory-only-split-pmd-mapping-when-necessary-in-unmap_folio.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Zi Yan <ziy@xxxxxxxxxx>
Subject: mm/huge_memory: only split PMD mapping when necessary in unmap_folio()
Date: Mon, 26 Feb 2024 15:55:27 -0500

Patch series "Split a folio to any lower order folios", v5.

File folios support any order and multi-size THP is upstreamed[1], so
both file and anonymous folios can have order >0.  Currently,
split_huge_page() only splits a huge page to order-0 pages, but
splitting to orders higher than 0 might better utilize large folios, if
done properly.  In addition, support for Large Block Sizes in XFS would
benefit from it during truncate[2].  This patchset adds support for
splitting a large folio to any lower order folios.

In addition to this implementation of
split_huge_page_to_list_to_order(), a possible optimization could be
splitting a large folio to arbitrary smaller folios instead of a single
order.  As both Hugh and Ryan pointed out[3,5], splitting to a single
order might not be optimal: an order-9 folio might be better split into
one order-8, one order-7, ..., one order-1, and two order-0 folios,
depending on subsequent folio operations.  This is left as future work.

[1] https://lore.kernel.org/all/20231207161211.2374093-1-ryan.roberts@xxxxxxx/
[2] https://lore.kernel.org/linux-mm/20240226094936.2677493-1-kernel@xxxxxxxxxxxxxxxx/
[3] https://lore.kernel.org/linux-mm/9dd96da-efa2-5123-20d4-4992136ef3ad@xxxxxxxxxx/
[4] https://lore.kernel.org/linux-mm/cbb1d6a0-66dd-47d0-8733-f836fe050374@xxxxxxx/
[5] https://lore.kernel.org/linux-mm/20240213215520.1048625-1-zi.yan@xxxxxxxx/
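For illustration, a minimal sketch of how a caller might use the new
interface, assuming the split_huge_page_to_list_to_order(page, list,
new_order) signature this series introduces; the wrapper
try_split_folio_to_order() below is hypothetical, not part of the
series:

#include <linux/huge_mm.h>
#include <linux/pagemap.h>

/* Hypothetical helper: split a large folio down to new_order. */
static int try_split_folio_to_order(struct folio *folio,
				    unsigned int new_order)
{
	int err;

	/* The folio must be locked across the split. */
	folio_lock(folio);
	/*
	 * Passing a NULL list keeps the resulting folios on the LRU; a
	 * nonzero return (e.g. -EBUSY) means the folio had extra
	 * references and could not be split.
	 */
	err = split_huge_page_to_list_to_order(&folio->page, NULL, new_order);
	folio_unlock(folio);

	return err;
}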
This patch (of 8):

With multi-size THP support, not all THPs are PMD-mapped; thus, during a
huge page split, there is no need to always split the PMD mapping in
unmap_folio().  Make it conditional.

Link: https://lkml.kernel.org/r/20240226205534.1603748-1-zi.yan@xxxxxxxx
Link: https://lkml.kernel.org/r/20240226205534.1603748-2-zi.yan@xxxxxxxx
Signed-off-by: Zi Yan <ziy@xxxxxxxxxx>
Cc: David Hildenbrand <david@xxxxxxxxxx>
Cc: Hugh Dickins <hughd@xxxxxxxxxx>
Cc: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
Cc: Luis Chamberlain <mcgrof@xxxxxxxxxx>
Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Cc: Michal Koutny <mkoutny@xxxxxxxx>
Cc: Roman Gushchin <roman.gushchin@xxxxxxxxx>
Cc: Ryan Roberts <ryan.roberts@xxxxxxx>
Cc: Yang Shi <shy828301@xxxxxxxxx>
Cc: Yu Zhao <yuzhao@xxxxxxxxxx>
Cc: Zach O'Keefe <zokeefe@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/huge_memory.c |    7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

--- a/mm/huge_memory.c~mm-huge_memory-only-split-pmd-mapping-when-necessary-in-unmap_folio
+++ a/mm/huge_memory.c
@@ -2727,11 +2727,14 @@ void vma_adjust_trans_huge(struct vm_are
 
 static void unmap_folio(struct folio *folio)
 {
-	enum ttu_flags ttu_flags = TTU_RMAP_LOCKED | TTU_SPLIT_HUGE_PMD |
-		TTU_SYNC | TTU_BATCH_FLUSH;
+	enum ttu_flags ttu_flags = TTU_RMAP_LOCKED | TTU_SYNC |
+		TTU_BATCH_FLUSH;
 
 	VM_BUG_ON_FOLIO(!folio_test_large(folio), folio);
 
+	if (folio_test_pmd_mappable(folio))
+		ttu_flags |= TTU_SPLIT_HUGE_PMD;
+
 	/*
 	 * Anon pages need migration entries to preserve them, but file
 	 * pages can simply be left unmapped, then faulted back on demand.
_

Patches currently in -mm which might be from ziy@xxxxxxxxxx are

mm-huge_memory-only-split-pmd-mapping-when-necessary-in-unmap_folio.patch
mm-memcg-use-order-instead-of-nr-in-split_page_memcg.patch
mm-page_owner-use-order-instead-of-nr-in-split_page_owner.patch
mm-memcg-make-memcg-huge-page-split-support-any-order-split.patch
mm-page_owner-add-support-for-splitting-to-any-order-in-split-page_owner.patch
mm-thp-split-huge-page-to-any-lower-order-pages.patch
mm-huge_memory-enable-debugfs-to-split-huge-pages-to-any-order.patch
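The last patch in the list above extends the existing
/sys/kernel/debug/split_huge_pages debugfs file with an order argument.
A minimal userspace sketch, assuming the
"<pid>,<vaddr_start>,<vaddr_end>[,<new_order>]" input format from that
patch; the pid, address range, and order below are placeholders:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	char cmd[128];
	ssize_t len;
	int fd = open("/sys/kernel/debug/split_huge_pages", O_WRONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* Split THPs mapped in [start, end) of pid 1234 down to order 4. */
	len = snprintf(cmd, sizeof(cmd), "%d,0x%lx,0x%lx,%d",
		       1234, 0x700000000000UL, 0x700000200000UL, 4);
	if (write(fd, cmd, len) != len)
		perror("write");
	close(fd);
	return 0;
}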