The patch titled
     Subject: mm: factor out the numa mapping rebuilding into a new helper
has been added to the -mm mm-unstable branch.  Its filename is
     mm-factor-out-the-numa-mapping-rebuilding-into-a-new-helper.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-factor-out-the-numa-mapping-rebuilding-into-a-new-helper.patch

This patch will later appear in the mm-unstable branch at
     git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>
Subject: mm: factor out the numa mapping rebuilding into a new helper
Date: Tue, 26 Mar 2024 19:51:24 +0800

Patch series "support multi-size THP numa balancing".

This patchset tries to support mTHP NUMA balancing.  As a simple solution
to start with, the NUMA balancing algorithm for mTHP will follow the
existing THP strategy as the basic support.

This patch (of 2):

To support NUMA balancing of large folios, factor out the NUMA mapping
rebuilding into a new helper as a preparation step.

Link: https://lkml.kernel.org/r/cover.1711453317.git.baolin.wang@xxxxxxxxxxxxxxxxx
Link: https://lkml.kernel.org/r/0c9df470bd2a62af8cc707cf01e42c7c744dd927.1711453317.git.baolin.wang@xxxxxxxxxxxxxxxxx
Signed-off-by: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>
Cc: David Hildenbrand <david@xxxxxxxxxx>
Cc: "Huang, Ying" <ying.huang@xxxxxxxxx>
Cc: John Hubbard <jhubbard@xxxxxxxxxx>
Cc: Kefeng Wang <wangkefeng.wang@xxxxxxxxxx>
Cc: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
Cc: Ryan Roberts <ryan.roberts@xxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/memory.c |   22 +++++++++++++++-------
 1 file changed, 15 insertions(+), 7 deletions(-)

--- a/mm/memory.c~mm-factor-out-the-numa-mapping-rebuilding-into-a-new-helper
+++ a/mm/memory.c
@@ -5054,6 +5054,20 @@ int numa_migrate_prep(struct folio *foli
 	return mpol_misplaced(folio, vmf, addr);
 }
 
+static void numa_rebuild_single_mapping(struct vm_fault *vmf, struct vm_area_struct *vma,
+					bool writable)
+{
+	pte_t pte, old_pte;
+
+	old_pte = ptep_modify_prot_start(vma, vmf->address, vmf->pte);
+	pte = pte_modify(old_pte, vma->vm_page_prot);
+	pte = pte_mkyoung(pte);
+	if (writable)
+		pte = pte_mkwrite(pte, vma);
+	ptep_modify_prot_commit(vma, vmf->address, vmf->pte, old_pte, pte);
+	update_mmu_cache_range(vmf, vma, vmf->address, vmf->pte, 1);
+}
+
 static vm_fault_t do_numa_page(struct vm_fault *vmf)
 {
 	struct vm_area_struct *vma = vmf->vma;
@@ -5159,13 +5173,7 @@ out_map:
 	 * Make it present again, depending on how arch implements
 	 * non-accessible ptes, some can allow access by kernel mode.
	 */
-	old_pte = ptep_modify_prot_start(vma, vmf->address, vmf->pte);
-	pte = pte_modify(old_pte, vma->vm_page_prot);
-	pte = pte_mkyoung(pte);
-	if (writable)
-		pte = pte_mkwrite(pte, vma);
-	ptep_modify_prot_commit(vma, vmf->address, vmf->pte, old_pte, pte);
-	update_mmu_cache_range(vmf, vma, vmf->address, vmf->pte, 1);
+	numa_rebuild_single_mapping(vmf, vma, writable);
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
 	goto out;
 }
_

Patches currently in -mm which might be from baolin.wang@xxxxxxxxxxxxxxxxx are

mm-record-the-migration-reason-for-struct-migration_target_control.patch
mm-hugetlb-make-the-hugetlb-migration-strategy-consistent.patch
docs-hugetlbpagerst-add-hugetlb-migration-description.patch
mm-factor-out-the-numa-mapping-rebuilding-into-a-new-helper.patch
mm-support-multi-size-thp-numa-balancing.patch
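
The changelog above describes this as preparation for NUMA balancing of large
folios.  As a rough, hypothetical sketch of why the factoring helps (this is
not part of this patch or the posted follow-up: numa_rebuild_large_mapping,
its parameter list, and the assumed addr/ptep-taking variant of
numa_rebuild_single_mapping are illustrative only, and assume mm/memory.c
context), a large-folio path could walk every prot_none PTE still mapping the
faulting folio and rebuild each one:

/*
 * Hypothetical sketch only.  Assumes numa_rebuild_single_mapping() is
 * extended to take an explicit address/ptep pair instead of always
 * operating on vmf->address/vmf->pte.  Per-PTE writability handling is
 * omitted for brevity.
 */
static void numa_rebuild_large_mapping(struct vm_fault *vmf,
				       struct vm_area_struct *vma,
				       struct folio *folio, pte_t fault_pte,
				       bool writable)
{
	/* Index of the faulting page within the large folio. */
	int nr = pte_pfn(fault_pte) - folio_pfn(folio);
	/* Clamp the folio's virtual range to the VMA boundaries. */
	unsigned long start = max(vmf->address - nr * PAGE_SIZE, vma->vm_start);
	unsigned long end = min(vmf->address +
				(folio_nr_pages(folio) - nr) * PAGE_SIZE,
				vma->vm_end);
	pte_t *start_ptep = vmf->pte - (vmf->address - start) / PAGE_SIZE;
	unsigned long addr;

	/* Restore every NUMA-hinting (prot_none) PTE that maps this folio. */
	for (addr = start; addr < end; addr += PAGE_SIZE, start_ptep++) {
		pte_t ptent = ptep_get(start_ptep);

		if (!pte_present(ptent) || !pte_protnone(ptent))
			continue;
		if (pfn_folio(pte_pfn(ptent)) != folio)
			continue;

		/* Assumed addr/ptep-taking variant of the new helper. */
		numa_rebuild_single_mapping(vmf, vma, addr, start_ptep,
					    writable);
	}
}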