The patch titled
     Subject: mm: record the migration reason for struct migration_target_control
has been added to the -mm mm-unstable branch.  Its filename is
     mm-record-the-migration-reason-for-struct-migration_target_control.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-record-the-migration-reason-for-struct-migration_target_control.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>
Subject: mm: record the migration reason for struct migration_target_control
Date: Wed, 6 Mar 2024 18:13:26 +0800

Patch series "make the hugetlb migration strategy consistent", v2.

As discussed in the previous thread [1], hugetlb migration is currently
handled inconsistently.  When migrating a freed hugetlb page,
alloc_and_dissolve_hugetlb_folio() prevents fallback to other NUMA nodes.
However, when migrating an in-use hugetlb page,
alloc_hugetlb_folio_nodemask() allows fallback to other NUMA nodes, which
can break the per-node hugetlb pool and may result in unexpected failures
when node-bound workloads do not get what they assume is available.

This patchset makes the hugetlb migration strategy clearer and more
consistent.  Please find details in each patch.

[1] https://lore.kernel.org/all/6f26ce22d2fcd523418a085f2c588fe0776d46e7.1706794035.git.baolin.wang@xxxxxxxxxxxxxxxxx/


This patch (of 2):

To support different hugetlb allocation strategies during hugetlb
migration depending on the migration reason, record the reason in the
migration_target_control structure as a preparation.
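As a plain illustration of what this preparation enables, the standalone C
sketch below models the pattern: each migration caller records why it is
migrating, and a later allocation helper can branch on that recorded
reason without extra parameter plumbing.  Only the reason field and the
MR_* names mirror this patch; hugetlb_may_fallback_to_other_nodes() and
the policy it encodes are made-up placeholders for illustration, not the
strategy the follow-up hugetlb patch implements.

/*
 * Standalone C sketch (not kernel code) of the pattern this patch prepares
 * for: callers record the migration reason in the target control structure,
 * and a later allocation helper can branch on it.  The MR_* names and the
 * reason field mirror the diff below; the helper and its policy are
 * illustrative assumptions only.
 */
#include <stdbool.h>
#include <stdio.h>

enum migrate_reason {
	MR_SYSCALL,
	MR_MEMORY_HOTPLUG,
	MR_MEMORY_FAILURE,
	MR_CONTIG_RANGE,
	MR_LONGTERM_PIN,
	MR_DEMOTION,
};

struct migration_target_control {
	int nid;			/* preferred node id */
	enum migrate_reason reason;	/* new: why this migration happens */
};

/*
 * Illustrative consumer: a hypothetical allocation-side check deciding
 * whether fallback to other NUMA nodes is acceptable for this reason.
 * The example policy below is arbitrary.
 */
static bool hugetlb_may_fallback_to_other_nodes(const struct migration_target_control *mtc)
{
	switch (mtc->reason) {
	case MR_MEMORY_FAILURE:
	case MR_MEMORY_HOTPLUG:
		return true;
	default:
		return false;
	}
}

int main(void)
{
	struct migration_target_control mtc = {
		.nid = 0,
		.reason = MR_LONGTERM_PIN,
	};

	printf("fallback allowed: %s\n",
	       hugetlb_may_fallback_to_other_nodes(&mtc) ? "yes" : "no");
	return 0;
}

Built with a plain C compiler, this prints "fallback allowed: no" for the
MR_LONGTERM_PIN case; the point is only that once the reason travels
inside the control structure, allocation-side code can make per-reason
decisions locally.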
Link: https://lkml.kernel.org/r/cover.1709719720.git.baolin.wang@xxxxxxxxxxxxxxxxx
Link: https://lkml.kernel.org/r/7b95d4981e07211f57139fc5b1f7ce91b920cee4.1709719720.git.baolin.wang@xxxxxxxxxxxxxxxxx
Signed-off-by: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>
Reviewed-by: Oscar Salvador <osalvador@xxxxxxx>
Cc: David Hildenbrand <david@xxxxxxxxxx>
Cc: Miaohe Lin <linmiaohe@xxxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxxxx>
Cc: Muchun Song <muchun.song@xxxxxxxxx>
Cc: Naoya Horiguchi <naoya.horiguchi@xxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/gup.c            |    1 +
 mm/internal.h       |    1 +
 mm/memory-failure.c |    1 +
 mm/memory_hotplug.c |    1 +
 mm/mempolicy.c      |    1 +
 mm/migrate.c        |    1 +
 mm/page_alloc.c     |    1 +
 mm/vmscan.c         |    3 ++-
 8 files changed, 9 insertions(+), 1 deletion(-)

--- a/mm/gup.c~mm-record-the-migration-reason-for-struct-migration_target_control
+++ a/mm/gup.c
@@ -2326,6 +2326,7 @@ static int migrate_longterm_unpinnable_p
 	struct migration_target_control mtc = {
 		.nid = NUMA_NO_NODE,
 		.gfp_mask = GFP_USER | __GFP_NOWARN,
+		.reason = MR_LONGTERM_PIN,
 	};
 
 	if (migrate_pages(movable_page_list, alloc_migration_target,
--- a/mm/internal.h~mm-record-the-migration-reason-for-struct-migration_target_control
+++ a/mm/internal.h
@@ -1045,6 +1045,7 @@ struct migration_target_control {
 	int nid;		/* preferred node id */
 	nodemask_t *nmask;
 	gfp_t gfp_mask;
+	enum migrate_reason reason;
 };
 
 /*
--- a/mm/memory-failure.c~mm-record-the-migration-reason-for-struct-migration_target_control
+++ a/mm/memory-failure.c
@@ -2657,6 +2657,7 @@ static int soft_offline_in_use_page(stru
 	struct migration_target_control mtc = {
 		.nid = NUMA_NO_NODE,
 		.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
+		.reason = MR_MEMORY_FAILURE,
 	};
 
 	if (!huge && folio_test_large(folio)) {
--- a/mm/memory_hotplug.c~mm-record-the-migration-reason-for-struct-migration_target_control
+++ a/mm/memory_hotplug.c
@@ -1841,6 +1841,7 @@ static void do_migrate_range(unsigned lo
 	struct migration_target_control mtc = {
 		.nmask = &nmask,
 		.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
+		.reason = MR_MEMORY_HOTPLUG,
 	};
 	int ret;
 
--- a/mm/mempolicy.c~mm-record-the-migration-reason-for-struct-migration_target_control
+++ a/mm/mempolicy.c
@@ -1070,6 +1070,7 @@ static long migrate_to_node(struct mm_st
 	struct migration_target_control mtc = {
 		.nid = dest,
 		.gfp_mask = GFP_HIGHUSER_MOVABLE | __GFP_THISNODE,
+		.reason = MR_SYSCALL,
 	};
 
 	nodes_clear(nmask);
--- a/mm/migrate.c~mm-record-the-migration-reason-for-struct-migration_target_control
+++ a/mm/migrate.c
@@ -2060,6 +2060,7 @@ static int do_move_pages_to_node(struct
 	struct migration_target_control mtc = {
 		.nid = node,
 		.gfp_mask = GFP_HIGHUSER_MOVABLE | __GFP_THISNODE,
+		.reason = MR_SYSCALL,
 	};
 
 	err = migrate_pages(pagelist, alloc_migration_target, NULL,
--- a/mm/page_alloc.c~mm-record-the-migration-reason-for-struct-migration_target_control
+++ a/mm/page_alloc.c
@@ -6356,6 +6356,7 @@ int __alloc_contig_migrate_range(struct
 	struct migration_target_control mtc = {
 		.nid = zone_to_nid(cc->zone),
 		.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
+		.reason = MR_CONTIG_RANGE,
 	};
 	struct page *page;
 	unsigned long total_mapped = 0;
--- a/mm/vmscan.c~mm-record-the-migration-reason-for-struct-migration_target_control
+++ a/mm/vmscan.c
@@ -967,7 +967,8 @@ static unsigned int demote_folio_list(st
 		.gfp_mask = (GFP_HIGHUSER_MOVABLE & ~__GFP_RECLAIM) | __GFP_NOWARN |
 			__GFP_NOMEMALLOC | GFP_NOWAIT,
 		.nid = target_nid,
-		.nmask = &allowed_mask
+		.nmask = &allowed_mask,
+		.reason = MR_DEMOTION,
 	};
 
 	if (list_empty(demote_folios))
_

Patches currently in -mm which might be from baolin.wang@xxxxxxxxxxxxxxxxx are

mm-record-the-migration-reason-for-struct-migration_target_control.patch
mm-hugetlb-make-the-hugetlb-migration-strategy-consistent.patch
docs-hugetlbpagerst-add-hugetlb-migration-description.patch