David Rientjes <rientjes@xxxxxxxxxx> writes:

> On Mon, 29 Jun 2020, Dave Hansen wrote:
>
>> From: Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>
>>
>> If a memory node has a preferred migration path to demote cold pages,
>> attempt to move those inactive pages to that migration node before
>> reclaiming.  This will better utilize available memory, provide a faster
>> tier than swapping or discarding, and allow such pages to be reused
>> immediately without IO to retrieve the data.
>>
>> When handling anonymous pages, this will be considered before swap if
>> enabled.  Should the demotion fail for any reason, the page reclaim
>> will proceed as if the demotion feature was not enabled.
>>
>
> Thanks for sharing these patches and kick-starting the conversation, Dave.
>
> Could this cause us to break a user's mbind() or allow a user to
> circumvent their cpuset.mems?
>
> Because we don't have a mapping of the page back to its allocation
> context (or the process context in which it was allocated), it seems like
> both are possible.

For mbind, I think we don't have enough information during reclaim to
enforce the node binding policy.  But for cpuset, if cgroup v2 (with the
unified hierarchy) is used, it is possible to get the node binding
policy via something like

  cgroup_get_e_css(page->mem_cgroup, &cpuset_cgrp_subsys)

A rough sketch of how that could be used is included below, after the
quoted changelog.

> So let's assume that migration nodes cannot be other DRAM nodes.
> Otherwise, memory pressure could be intentionally or unintentionally
> induced to migrate these pages to another node.  Do we have such a
> restriction on migration nodes?
>
>> Some places we would like to see this used:
>>
>>   1. Persistent memory being used as a slower, cheaper DRAM replacement
>>   2. Remote memory-only "expansion" NUMA nodes
>>   3. Resolving memory imbalances where one NUMA node is seeing more
>>      allocation activity than another.  This helps keep more recent
>>      allocations closer to the CPUs on the node doing the allocating.
>>
>
> (3) is the concerning one given the above if we are to use
> migrate_demote_mapping() for DRAM node balancing.
>
>> Yang Shi's patches used an alternative approach where to-be-discarded
>> pages were collected on a separate discard list and then discarded
>> as a batch with migrate_pages().  This results in simpler code and
>> has all the performance advantages of batching, but has the
>> disadvantage that pages which fail to migrate never get swapped.
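To illustrate the cpuset idea above: a rough, untested sketch of a
reclaim-time check could look like the following.  Note that
cpuset_css_mems_allowed() is a hypothetical helper which would have to
be added to kernel/cgroup/cpuset.c, because the effective nodemask of a
cpuset is not exported today; the other calls are existing API.

/*
 * Sketch only: return true if the cpuset that owns @page allows it to
 * be demoted to @target_nid.
 */
static bool demote_allowed_by_cpuset(struct page *page, int target_nid)
{
	struct cgroup_subsys_state *css;
	nodemask_t mems;
	bool allowed = true;

	if (!page->mem_cgroup)
		return true;

	/* Effective cpuset css of the cgroup that owns this page. */
	css = cgroup_get_e_css(page->mem_cgroup->css.cgroup,
			       &cpuset_cgrp_subsys);
	if (!css)
		return true;

	/* Hypothetical helper returning the cpuset's effective mems. */
	mems = cpuset_css_mems_allowed(css);
	allowed = node_isset(target_nid, mems);
	css_put(css);

	return allowed;
}

This only shows where the information could come from; whether such a
lookup is cheap enough to do for every page in shrink_page_list() is a
separate question.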
>>
>> #Signed-off-by: Keith Busch <keith.busch@xxxxxxxxx>
>> Signed-off-by: Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>
>> Cc: Keith Busch <kbusch@xxxxxxxxxx>
>> Cc: Yang Shi <yang.shi@xxxxxxxxxxxxxxxxx>
>> Cc: David Rientjes <rientjes@xxxxxxxxxx>
>> Cc: Huang Ying <ying.huang@xxxxxxxxx>
>> Cc: Dan Williams <dan.j.williams@xxxxxxxxx>
>> ---
>>
>>  b/include/linux/migrate.h        |    6 ++++
>>  b/include/trace/events/migrate.h |    3 +-
>>  b/mm/debug.c                     |    1
>>  b/mm/migrate.c                   |   52 +++++++++++++++++++++++++++++++++++++++
>>  b/mm/vmscan.c                    |   25 ++++++++++++++++++
>>  5 files changed, 86 insertions(+), 1 deletion(-)
>>
>> diff -puN include/linux/migrate.h~0008-mm-vmscan-Attempt-to-migrate-page-in-lieu-of-discard include/linux/migrate.h
>> --- a/include/linux/migrate.h~0008-mm-vmscan-Attempt-to-migrate-page-in-lieu-of-discard	2020-06-29 16:34:38.950312604 -0700
>> +++ b/include/linux/migrate.h	2020-06-29 16:34:38.963312604 -0700
>> @@ -25,6 +25,7 @@ enum migrate_reason {
>>  	MR_MEMPOLICY_MBIND,
>>  	MR_NUMA_MISPLACED,
>>  	MR_CONTIG_RANGE,
>> +	MR_DEMOTION,
>>  	MR_TYPES
>>  };
>>
>> @@ -78,6 +79,7 @@ extern int migrate_huge_page_move_mappin
>>  			struct page *newpage, struct page *page);
>>  extern int migrate_page_move_mapping(struct address_space *mapping,
>>  		struct page *newpage, struct page *page, int extra_count);
>> +extern int migrate_demote_mapping(struct page *page);
>>  #else
>>
>>  static inline void putback_movable_pages(struct list_head *l) {}
>> @@ -104,6 +106,10 @@ static inline int migrate_huge_page_move
>>  	return -ENOSYS;
>>  }
>>
>> +static inline int migrate_demote_mapping(struct page *page)
>> +{
>> +	return -ENOSYS;
>> +}
>>  #endif /* CONFIG_MIGRATION */
>>
>>  #ifdef CONFIG_COMPACTION
>> diff -puN include/trace/events/migrate.h~0008-mm-vmscan-Attempt-to-migrate-page-in-lieu-of-discard include/trace/events/migrate.h
>> --- a/include/trace/events/migrate.h~0008-mm-vmscan-Attempt-to-migrate-page-in-lieu-of-discard	2020-06-29 16:34:38.952312604 -0700
>> +++ b/include/trace/events/migrate.h	2020-06-29 16:34:38.963312604 -0700
>> @@ -20,7 +20,8 @@
>>  	EM( MR_SYSCALL,		"syscall_or_cpuset")		\
>>  	EM( MR_MEMPOLICY_MBIND,	"mempolicy_mbind")		\
>>  	EM( MR_NUMA_MISPLACED,	"numa_misplaced")		\
>> -	EMe(MR_CONTIG_RANGE,	"contig_range")
>> +	EM( MR_CONTIG_RANGE,	"contig_range")			\
>> +	EMe(MR_DEMOTION,	"demotion")
>>
>>  /*
>>   * First define the enums in the above macros to be exported to userspace
>> diff -puN mm/debug.c~0008-mm-vmscan-Attempt-to-migrate-page-in-lieu-of-discard mm/debug.c
>> --- a/mm/debug.c~0008-mm-vmscan-Attempt-to-migrate-page-in-lieu-of-discard	2020-06-29 16:34:38.954312604 -0700
>> +++ b/mm/debug.c	2020-06-29 16:34:38.963312604 -0700
>> @@ -25,6 +25,7 @@ const char *migrate_reason_names[MR_TYPE
>>  	"mempolicy_mbind",
>>  	"numa_misplaced",
>>  	"cma",
>> +	"demotion",
>>  };
>>
>>  const struct trace_print_flags pageflag_names[] = {
>> diff -puN mm/migrate.c~0008-mm-vmscan-Attempt-to-migrate-page-in-lieu-of-discard mm/migrate.c
>> --- a/mm/migrate.c~0008-mm-vmscan-Attempt-to-migrate-page-in-lieu-of-discard	2020-06-29 16:34:38.956312604 -0700
>> +++ b/mm/migrate.c	2020-06-29 16:34:38.964312604 -0700
>> @@ -1151,6 +1151,58 @@ int next_demotion_node(int node)
>>  	return node;
>>  }
>>
>> +static struct page *alloc_demote_node_page(struct page *page, unsigned long node)
>> +{
>> +	/*
>> +	 * 'mask' targets allocation only to the desired node in the
>> +	 * migration path, and fails fast if the allocation can not be
>> +	 * immediately satisfied.  Reclaim is already active and heroic
>> +	 * allocation efforts are unwanted.
>> +	 */
>> +	gfp_t mask = GFP_NOWAIT | __GFP_NOWARN | __GFP_NORETRY |
>> +			__GFP_NOMEMALLOC | __GFP_THISNODE | __GFP_HIGHMEM |
>> +			__GFP_MOVABLE;
>
> GFP_NOWAIT has the side-effect that it does __GFP_KSWAPD_RECLAIM: do we
> actually want to kick kswapd on the pmem node?

I think it is a good idea to kick kswapd on the PMEM node, because
otherwise we will discard more pages on the DRAM node.  And in general,
the DRAM pages are hotter than the PMEM pages, because the cold DRAM
pages are migrated to the PMEM node.

> If not, GFP_TRANSHUGE_LIGHT does a trick where it does
> GFP_HIGHUSER_MOVABLE & ~__GFP_RECLAIM.  You could probably do the same
> here although the __GFP_IO and __GFP_FS would be unnecessary (but not
> harmful).
>
>> +	struct page *newpage;
>> +
>> +	if (PageTransHuge(page)) {
>> +		mask |= __GFP_COMP;
>> +		newpage = alloc_pages_node(node, mask, HPAGE_PMD_ORDER);
>> +		if (newpage)
>> +			prep_transhuge_page(newpage);
>> +	} else
>> +		newpage = alloc_pages_node(node, mask, 0);
>> +
>> +	return newpage;
>> +}
>> +

Best Regards,
Huang, Ying
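(For reference, the two mask variants discussed above expand roughly as
follows.  This is an illustration of the flag combinations only, not a
tested change, and mask_no_kswapd is just an illustrative name.)

/* Mask in the patch: GFP_NOWAIT still wakes kswapd on the target node. */
gfp_t mask = GFP_NOWAIT | __GFP_NOWARN | __GFP_NORETRY |
		__GFP_NOMEMALLOC | __GFP_THISNODE | __GFP_HIGHMEM |
		__GFP_MOVABLE;

/*
 * David's suggestion: clear both __GFP_KSWAPD_RECLAIM and
 * __GFP_DIRECT_RECLAIM.  __GFP_IO and __GFP_FS remain set via
 * GFP_HIGHUSER_MOVABLE, which is unnecessary here but harmless.
 */
gfp_t mask_no_kswapd = (GFP_HIGHUSER_MOVABLE & ~__GFP_RECLAIM) |
		__GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC |
		__GFP_THISNODE;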