With the advent of various new memory types, some machines will have multiple types of memory, e.g. DRAM and PMEM (persistent memory). The memory subsystem of these machines can be called a memory tiering system, because the performance of the different types of memory is usually different.

In such a system, because the memory access pattern changes over time, some pages in the slow memory may become hot globally. So in this patch, the NUMA balancing mechanism is enhanced to optimize the page placement among the different memory types according to hot/cold dynamically.

In a typical memory tiering system, there are CPUs, fast memory and slow memory in each physical NUMA node. The CPUs and the fast memory will be put in one logical node (called the fast memory node), while the slow memory will be put in another (faked) logical node (called the slow memory node). That is, the fast memory is regarded as local while the slow memory is regarded as remote. So it's possible for the recently accessed pages in the slow memory node to be promoted to the fast memory node via the existing NUMA balancing mechanism.

The original NUMA balancing mechanism will stop migrating pages if the free memory of the target node would fall below the high watermark. This is a reasonable policy if there's only one memory type. But it makes the original NUMA balancing mechanism almost useless for optimizing page placement among different memory types. Details are as follows.

It is common for the working-set size of the workload to be larger than the size of the fast memory nodes; otherwise, it's unnecessary to use the slow memory at all. So in the common case, there are almost never enough free pages in the fast memory nodes, and the globally hot pages in the slow memory node cannot be promoted to the fast memory node. To solve this issue, we have 2 choices as follows,

a. Ignore the free pages watermark checking when promoting hot pages from the slow memory node to the fast memory node. This will create some memory pressure in the fast memory node, thus triggering memory reclaim, so that the cold pages in the fast memory node will be demoted to the slow memory node.

b. Make kswapd of the fast memory node reclaim pages until the free pages are a little more (about 10MB) than the high watermark. Then, when the free pages of the fast memory node reach the high watermark and some hot pages need to be promoted, kswapd of the fast memory node will be woken up to demote some cold pages in the fast memory node to the slow memory node. This frees some extra space in the fast memory node, so the hot pages in the slow memory node can be promoted to the fast memory node.

Choice "a" creates memory pressure in the fast memory node. If the memory pressure of the workload is high, this extra pressure may become high enough to influence the memory allocation latency of the workload, e.g. direct reclaim may be triggered.

Choice "b" works much better in this respect. If the memory pressure of the workload is high, hot page promotion will stop earlier because its allocation watermark is higher than that of the normal memory allocation. So in this patch, choice "b" is implemented.

In addition to the original page placement optimization among sockets, the NUMA balancing mechanism is extended to optimize page placement according to hot/cold among different memory types.
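As a minimal user space usage sketch (illustration only; it assumes this patch is applied, root privileges, and the flag values defined below), both the original NUMA balancing and the new memory tiering mode can be enabled by writing NUMA_BALANCING_NORMAL | NUMA_BALANCING_MEMORY_TIERING (= 3) to the numa_balancing sysctl, e.g.,

    #include <stdio.h>

    int main(void)
    {
            FILE *f;

            /*
             * Illustration only: 0x1 | 0x2 corresponds to
             * NUMA_BALANCING_NORMAL | NUMA_BALANCING_MEMORY_TIERING
             * as defined by this patch.
             */
            f = fopen("/proc/sys/kernel/numa_balancing", "w");
            if (!f) {
                    perror("fopen /proc/sys/kernel/numa_balancing");
                    return 1;
            }
            fprintf(f, "%d\n", 0x1 | 0x2);
            fclose(f);
            return 0;
    }

Writing 0 or 1 keeps the old Boolean semantics, which is what makes the extension below backward compatible.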
So the sysctl user space interface (numa_balancing) is extended in a backward compatible way as follows, so that users can enable/disable these functionalities individually. The sysctl is converted from a Boolean value to a bit field. The definition of the flags is,

- 0x0: NUMA_BALANCING_DISABLED
- 0x1: NUMA_BALANCING_NORMAL
- 0x2: NUMA_BALANCING_MEMORY_TIERING

TODO:

- Update ABI document: Documentation/sysctl/kernel.txt

Signed-off-by: "Huang, Ying" <ying.huang@xxxxxxxxx>
Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxx>
Cc: Rik van Riel <riel@xxxxxxxxxx>
Cc: Mel Gorman <mgorman@xxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Ingo Molnar <mingo@xxxxxxxxxx>
Cc: Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>
Cc: Dan Williams <dan.j.williams@xxxxxxxxx>
Cc: linux-kernel@xxxxxxxxxxxxxxx
Cc: linux-mm@xxxxxxxxx
---
 include/linux/sched/sysctl.h |  5 +++++
 kernel/sched/core.c          |  9 +++------
 kernel/sysctl.c              |  7 ++++---
 mm/migrate.c                 | 30 +++++++++++++++++++++++++++---
 mm/vmscan.c                  | 15 +++++++++++++++
 5 files changed, 54 insertions(+), 12 deletions(-)

diff --git a/include/linux/sched/sysctl.h b/include/linux/sched/sysctl.h
index 3c31ba88aca5..9d85450bc30a 100644
--- a/include/linux/sched/sysctl.h
+++ b/include/linux/sched/sysctl.h
@@ -39,6 +39,11 @@ enum sched_tunable_scaling {
 };
 extern enum sched_tunable_scaling sysctl_sched_tunable_scaling;
 
+#define NUMA_BALANCING_DISABLED		0x0
+#define NUMA_BALANCING_NORMAL		0x1
+#define NUMA_BALANCING_MEMORY_TIERING	0x2
+
+extern int sysctl_numa_balancing_mode;
 extern unsigned int sysctl_numa_balancing_scan_delay;
 extern unsigned int sysctl_numa_balancing_scan_period_min;
 extern unsigned int sysctl_numa_balancing_scan_period_max;
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index ff74fca39ed2..d6aa29bb9a6c 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3597,6 +3597,7 @@ static void __sched_fork(unsigned long clone_flags, struct task_struct *p)
 }
 
 DEFINE_STATIC_KEY_FALSE(sched_numa_balancing);
+int sysctl_numa_balancing_mode;
 
 #ifdef CONFIG_NUMA_BALANCING
 
@@ -3612,20 +3613,16 @@ void set_numabalancing_state(bool enabled)
 int sysctl_numa_balancing(struct ctl_table *table, int write,
 			  void *buffer, size_t *lenp, loff_t *ppos)
 {
-	struct ctl_table t;
 	int err;
-	int state = static_branch_likely(&sched_numa_balancing);
 
 	if (write && !capable(CAP_SYS_ADMIN))
 		return -EPERM;
 
-	t = *table;
-	t.data = &state;
-	err = proc_dointvec_minmax(&t, write, buffer, lenp, ppos);
+	err = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
 	if (err < 0)
 		return err;
 	if (write)
-		set_numabalancing_state(state);
+		set_numabalancing_state(*(int *)table->data);
 	return err;
 }
 #endif
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index c9fbdd848138..10ddbe5a0326 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -113,6 +113,7 @@ static int sixty = 60;
 
 static int __maybe_unused neg_one = -1;
 static int __maybe_unused two = 2;
+static int __maybe_unused three = 3;
 static int __maybe_unused four = 4;
 static unsigned long zero_ul;
 static unsigned long one_ul = 1;
@@ -1755,12 +1756,12 @@ static struct ctl_table kern_table[] = {
 	},
 	{
 		.procname	= "numa_balancing",
-		.data		= NULL, /* filled in by handler */
-		.maxlen		= sizeof(unsigned int),
+		.data		= &sysctl_numa_balancing_mode,
+		.maxlen		= sizeof(int),
 		.mode		= 0644,
 		.proc_handler	= sysctl_numa_balancing,
 		.extra1		= SYSCTL_ZERO,
-		.extra2		= SYSCTL_ONE,
+		.extra2		= &three,
 	},
 #endif /* CONFIG_NUMA_BALANCING */
 #endif /* CONFIG_SCHED_DEBUG */
diff --git a/mm/migrate.c b/mm/migrate.c
index a6bfc577fc8f..51c3f203a78f 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -50,6 +50,7 @@
 #include <linux/ptrace.h>
 #include <linux/oom.h>
 #include <linux/memory.h>
+#include <linux/sched/sysctl.h>
 
 #include <asm/tlbflush.h>
 
@@ -2139,13 +2140,36 @@ static struct page *alloc_misplaced_dst_page(struct page *page,
 
 static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
 {
-	int page_lru;
+	int page_lru, nr = compound_nr(page), order = compound_order(page);
 
-	VM_BUG_ON_PAGE(compound_order(page) && !PageTransHuge(page), page);
+	VM_BUG_ON_PAGE(order && !PageTransHuge(page), page);
 
 	/* Avoid migrating to a node that is nearly full */
-	if (!migrate_balanced_pgdat(pgdat, compound_nr(page)))
+	if (!migrate_balanced_pgdat(pgdat, nr)) {
+		int migration_node, z;
+		pg_data_t *migration_pgdat;
+
+		if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING) ||
+		    !(node_reclaim_mode & RECLAIM_MIGRATE))
+			return 0;
+		/*
+		 * The slow memory node need to have enough
+		 * free pages to demote the cold pages in the
+		 * fast memory node to it.
+		 */
+		migration_node = next_demotion_node(pgdat->node_id);
+		if (migration_node == NUMA_NO_NODE)
+			return 0;
+		migration_pgdat = NODE_DATA(migration_node);
+		if (!migrate_balanced_pgdat(migration_pgdat, nr))
+			return 0;
+		for (z = pgdat->nr_zones - 1; z >= 0; z--) {
+			if (populated_zone(pgdat->node_zones + z))
+				break;
+		}
+		wakeup_kswapd(pgdat->node_zones + z, 0, order, ZONE_MOVABLE);
 		return 0;
+	}
 
 	if (isolate_lru_page(page))
 		return 0;
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 2f5ec6ffee17..de022fa24bb4 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -56,6 +56,7 @@
 
 #include <linux/swapops.h>
 #include <linux/balloon_compaction.h>
+#include <linux/sched/sysctl.h>
 
 #include "internal.h"
 
@@ -3565,6 +3566,12 @@ static bool pgdat_watermark_boosted(pg_data_t *pgdat, int highest_zoneidx)
 	return false;
 }
 
+/*
+ * Keep the free pages on fast memory node a little more than the high
+ * watermark to accommodate the promoted pages.
+ */
+#define NUMA_BALANCING_ADDON_WATERMARK	(10UL * 1024 * 1024 >> PAGE_SHIFT)
+
 /*
  * Returns true if there is an eligible zone balanced for the request order
  * and highest_zoneidx
@@ -3586,6 +3593,14 @@ static bool pgdat_balanced(pg_data_t *pgdat, int order, int highest_zoneidx)
 			continue;
 
 		mark = high_wmark_pages(zone);
+		if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
+		    next_demotion_node(pgdat->node_id) != NUMA_NO_NODE) {
+			unsigned long addon_mark;
+
+			addon_mark = min(NUMA_BALANCING_ADDON_WATERMARK,
+					 pgdat->node_present_pages >> 6);
+			mark += addon_mark;
+		}
 		if (zone_watermark_ok_safe(zone, order, mark, highest_zoneidx))
 			return true;
 	}
-- 
2.30.1