From: Huang Ying <ying.huang@xxxxxxxxx>

In a memory tiering system, if the memory size of the workloads is
smaller than that of the faster memory (e.g. DRAM) nodes, all pages of
the workloads can be put in the faster memory nodes and there is no
need to use the slower memory (e.g. PMEM) at all.  So in common cases,
the memory size of the workload should be larger than that of the
faster memory nodes.  And to optimize the performance, the hot pages
should be promoted to the faster memory nodes while the cold pages
should be demoted to the slower memory nodes.  To achieve that, we have
two choices,

a. Promote the hot pages from the slower memory node to the faster
   memory node.  This will create some memory pressure in the faster
   memory node, thus triggering memory reclaim, where the cold pages
   will be demoted to the slower memory node.

b. Demote the cold pages from the faster memory node to the slower
   memory node.  This will create some free memory space in the faster
   memory node, and the hot pages in the slower memory node can then
   be promoted to the faster memory node.

Choice "a" creates memory pressure in the faster memory node.  If the
memory pressure of the workload is high too, the combined memory
pressure may become so high that the memory allocation latency of the
workload is affected, e.g. direct reclaim may be triggered.  Choice "b"
works much better in this respect.  If the memory pressure of the
workload is high, the workload will consume the free memory and hot
page promotion will stop earlier, because its allocation watermark is
higher than that of normal memory allocation.

In this patch, choice "b" is implemented.  If the memory tiering NUMA
balancing mode is enabled, the node isn't the slowest node, and the
free memory size of the node is below the high watermark, the kswapd of
the node will be woken up to free some memory until the free memory
size is above the high watermark + autonuma promotion rate limit.  If
the free memory size is below the high watermark, autonuma promotion
will stop working.  This avoids creating too much memory pressure in
the system.

Signed-off-by: "Huang, Ying" <ying.huang@xxxxxxxxx>
Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxx>
Cc: Rik van Riel <riel@xxxxxxxxxx>
Cc: Mel Gorman <mgorman@xxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Ingo Molnar <mingo@xxxxxxxxxx>
Cc: Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>
Cc: Dan Williams <dan.j.williams@xxxxxxxxx>
Cc: Fengguang Wu <fengguang.wu@xxxxxxxxx>
Cc: linux-kernel@xxxxxxxxxxxxxxx
Cc: linux-mm@xxxxxxxxx
---
 mm/migrate.c | 26 +++++++++++++++++---------
 mm/vmscan.c  |  7 +++++++
 2 files changed, 24 insertions(+), 9 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 8f06bd37d927..dd9e31416c10 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -47,6 +47,7 @@
 #include <linux/page_owner.h>
 #include <linux/sched/mm.h>
 #include <linux/ptrace.h>
+#include <linux/sched/sysctl.h>
 
 #include <asm/tlbflush.h>
 
@@ -1939,8 +1940,7 @@ COMPAT_SYSCALL_DEFINE6(move_pages, pid_t, pid, compat_ulong_t, nr_pages,
  * Returns true if this is a safe migration target node for misplaced NUMA
  * pages. Currently it only checks the watermarks which crude
  */
-static bool migrate_balanced_pgdat(struct pglist_data *pgdat,
-				   unsigned long nr_migrate_pages)
+static bool migrate_balanced_pgdat(struct pglist_data *pgdat, int order)
 {
 	int z;
 
@@ -1951,12 +1951,9 @@ static bool migrate_balanced_pgdat(struct pglist_data *pgdat,
 			continue;
 
 		/* Avoid waking kswapd by allocating pages_to_migrate pages. */
-		if (!zone_watermark_ok(zone, 0,
-				       high_wmark_pages(zone) +
-				       nr_migrate_pages,
-				       ZONE_MOVABLE, 0))
-			continue;
-		return true;
+		if (zone_watermark_ok(zone, order, high_wmark_pages(zone),
+				      ZONE_MOVABLE, 0))
+			return true;
 	}
 	return false;
 }
@@ -1983,8 +1980,19 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
 	VM_BUG_ON_PAGE(compound_order(page) && !PageTransHuge(page), page);
 
 	/* Avoid migrating to a node that is nearly full */
-	if (!migrate_balanced_pgdat(pgdat, 1UL << compound_order(page)))
+	if (!migrate_balanced_pgdat(pgdat, compound_order(page))) {
+		if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING) {
+			int z;
+
+			for (z = pgdat->nr_zones - 1; z >= 0; z--) {
+				if (populated_zone(pgdat->node_zones + z))
+					break;
+			}
+			wakeup_kswapd(pgdat->node_zones + z,
+				      0, compound_order(page), ZONE_MOVABLE);
+		}
 		return 0;
+	}
 
 	if (isolate_lru_page(page))
 		return 0;
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 6867b49ce360..ecc7f66ee2c3 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -57,6 +57,7 @@
 
 #include <linux/swapops.h>
 #include <linux/balloon_compaction.h>
+#include <linux/sched/sysctl.h>
 
 #include "internal.h"
 
@@ -3336,8 +3337,11 @@ static bool pgdat_balanced(pg_data_t *pgdat, int order, int classzone_idx)
 {
 	int i;
 	unsigned long mark = -1;
+	unsigned long promote_ratelimit;
 	struct zone *zone;
 
+	promote_ratelimit = sysctl_numa_balancing_rate_limit <<
+		(20 - PAGE_SHIFT);
 	/*
 	 * Check watermarks bottom-up as lower zones are more likely to
 	 * meet watermarks.
@@ -3349,6 +3353,9 @@ static bool pgdat_balanced(pg_data_t *pgdat, int order, int classzone_idx)
 			continue;
 
 		mark = high_wmark_pages(zone);
+		if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
+		    next_migration_node(pgdat->node_id) != -1)
+			mark += promote_ratelimit;
 		if (zone_watermark_ok_safe(zone, order, mark, classzone_idx))
 			return true;
 	}
-- 
2.23.0
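
To make the watermark arithmetic above easier to follow, here is a minimal
stand-alone user space sketch (not part of the patch) of how the promotion
headroom is derived in pgdat_balanced().  It assumes 4KB pages and treats
sysctl_numa_balancing_rate_limit as an MB value, as implied by the
"<< (20 - PAGE_SHIFT)" conversion; the concrete numbers are illustrative
assumptions only.

/* sketch of the promotion headroom / kswapd target arithmetic */
#include <stdio.h>

#define PAGE_SHIFT	12			/* assumed 4KB pages */

int main(void)
{
	unsigned long high_wmark = 16384;	/* high watermark, pages (example) */
	unsigned long rate_limit = 100;		/* rate limit, MB (example) */

	/* MB -> pages, same conversion as in pgdat_balanced() */
	unsigned long promote_ratelimit = rate_limit << (20 - PAGE_SHIFT);

	/*
	 * On a fast memory node that still has a slower node to demote to,
	 * kswapd reclaims until free memory exceeds this raised mark,
	 * leaving headroom for promoted hot pages; promotion itself stops
	 * once free memory drops below the plain high watermark.
	 */
	printf("promotion headroom: %lu pages\n", promote_ratelimit);
	printf("kswapd reclaim target: %lu pages free\n",
	       high_wmark + promote_ratelimit);
	return 0;
}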