Since we just add a constant of 1 to the node penalty, it is not
necessary to multiply by MAX_NODE_LOAD for the preference. Remove the
definition as well.

[vbabka@xxxxxxx: suggested]
Signed-off-by: Wei Yang <richard.weiyang@xxxxxxxxx>
CC: Vlastimil Babka <vbabka@xxxxxxx>
CC: David Hildenbrand <david@xxxxxxxxxx>
CC: Oscar Salvador <osalvador@xxxxxxx>
---
 mm/page_alloc.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 86b6573fbeb5..ca6a127bbc26 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6170,7 +6170,6 @@ int numa_zonelist_order_handler(struct ctl_table *table, int write,
 }
 
-#define MAX_NODE_LOAD (nr_online_nodes)
 static int node_load[MAX_NUMNODES];
 
 /**
@@ -6217,7 +6216,7 @@ int find_next_best_node(int node, nodemask_t *used_node_mask)
 		val += PENALTY_FOR_NODE_WITH_CPUS;
 
 	/* Slight preference for less loaded node */
-	val *= (MAX_NODE_LOAD*MAX_NUMNODES);
+	val *= MAX_NUMNODES;
 	val += node_load[n];
 
 	if (val < min_val) {
-- 
2.33.1