On 26.03.20 23:24, Wei Yang wrote:
> Since we always clear used_mask before getting node order, we can
> leverage compiler to do this instead of at run time.
> 
> Signed-off-by: Wei Yang <richard.weiyang@xxxxxxxxx>
> ---
>  mm/page_alloc.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 0e823bca3f2f..2144b6ceb119 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -5587,14 +5587,13 @@ static void build_zonelists(pg_data_t *pgdat)
>  {
>  	static int node_order[MAX_NUMNODES];
>  	int node, load, nr_nodes = 0;
> -	nodemask_t used_mask;
> +	nodemask_t used_mask = {.bits = {0}};
>  	int local_node, prev_node;
>  
>  	/* NUMA-aware ordering of nodes */
>  	local_node = pgdat->node_id;
>  	load = nr_online_nodes;
>  	prev_node = local_node;
> -	nodes_clear(used_mask);
>  
>  	memset(node_order, 0, sizeof(node_order));
>  	while ((node = find_next_best_node(local_node, &used_mask)) >= 0) {
> 

t480s: ~/git/linux default_online_type $ git grep "nodemask_t " | grep "="
arch/x86/mm/numa.c:		nodemask_t reserved_nodemask = NODE_MASK_NONE;
arch/x86/mm/numa_emulation.c:	nodemask_t physnode_mask = numa_nodes_parsed;
arch/x86/mm/numa_emulation.c:	nodemask_t physnode_mask = numa_nodes_parsed;
arch/x86/mm/numa_emulation.c:	nodemask_t physnode_mask = numa_nodes_parsed;
drivers/acpi/numa/srat.c:	static nodemask_t nodes_found_map = NODE_MASK_NONE;
kernel/irq/affinity.c:		nodemask_t nodemsk = NODE_MASK_NONE;
kernel/sched/fair.c:		nodemask_t max_group = NODE_MASK_NONE;
mm/memory_hotplug.c:		nodemask_t nmask = node_states[N_MEMORY];
mm/mempolicy.c:			nodemask_t mems = cpuset_mems_allowed(current);
mm/mempolicy.c:			nodemask_t nodes = NODE_MASK_NONE;
mm/oom_kill.c:			const nodemask_t *mask = oc->nodemask;
mm/page_alloc.c:	nodemask_t node_states[NR_NODE_STATES] __read_mostly = {
mm/page_alloc.c:	nodemask_t saved_node_state = node_states[N_MEMORY];

Should this be NODE_MASK_NONE? As the grep shows, NODE_MASK_NONE is the
idiomatic initializer for an empty nodemask throughout the tree, rather
than open-coding {.bits = {0}}.

-- 
Thanks,

David / dhildenb