On Mon, Dec 6, 2021 at 5:19 AM Kirill Tkhai <ktkhai@xxxxxxxxxxxxx> wrote:
>
> On 06.12.2021 13:45, David Hildenbrand wrote:
> >> This doesn't seem complete. Slab shrinkers are used in the reclaim
> >> context. Previously offline nodes could be onlined later and this would
> >> lead to a NULL ptr because there is no hook to allocate new shrinker
> >> infos. This would also be really impractical because this would have to
> >> update all existing memcgs...
> >
> > Instead of going through the trouble of updating...
> >
> > ... maybe just keep for_each_node() and check if the target node is
> > offline. If it's offline, just allocate from the first online node.
> > After all, we're not using __GFP_THISNODE, so there are no guarantees
> > either way ...
>
> Hm, can't we add shrinker maps allocation to __try_online_node() in addition
> to this patch?

I think the fix below (an example; it doesn't cover all the affected
callsites) should be good enough for now. It doesn't touch the hot path
of the page allocator.

diff --git a/mm/vmscan.c b/mm/vmscan.c
index fb9584641ac7..1252a33f7c28 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -222,13 +222,15 @@ static int expand_one_shrinker_info(struct mem_cgroup *memcg,
 	int size = map_size + defer_size;
 
 	for_each_node(nid) {
+		int tmp = nid;
 		pn = memcg->nodeinfo[nid];
 		old = shrinker_info_protected(memcg, nid);
 		/* Not yet online memcg */
 		if (!old)
 			return 0;
-
-		new = kvmalloc_node(sizeof(*new) + size, GFP_KERNEL, nid);
+		if (!node_online(nid))
+			tmp = -1;
+		new = kvmalloc_node(sizeof(*new) + size, GFP_KERNEL, tmp);
 		if (!new)
 			return -ENOMEM;

This code used to use kvmalloc() instead of kvmalloc_node(). Commit
86daf94efb11 ("mm/memcontrol.c: allocate shrinker_map on appropriate
NUMA node") changed it to the *_node() version. The justification was
that "kswapd is always bound to specific node. So allocate shrinker_map
from the related NUMA node to respect its NUMA locality."

There is no kswapd for an offlined node, so just allocate the shrinker
info with no node preference (NUMA_NO_NODE, i.e. -1, as in the patch
above). This is also what alloc_mem_cgroup_per_node_info() does. Making
the memcg per-node data allocation memory hotplug aware should be solved
in a separate patchset IMHO.
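
If more callsites end up needing the same dance, the fallback could be
factored into a tiny helper, along these lines (a sketch only;
shrinker_alloc_node() is a made-up name, not an existing kernel
function):

/*
 * Hypothetical helper (sketch): return a node id that is safe to pass
 * to kvmalloc_node().  An offline node has no memory, so fall back to
 * NUMA_NO_NODE (-1) and let the allocator use the caller's local node.
 */
static inline int shrinker_alloc_node(int nid)
{
	return node_online(nid) ? nid : NUMA_NO_NODE;
}

Each affected callsite would then become a one-line change, e.g.:

	new = kvmalloc_node(sizeof(*new) + size, GFP_KERNEL,
			    shrinker_alloc_node(nid));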
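
For comparison, David's suggestion of falling back to the first online
node rather than NUMA_NO_NODE would look roughly like this (also just a
sketch; first_online_node is the existing nodemask.h helper):

	/* Fall back to the first online node instead of NUMA_NO_NODE. */
	int target = node_online(nid) ? nid : first_online_node;

	new = kvmalloc_node(sizeof(*new) + size, GFP_KERNEL, target);

Since the allocation doesn't use __GFP_THISNODE either way, both
variants only express a node preference, not a hard requirement.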