[Cc David. I have only now noticed he has replied to this thread already
pointing out the offline->online case]

On Mon 06-12-21 10:23:00, Michal Hocko wrote:
> On Sun 05-12-21 22:33:38, Nico Pache wrote:
> > We have run into a panic caused by a shrinker allocation being attempted
> > on an offlined node.
> >
> > Our crash analysis has determined that the issue originates from trying
> > to allocate pages on an offlined node in expand_one_shrinker_info. This
> > function makes the incorrect assumption that we can allocate on any node.
> > To correct this we make sure we only iterate over online nodes.
> >
> > This assumption can lead to an incorrect address being assigned to
> > ac->zonelist in the following callchain:
> >     __alloc_pages
> >     -> prepare_alloc_pages
> >     -> node_zonelist
> >
> > static inline struct zonelist *node_zonelist(int nid, gfp_t flags)
> > {
> >     return NODE_DATA(nid)->node_zonelists + gfp_zonelist(flags);
> > }
> >
> > If the node is not online, the return of node_zonelist will evaluate to
> > an invalid pointer of 0x00000 + offsetof(node_zonelists) + (1|0)
> >
> > This address is then dereferenced further down the callchain in:
> >     prepare_alloc_pages
> >     -> first_zones_zonelist
> >     -> next_zones_zonelist
> >     -> zonelist_zone_idx
> >
> > static inline int zonelist_zone_idx(struct zoneref *zoneref)
> > {
> >     return zoneref->zone_idx;
> > }
> >
> > This leads the system to panic.
>
> Thanks for the analysis! Please also add an oops report so that this is
> easier to search for. It would also be interesting to see specifics
> about the issue. Why was the specific node !online in the first place?
> What architecture was this on?
>
> > We also correct this behavior in alloc_shrinker_info, free_shrinker_info,
> > and reparent_shrinker_deferred.
> >
> > Fixes: 2bfd36374edd ("mm: vmscan: consolidate shrinker_maps handling code")
> > Fixes: 0a4465d34028 ("mm, memcg: assign memcg-aware shrinkers bitmap to memcg")
>
> Normally I would split the fix as it is fixing two issues, one introduced
> in 4.19 and the other in 5.13.
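As an aside, the bad pointer described in the analysis above can be mimicked
with a minimal user-space sketch. The struct below is a made-up stand-in for
pg_data_t (the real layout differs); it only reproduces the offset arithmetic
node_zonelist() ends up doing when NODE_DATA(nid) is NULL for a never-onlined
node:

    #include <stddef.h>
    #include <stdio.h>

    /* Made-up stand-in for pg_data_t; the real layout differs, only the
     * offset arithmetic matters here. */
    struct fake_pgdat {
            long fields_before[4];
            void *node_zonelists[2];
    };

    int main(void)
    {
            /* NODE_DATA(nid) is NULL for a node that was never initialized,
             * so node_zonelist() effectively computes
             *   NULL + offsetof(pg_data_t, node_zonelists) + gfp_zonelist(flags)
             */
            size_t bogus = offsetof(struct fake_pgdat, node_zonelists)
                            + 1 * sizeof(void *);

            /* The result is small and non-NULL, so it sails past NULL checks;
             * the later zoneref->zone_idx dereference in zonelist_zone_idx()
             * is what actually faults. */
            printf("would-be zonelist address: 0x%zx\n", bogus);
            return 0;
    }

The point being that the bogus value is never NULL itself, which is why the
failure only shows up at the first dereference further down the callchain.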
> > Signed-off-by: Nico Pache <npache@xxxxxxxxxx>
> > ---
> >  mm/vmscan.c | 8 ++++----
> >  1 file changed, 4 insertions(+), 4 deletions(-)
> >
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index fb9584641ac7..731564b61e3f 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -221,7 +221,7 @@ static int expand_one_shrinker_info(struct mem_cgroup *memcg,
> >  	int nid;
> >  	int size = map_size + defer_size;
> >
> > -	for_each_node(nid) {
> > +	for_each_online_node(nid) {
> >  		pn = memcg->nodeinfo[nid];
> >  		old = shrinker_info_protected(memcg, nid);
> >  		/* Not yet online memcg */
> > @@ -256,7 +256,7 @@ void free_shrinker_info(struct mem_cgroup *memcg)
> >  	struct shrinker_info *info;
> >  	int nid;
> >
> > -	for_each_node(nid) {
> > +	for_each_online_node(nid) {
> >  		pn = memcg->nodeinfo[nid];
> >  		info = rcu_dereference_protected(pn->shrinker_info, true);
> >  		kvfree(info);
> > @@ -274,7 +274,7 @@ int alloc_shrinker_info(struct mem_cgroup *memcg)
> >  	map_size = shrinker_map_size(shrinker_nr_max);
> >  	defer_size = shrinker_defer_size(shrinker_nr_max);
> >  	size = map_size + defer_size;
> > -	for_each_node(nid) {
> > +	for_each_online_node(nid) {
> >  		info = kvzalloc_node(sizeof(*info) + size, GFP_KERNEL, nid);
> >  		if (!info) {
> >  			free_shrinker_info(memcg);
> > @@ -417,7 +417,7 @@ void reparent_shrinker_deferred(struct mem_cgroup *memcg)
> >
> >  	/* Prevent from concurrent shrinker_info expand */
> >  	down_read(&shrinker_rwsem);
> > -	for_each_node(nid) {
> > +	for_each_online_node(nid) {
> >  		child_info = shrinker_info_protected(memcg, nid);
> >  		parent_info = shrinker_info_protected(parent, nid);
> >  		for (i = 0; i < shrinker_nr_max; i++) {
> > --
> > 2.33.1
>
> This doesn't seem complete. Slab shrinkers are used in the reclaim
> context. Previously offline nodes could be onlined later and this would
> lead to a NULL ptr dereference because there is no hook to allocate new
> shrinker infos. This would also be really impractical because it would
> have to update all existing memcgs...
>
> To be completely honest I am not really sure this is a practical problem
> because some architectures allocate (aka make online) all possible nodes
> reported by the platform. There are major inconsistencies there. Maybe
> that should be unified, so that problems like this one do not really
> have to add complexity to the code.
> --
> Michal Hocko
> SUSE Labs

--
Michal Hocko
SUSE Labs
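To make the missing piece above concrete, here is a rough, purely hypothetical
sketch of the kind of memory hotplug hook that would be needed if shrinker_info
were only allocated for online nodes. It is assumed to sit in mm/vmscan.c next
to the existing helpers; alloc_shrinker_info_node() does not exist today and is
invented here only for illustration. The walk over every existing memcg is
exactly the impractical part being pointed out.

    #include <linux/memory.h>
    #include <linux/memcontrol.h>
    #include <linux/notifier.h>

    /* Hypothetical sketch only, not a proposed patch.  Assumes an
     * alloc_shrinker_info_node(memcg, nid) helper (not in the tree today)
     * that allocates the map + deferred counters for a single node. */
    static int shrinker_memory_callback(struct notifier_block *self,
                                        unsigned long action, void *arg)
    {
            struct memory_notify *mn = arg;
            struct mem_cgroup *memcg;
            int nid = mn->status_change_nid;
            int ret = 0;

            if (action != MEM_GOING_ONLINE || nid < 0)
                    return NOTIFY_OK;

            /* Every existing memcg would have to gain a shrinker_info for
             * the newly onlined node -- this walk is what makes the
             * approach so unattractive. */
            down_write(&shrinker_rwsem);
            memcg = mem_cgroup_iter(NULL, NULL, NULL);
            do {
                    ret = alloc_shrinker_info_node(memcg, nid);
                    if (ret) {
                            mem_cgroup_iter_break(NULL, memcg);
                            break;
                    }
            } while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)) != NULL);
            up_write(&shrinker_rwsem);

            return ret ? notifier_from_errno(ret) : NOTIFY_OK;
    }

    /* Would be registered once during init, e.g. via
     * register_memory_notifier(&shrinker_memory_nb). */
    static struct notifier_block shrinker_memory_nb = {
            .notifier_call = shrinker_memory_callback,
    };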