On Tue 28-11-23 12:48:53, Kent Overstreet wrote:
> On Tue, Nov 28, 2023 at 11:01:16AM +0100, Michal Hocko wrote:
> > On Wed 22-11-23 18:25:07, Kent Overstreet wrote:
> > [...]
> > > +void shrinkers_to_text(struct seq_buf *out)
> > > +{
> > > +	struct shrinker *shrinker;
> > > +	struct shrinker_by_mem {
> > > +		struct shrinker	*shrinker;
> > > +		unsigned long	mem;
> > > +	} shrinkers_by_mem[10];
> > > +	int i, nr = 0;
> > > +
> > > +	if (!mutex_trylock(&shrinker_mutex)) {
> > > +		seq_buf_puts(out, "(couldn't take shrinker lock)");
> > > +		return;
> > > +	}
> > > +
> > > +	list_for_each_entry(shrinker, &shrinker_list, list) {
> > > +		struct shrink_control sc = { .gfp_mask = GFP_KERNEL, };
> > 
> > This seems to be global reclaim specific. What about memcg reclaim?
> 
> I have no fsckin idea how memcg reclaim works - and, for that matter,
> the recent lockless shrinking work seems to have neglected to write even
> an iterator macro, leaving _that_ a nasty mess, so I'm not touching that
> either.

OK, but you could have made it clearer that all of this is aimed at the
global OOM handling, with an outlook on what should be done if memcg
awareness is ever required.

Another thing you want to look into is a NUMA constrained OOM (mbind,
cpuset), where this output could be actively misleading.
-- 
Michal Hocko
SUSE Labs
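
To illustrate the NUMA point above: a constrained OOM report would have to
query count_objects() per eligible node via shrink_control::nid rather than
with a default-initialized shrink_control. The helper below is an untested
sketch, not part of the posted patch; shrinker_count_nodes() and the
eligible_nodes argument are made-up names for illustration, while
shrink_control, SHRINKER_NUMA_AWARE and SHRINK_EMPTY are the existing
shrinker API.

#include <linux/shrinker.h>
#include <linux/nodemask.h>

/*
 * Untested sketch: count a shrinker's freeable objects only on the
 * nodes a constrained allocation (mbind, cpuset) is allowed to use.
 */
static unsigned long shrinker_count_nodes(struct shrinker *shrinker,
					  nodemask_t *eligible_nodes)
{
	unsigned long count = 0;
	int nid;

	/* !SHRINKER_NUMA_AWARE shrinkers account everything on node 0 */
	if (!(shrinker->flags & SHRINKER_NUMA_AWARE)) {
		struct shrink_control sc = {
			.gfp_mask	= GFP_KERNEL,
			.nid		= 0,
		};
		unsigned long ret = shrinker->count_objects(shrinker, &sc);

		return ret == SHRINK_EMPTY ? 0 : ret;
	}

	for_each_node_mask(nid, *eligible_nodes) {
		struct shrink_control sc = {
			.gfp_mask	= GFP_KERNEL,
			.nid		= nid,
		};
		unsigned long ret = shrinker->count_objects(shrinker, &sc);

		if (ret != SHRINK_EMPTY)
			count += ret;
	}

	return count;
}

A top-N table like the one in shrinkers_to_text() could then be ranked by
this per-node count (with an analogous per-memcg walk for the memcg case),
so that the report reflects what the constrained allocation can actually
reclaim.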