On Wed 03-01-24 08:48:37, Dan Schatzberg wrote:
[...]
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index d91963e2d47f..394e0dd46b2e 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -92,6 +92,11 @@ struct scan_control {
>  	unsigned long	anon_cost;
>  	unsigned long	file_cost;
>
> +#ifdef CONFIG_MEMCG
> +	/* Swappiness value for proactive reclaim. Always use sc_swappiness()! */
> +	int *proactive_swappiness;
> +#endif
> +
>  	/* Can active folios be deactivated as part of reclaim? */
>  #define DEACTIVATE_ANON 1
>  #define DEACTIVATE_FILE 2
> @@ -227,6 +232,13 @@ static bool writeback_throttling_sane(struct scan_control *sc)
>  #endif
>  	return false;
>  }
> +
> +static int sc_swappiness(struct scan_control *sc, struct mem_cgroup *memcg)
> +{
> +	if (sc->proactive && sc->proactive_swappiness)
> +		return *sc->proactive_swappiness;
> +	return mem_cgroup_swappiness(memcg);
> +}

If you really want to make this bound to sc->proactive then do not use
CONFIG_MEMCG, as sc->proactive is not guarded either.

I do not think the sc->proactive check is really necessary. A pure NULL
check is sufficient for valid, self-evident code that is future proof.
But TBH this is not the most important aspect of the patch to spend much
more time discussing. Either go with sc->proactive but make it
consistent in config space, or simply rely on the NULL check (with or
without the MEMCG guard, as both are valid options).
--
Michal Hocko
SUSE Labs