On Wed, Feb 26, 2020 at 4:15 AM Hillf Danton <hdanton@xxxxxxxx> wrote:
>
> On Tue, 25 Feb 2020 14:30:03 -0800 Shakeel Butt wrote:
> >
> > BTW we are seeing a similar situation in our production environment.
> > We have swappiness=0, no swap from kswapd (because we don't swap out
> > on pressure, only on cold age) and too few file pages; kswapd goes
> > crazy in shrink_slab and spends 100% CPU there.
>
> Dunno if swappiness is able to put peace on your kswapd.
>
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -2631,8 +2631,14 @@ static inline bool should_continue_recla
>  	 */
>  	pages_for_compaction = compact_gap(sc->order);
>  	inactive_lru_pages = node_page_state(pgdat, NR_INACTIVE_FILE);
> -	if (get_nr_swap_pages() > 0)
> -		inactive_lru_pages += node_page_state(pgdat, NR_INACTIVE_ANON);
> +	do {
> +		struct lruvec *lruvec = mem_cgroup_lruvec(sc->target_mem_cgroup, pgdat);
> +		struct mem_cgroup *memcg = lruvec_memcg(lruvec);
> +		int swappiness = mem_cgroup_swappiness(memcg);
> +
> +		if (swappiness && get_nr_swap_pages() > 0)

Thanks for finding this. I think we also need to check sc->may_swap.
Can you please send a signed-off patch? It may or may not help kswapd,
but I think this is needed.

> +			inactive_lru_pages += node_page_state(pgdat, NR_INACTIVE_ANON);
> +	} while (0);
>
>  	return inactive_lru_pages > pages_for_compaction;
>  }