On 11/24/2017 05:34 AM, Minchan Kim wrote:
> Shakeel Butt reported that he has observed in a production system that
> the job loader gets stuck for tens of seconds while doing a mount
> operation. It turns out that it was stuck in register_shrinker()
> while some unrelated job was under memory pressure and spending time
> in shrink_slab(). Machines have a lot of shrinkers registered, and
> jobs under memory pressure have to traverse all of those memcg-aware
> shrinkers, which affects unrelated jobs that want to register their
> own shrinkers.
>
> To solve the issue, this patch simply bails out of slab shrinking
> once it finds that someone wants to register a shrinker in parallel.
> A downside is that it could cause unfair shrinking between shrinkers.
> However, that should be rare, and we can add more complicated logic
> once we find this is not enough.
>
> Link: http://lkml.kernel.org/r/20171115005602.GB23810@bbox
> Cc: Michal Hocko <mhocko@xxxxxxxx>
> Cc: Tetsuo Handa <penguin-kernel@xxxxxxxxxxxxxxxxxxx>
> Acked-by: Johannes Weiner <hannes@xxxxxxxxxxx>
> Reported-and-tested-by: Shakeel Butt <shakeelb@xxxxxxxxxx>
> Signed-off-by: Shakeel Butt <shakeelb@xxxxxxxxxx>
> Signed-off-by: Minchan Kim <minchan@xxxxxxxxxx>
> ---
>  mm/vmscan.c | 8 ++++++++
>  1 file changed, 8 insertions(+)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 6a5a72baccd5..6698001787bd 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -486,6 +486,14 @@ static unsigned long shrink_slab(gfp_t gfp_mask, int nid,
> 		sc.nid = 0;
>
> 		freed += do_shrink_slab(&sc, shrinker, priority);
> +		/*
> +		 * Bail out if someone wants to register a new shrinker, to
> +		 * prevent a long stall caused by parallel ongoing shrinking.
> +		 */
> +		if (rwsem_is_contended(&shrinker_rwsem)) {
> +			freed = freed ? : 1;
> +			break;
> +		}

This is similar to the case where shrink_slab() aborts because it cannot
grab shrinker_rwsem at the beginning:
	if (!down_read_trylock(&shrinker_rwsem)) {
		/*
		 * If we would return 0, our callers would understand that we
		 * have nothing else to shrink and give up trying. By returning
		 * 1 we keep it going and assume we'll be able to shrink next
		 * time.
		 */
		freed = 1;
		goto out;
	}

Right now, shrink_slab() is called from three places: twice in
shrink_node() and once in drop_slab_node(). But the return value of
shrink_slab() is checked only inside drop_slab_node(), which uses it in
a heuristic to decide whether to keep scanning over the registered
memcg-aware shrinkers.

The question is: will aborting here still guarantee forward progress
for all the contexts that might be attempting to allocate memory and
have eventually invoked shrink_slab()? Perhaps the memory allocation
request has higher priority than a context getting delayed a bit while
stuck waiting on shrinker_rwsem.