On Thu, Jun 04, 2020 at 03:42:55PM +0200, Uladzislau Rezki wrote:
> On Thu, Jun 04, 2020 at 12:23:20PM +0200, Peter Enderborg wrote:
> > The count and scan can be separated in time. There is a fair chance
> > that all the work is already done when the scan starts, in which case
> > it might retry. This can be avoided by returning SHRINK_STOP.
> >
> > Signed-off-by: Peter Enderborg <peter.enderborg@xxxxxxxx>
> > ---
> >  kernel/rcu/tree.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> > index c716eadc7617..8b36c6b2887d 100644
> > --- a/kernel/rcu/tree.c
> > +++ b/kernel/rcu/tree.c
> > @@ -3310,7 +3310,7 @@ kfree_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
> >  			break;
> >  	}
> >
> > -	return freed;
> > +	return freed == 0 ? SHRINK_STOP : freed;
> >  }
> >
> The loop will be stopped anyway sooner or later, but sooner is better :)
> To me that change makes sense.
>
> Reviewed-by: Uladzislau Rezki (Sony) <urezki@xxxxxxxxx>

Queued, thank you both!

							Thanx, Paul
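
For readers less familiar with the shrinker API, below is a minimal sketch of
the count/scan pattern the fix relies on. The names my_count, my_scan,
nr_cached and do_free() are illustrative only, and it assumes the
register_shrinker() interface that was current in v5.x kernels:

#include <linux/shrinker.h>
#include <linux/atomic.h>

static atomic_long_t nr_cached;	/* objects we could free (illustrative) */

/* Called first: report roughly how many objects are reclaimable. */
static unsigned long my_count(struct shrinker *shrink,
			      struct shrink_control *sc)
{
	return atomic_long_read(&nr_cached);
}

/* Called later: try to free up to sc->nr_to_scan objects.  The count and
 * the scan are separated in time, so the objects counted above may
 * already be gone by the time this runs. */
static unsigned long my_scan(struct shrinker *shrink,
			     struct shrink_control *sc)
{
	unsigned long freed = do_free(sc->nr_to_scan);	/* illustrative helper */

	/* Returning 0 would let the shrinker core keep retrying this
	 * shrinker; SHRINK_STOP tells it there is nothing more to do
	 * right now, so it moves on. */
	return freed == 0 ? SHRINK_STOP : freed;
}

static struct shrinker my_shrinker = {
	.count_objects	= my_count,
	.scan_objects	= my_scan,
	.seeks		= DEFAULT_SEEKS,
};

/* In init code: register_shrinker(&my_shrinker); */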