Hi Sergey,

On Wed, Jul 15, 2015 at 01:07:03PM +0900, Sergey Senozhatsky wrote:
> On (07/11/15 18:45), Sergey Senozhatsky wrote:
> [..]
> > We re-do these calculations during compaction on a per-class basis
> > anyway.
> >
> > zs_unregister_shrinker() will not return while we have an active
> > shrinker, so classes won't unexpectedly disappear while
> > zs_pages_to_compact(), invoked by zs_shrinker_count(), iterates
> > them.
> >
> > When called from zram, we are protected by zram's ->init_lock,
> > so, again, classes will be there until zs_pages_to_compact()
> > iterates them.
> >
> > Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@xxxxxxxxx>
> > ---
> >  mm/zsmalloc.c | 2 --
> >  1 file changed, 2 deletions(-)
> >
> > diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> > index b10a228..824c182 100644
> > --- a/mm/zsmalloc.c
> > +++ b/mm/zsmalloc.c
> > @@ -1811,9 +1811,7 @@ unsigned long zs_pages_to_compact(struct zs_pool *pool)
> >  		if (class->index != i)
> >  			continue;
> >
> > -		spin_lock(&class->lock);
> >  		pages_to_free += zs_can_compact(class);
> > -		spin_unlock(&class->lock);
> >  	}
> >
> >  	return pages_to_free;
>
> This patch still makes sense. Agree?

There is already a race window between shrink_count and shrink_slab, so
it is okay to return a stale stat by removing the lock, as long as the
difference is not huge. Moreover, we currently don't obey the shrinker's
nr_to_scan in zs_shrinker_scan, so such accuracy would be pointless anyway.

Please resend the patch and correct zs_can_compact's comment.

Thanks.
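
For reference, the post-patch zs_pages_to_compact() would look roughly like
the sketch below, reconstructed from the quoted diff context. Only the hunk
body above is verbatim; the loop shape and the zs_size_classes /
pool->size_class names are assumptions about the surrounding zsmalloc code
of that era, not copied from the tree:

	unsigned long zs_pages_to_compact(struct zs_pool *pool)
	{
		int i;
		struct size_class *class;
		unsigned long pages_to_free = 0;

		/* assumed loop shape around the quoted hunk */
		for (i = zs_size_classes - 1; i >= 0; i--) {
			class = pool->size_class[i];
			if (!class)
				continue;
			/* skip size classes merged into another index */
			if (class->index != i)
				continue;

			/* unlocked read: a stale estimate is acceptable here */
			pages_to_free += zs_can_compact(class);
		}

		return pages_to_free;
	}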