Cc Andrew,

On (01/15/16 11:35), Minchan Kim wrote:
[..]
> > Signed-off-by: Junil Lee <junil0814.lee@xxxxxxx>
> > ---
> >  mm/zsmalloc.c | 1 +
> >  1 file changed, 1 insertion(+)
> >
> > diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> > index e7414ce..bb459ef 100644
> > --- a/mm/zsmalloc.c
> > +++ b/mm/zsmalloc.c
> > @@ -1635,6 +1635,7 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
> >  		free_obj = obj_malloc(d_page, class, handle);
> >  		zs_object_copy(free_obj, used_obj, class);
> >  		index++;
> > +		free_obj |= BIT(HANDLE_PIN_BIT);
> >  		record_obj(handle, free_obj);
>
> I think record_obj() should store free_obj to *handle with the least
> significant bit masked off. IOW, how about this?
>
> record_obj(handle, obj)
> {
> 	*(unsigned long *)handle = obj & ~(1 << HANDLE_PIN_BIT);
> }

[just a wild idea]

Alternatively, zs_free() can take spin_lock(&class->lock) earlier: it
cannot free the object until the class is locked anyway, and migration
runs with the class locked, so extending the class->lock scope in
zs_free() should not affect performance. That way either zs_free()
touches the object or migration does, never both.
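To make that a bit more concrete, here is a rough sketch of the idea
(based on my recollection of zs_free() in mm/zsmalloc.c around v4.4;
helper names and the surrounding code are approximate, and this is not
a tested patch):

void zs_free(struct zs_pool *pool, unsigned long handle)
{
	struct page *first_page, *f_page;
	unsigned long obj, f_objidx;
	int class_idx;
	struct size_class *class;
	enum fullness_group fullness;

	if (unlikely(!handle))
		return;

	/*
	 * Racy read of the handle, used only to find the class;
	 * migration never moves an object to a different size class,
	 * so the class lookup is stable even if the handle changes.
	 */
	obj = handle_to_obj(handle);
	obj_to_location(obj, &f_page, &f_objidx);
	first_page = get_first_page(f_page);
	get_zspage_mapping(first_page, &class_idx, &fullness);
	class = pool->size_class[class_idx];

	/*
	 * Take class->lock before pinning the handle: migration runs
	 * with class->lock held, so from here on either zs_free() or
	 * migration touches the object, never both.
	 */
	spin_lock(&class->lock);
	pin_tag(handle);
	obj = handle_to_obj(handle);	/* re-read, stable under class->lock */
	obj_to_location(obj, &f_page, &f_objidx);
	first_page = get_first_page(f_page);

	obj_free(pool, class, obj);
	fullness = fix_fullness_group(class, first_page);
	if (fullness == ZS_EMPTY)
		free_zspage(first_page);	/* stats updates omitted */
	spin_unlock(&class->lock);
	unpin_tag(handle);
	free_handle(pool, handle);
}

	-ss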