On (01/15/16 12:27), Sergey Senozhatsky wrote:
> > > @@ -1635,6 +1635,7 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
> > >  		free_obj = obj_malloc(d_page, class, handle);
> > >  		zs_object_copy(free_obj, used_obj, class);
> > >  		index++;
> > > +		free_obj |= BIT(HANDLE_PIN_BIT);
> > >  		record_obj(handle, free_obj);
> >
> > I think record_obj should store free_obj to *handle with the least bit
> > masked off. IOW, how about this?
> >
> > static void record_obj(unsigned long handle, unsigned long obj)
> > {
> > 	*(unsigned long *)handle = obj & ~(1 << HANDLE_PIN_BIT);
> > }
>
> [just a wild idea]
>
> or zs_free() can take spin_lock(&class->lock) earlier; it cannot free the
> object until the class is locked anyway, and migration is happening with
> the locked class. extending class->lock scope in zs_free() thus should
> not affect the performance. so it'll be either zs_free() touching the
> object or the migration, not both.

a small correction to myself: it's zs_free() that currently reads the
handle with the class UNlocked; migration always runs with class->lock
held. hence the suggestion to take class->lock earlier in zs_free().

	-ss
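
p.s. a rough sketch of what I mean (completely untested, the helper names
are from my memory of mm/zsmalloc.c of that time, and the stats/accounting
in the ZS_EMPTY path is left out), just to show the lock reordering:

void zs_free(struct zs_pool *pool, unsigned long handle)
{
	struct page *first_page, *f_page;
	unsigned long obj, f_objidx;
	int class_idx;
	struct size_class *class;
	enum fullness_group fullness;

	if (unlikely(!handle))
		return;

	/*
	 * compaction migrates objects only within the same size class, so
	 * the class lookup can be done before taking class->lock; only the
	 * object location may change under us
	 */
	obj = handle_to_obj(handle);
	obj_to_location(obj, &f_page, &f_objidx);
	first_page = get_first_page(f_page);
	get_zspage_mapping(first_page, &class_idx, &fullness);
	class = pool->size_class[class_idx];

	/*
	 * take class->lock before pinning the handle: same ordering as
	 * compaction, which pins tags while holding class->lock
	 */
	spin_lock(&class->lock);
	pin_tag(handle);

	/*
	 * migrate_zspage() runs under class->lock, so from this point
	 * *handle can't change; re-read the (possibly new) location
	 */
	obj = handle_to_obj(handle);
	obj_to_location(obj, &f_page, &f_objidx);
	first_page = get_first_page(f_page);

	obj_free(pool, class, obj);
	fullness = fix_fullness_group(class, first_page);
	if (fullness == ZS_EMPTY) {
		/* stats and pages_allocated accounting dropped for brevity */
		free_zspage(first_page);
	}
	spin_unlock(&class->lock);

	unpin_tag(handle);
	free_handle(pool, handle);
}

the unlocked walk at the top is only used to find the class; the object
location is re-read once the class is locked, so even if compaction moved
the object in between we still free the right one.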