Hi Junil,

On Fri, Jan 15, 2016 at 09:36:24AM +0900, Junil Lee wrote:
> To prevent the unlock from happening in the wrong situation, tag the
> new obj so that the lock stays held in migrate_zspage() until the
> right unlock path.
>
> The two functions race via the tag, which sets the last bit of obj;
> however, the handle is unlocked concurrently when the new obj is
> stored to it before unpin_tag(), the right unlock path, is called.
>
> The problem is summarized by the call flow below:
>
> CPU0                                     CPU1
> migrate_zspage
> find_alloced_obj()
> trypin_tag() -- obj |= HANDLE_PIN_BIT
> obj_malloc() -- new obj is not set       zs_free
> record_obj() -- unlock and break sync    pin_tag() -- get lock
> unpin_tag()

Really good catch! I think it should be stable material. For that, we
need to know what kind of problem this patch fixes. What problem did
you see? I mean, please write down the oops you hit and verify that the
patch fixes it. :)

A minor nit below.

> Signed-off-by: Junil Lee <junil0814.lee@xxxxxxx>
> ---
>  mm/zsmalloc.c | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index e7414ce..bb459ef 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -1635,6 +1635,7 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
>  		free_obj = obj_malloc(d_page, class, handle);
>  		zs_object_copy(free_obj, used_obj, class);
>  		index++;
> +		free_obj |= BIT(HANDLE_PIN_BIT);
>  		record_obj(handle, free_obj);

I think record_obj() should store free_obj to *handle with the least
bit masked off. IOW, how about this?

record_obj(handle, obj)
{
        *(unsigned long *)handle = obj & ~(1UL << HANDLE_PIN_BIT);
}

Thanks a lot!
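P.S. In case it helps the changelog, below is a minimal userspace
sketch of the pin-bit discipline under discussion. It is my own
illustration using C11 atomics, not kernel code: the names mirror
zsmalloc's trypin_tag()/unpin_tag(), and the object values are made up.
It shows why a plain record_obj() store of a value whose lsb is clear
implicitly drops the lock, and why setting HANDLE_PIN_BIT on free_obj
before the store keeps the handle pinned until the explicit
unpin_tag().

/*
 * Sketch only: the handle's lsb acts as a lock bit, so a plain store
 * of a value whose lsb is clear releases the lock as a side effect.
 */
#include <stdatomic.h>
#include <stdio.h>

#define HANDLE_PIN_BIT	0
#define PIN_MASK	(1UL << HANDLE_PIN_BIT)

static _Atomic unsigned long handle_val;	/* stands in for *handle */

/* Try to take the lock: set the lsb if it was clear. */
static int trypin_tag(void)
{
	unsigned long old = atomic_load(&handle_val);

	if (old & PIN_MASK)
		return 0;
	return atomic_compare_exchange_strong(&handle_val, &old,
					      old | PIN_MASK);
}

/* Release the lock by clearing the lsb. */
static void unpin_tag(void)
{
	atomic_fetch_and(&handle_val, ~PIN_MASK);
}

int main(void)
{
	unsigned long free_obj = 0x1000;	/* new obj, lsb clear */

	atomic_store(&handle_val, 0x2000);	/* old obj, unpinned */
	trypin_tag();				/* CPU0 pins the handle */

	/*
	 * Buggy record_obj(): stores free_obj with the lsb clear, so
	 * the handle is already unlocked here and a concurrent
	 * pin_tag() in zs_free() can succeed before CPU0 reaches its
	 * explicit unpin_tag().
	 */
	atomic_store(&handle_val, free_obj);
	printf("after buggy store, pinned? %d\n",
	       !!(atomic_load(&handle_val) & PIN_MASK));	/* 0 */

	/*
	 * Patched path: OR in the pin bit before the store, so only
	 * the explicit unpin_tag() below releases the lock.
	 */
	trypin_tag();
	atomic_store(&handle_val, free_obj | PIN_MASK);
	printf("after fixed store, pinned? %d\n",
	       !!(atomic_load(&handle_val) & PIN_MASK));	/* 1 */
	unpin_tag();

	return 0;
}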