To prevent unlocking at the wrong point, tag the new obj so that the lock is held in migrate_zspage() until the proper unlock path. migrate_zspage() and zs_free() synchronize through a tag that sets bit 0 (HANDLE_PIN_BIT) of the obj value, but the handle is unlocked prematurely: record_obj() stores the new obj, whose pin bit is clear, into the handle before unpin_tag(), the proper unlock path, is called. A concurrent zs_free() can then pin the handle and free the object while migration is still using it.

The problem is summarized by the call flow below:

CPU0                                            CPU1

migrate_zspage()
  find_alloced_obj()
    trypin_tag()   -- obj |= HANDLE_PIN_BIT
  obj_malloc()     -- new obj, pin bit not set
                                                zs_free()
  record_obj()     -- unlocks, breaking sync
                                                pin_tag() -- gets the lock
  unpin_tag()

Fix this by setting HANDLE_PIN_BIT on the new obj before record_obj(), so the handle stays pinned until unpin_tag().

Signed-off-by: Junil Lee <junil0814.lee@xxxxxxx>
---
 mm/zsmalloc.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index e7414ce..bb459ef 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1635,6 +1635,7 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
 		free_obj = obj_malloc(d_page, class, handle);
 		zs_object_copy(free_obj, used_obj, class);
 		index++;
+		free_obj |= BIT(HANDLE_PIN_BIT);
 		record_obj(handle, free_obj);
 		unpin_tag(handle);
 		obj_free(pool, class, used_obj);
-- 
2.6.2
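
For readers following along, here is a minimal userspace model of why the store order matters. This is an illustration, not kernel code: the model_* names are assumptions, and the real pin_tag()/trypin_tag()/unpin_tag() are bit spinlocks on the handle word in mm/zsmalloc.c. The point it demonstrates is that record_obj() is effectively a plain store, so storing an obj whose HANDLE_PIN_BIT is clear is indistinguishable from an unlock:

#include <stdatomic.h>
#include <stdio.h>

#define HANDLE_PIN_BIT	0	/* bit 0 of the obj value acts as the lock */
#define PIN		(1UL << HANDLE_PIN_BIT)

static _Atomic unsigned long handle;

/* Model of trypin_tag(): acquire only if the pin bit is currently clear. */
static int model_trypin_tag(void)
{
	unsigned long old = atomic_load(&handle);

	do {
		if (old & PIN)
			return 0;	/* already pinned */
	} while (!atomic_compare_exchange_weak(&handle, &old, old | PIN));
	return 1;
}

/* Model of record_obj(): a plain store of the new obj into the handle. */
static void model_record_obj(unsigned long obj)
{
	atomic_store(&handle, obj);
}

int main(void)
{
	unsigned long free_obj = 0x1000;	/* new obj, pin bit clear */

	/* CPU0: find_alloced_obj() -> trypin_tag() pins the old obj. */
	model_trypin_tag();

	/* Buggy order: the store clears the pin bit, i.e. unlocks early. */
	model_record_obj(free_obj);
	printf("buggy: racing trypin_tag() -> %d (1 means zs_free can run)\n",
	       model_trypin_tag());

	/* Fixed order: set the pin bit on the new obj before the store. */
	model_record_obj(free_obj | PIN);
	printf("fixed: racing trypin_tag() -> %d (0 means still locked)\n",
	       model_trypin_tag());
	return 0;
}

Built with cc -std=c11, this prints 1 for the buggy order (a racing trypin_tag() succeeds, so zs_free() can proceed under migration) and 0 for the fixed order (the handle stays pinned until unpin_tag()).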