Re: [PATCH 2/3] mm/zsmalloc: remove migrate_write_lock_nested()

On 2024/2/20 12:53, Sergey Senozhatsky wrote:
> On (24/02/20 12:51), Chengming Zhou wrote:
>> On 2024/2/20 12:48, Sergey Senozhatsky wrote:
>>> On (24/02/19 13:33), Chengming Zhou wrote:
>>>>  static void migrate_write_unlock(struct zspage *zspage)
>>>>  {
>>>>  	write_unlock(&zspage->lock);
>>>> @@ -2003,19 +1997,17 @@ static unsigned long __zs_compact(struct zs_pool *pool,
>>>>  			dst_zspage = isolate_dst_zspage(class);
>>>>  			if (!dst_zspage)
>>>>  				break;
>>>> -			migrate_write_lock(dst_zspage);
>>>>  		}
>>>>  
>>>>  		src_zspage = isolate_src_zspage(class);
>>>>  		if (!src_zspage)
>>>>  			break;
>>>>  
>>>> -		migrate_write_lock_nested(src_zspage);
>>>> -
>>>> +		migrate_write_lock(src_zspage);
>>>>  		migrate_zspage(pool, src_zspage, dst_zspage);
>>>> -		fg = putback_zspage(class, src_zspage);
>>>>  		migrate_write_unlock(src_zspage);
>>>>  
>>>> +		fg = putback_zspage(class, src_zspage);
>>>
>>> Hmm. Lockless putback doesn't look right to me. We modify critical
>>> zspage fileds in putback_zspage().
>>
>> Which I think is protected by pool->lock, right? We already held it.
> 
> Not really. We have, for example, the following patterns:
> 
> 	get_zspage_mapping()
> 	spin_lock(&pool->lock)

Right, this pattern is actually not safe, since we can't get a stable
fullness value for the zspage outside pool->lock.

But this pattern is only used in the free_zspage path, so it should be ok.
Actually, we don't use the fullness value returned from get_zspage_mapping()
in the free_zspage() path; we only use the class value to look up the class.

Anyway, this pattern is confusing; I think we should clean it up?

Thanks.



