Re: [PATCH 2/5] mm, slub: fix mismatch between reconstructed freelist depth and cnt

On 2021/10/5 17:57, Vlastimil Babka wrote:
> On 9/16/21 14:39, Miaohe Lin wrote:
>> If an object's reuse is delayed, it will be excluded from the
>> reconstructed freelist, but we forgot to adjust cnt accordingly, so
>> the reconstructed freelist depth and cnt no longer match. This leads
>> to free_debug_processing() complaining about the freelist count, or
>> to an incorrect slub inuse count.
>>
>> Fixes: c3895391df38 ("kasan, slub: fix handling of kasan_slab_free hook")
>> Signed-off-by: Miaohe Lin <linmiaohe@xxxxxxxxxx>
> 
> Reviewed-by: Vlastimil Babka <vbabka@xxxxxxx>
> 
> I was worried that taking a pointer to the cnt parameter, when it's
> hardcoded to 1, might defeat inlining. Luckily it looks like it
> doesn't; the function is just renamed:
> 

Many thanks for your review and thoughtful consideration! :)

> ./scripts/bloat-o-meter mm/slub.o slub.o.after
> add/remove: 1/1 grow/shrink: 0/0 up/down: 292/-292 (0)
> Function                                     old     new   delta
> slab_free_freelist_hook.constprop              -     292    +292
> slab_free_freelist_hook                      292       -    -292
> 
>> ---
>>  mm/slub.c | 11 +++++++++--
>>  1 file changed, 9 insertions(+), 2 deletions(-)
>>
>> diff --git a/mm/slub.c b/mm/slub.c
>> index ed160b6c54f8..a56a6423d4e8 100644
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -1701,7 +1701,8 @@ static __always_inline bool slab_free_hook(struct kmem_cache *s,
>>  }
>>  
>>  static inline bool slab_free_freelist_hook(struct kmem_cache *s,
>> -					   void **head, void **tail)
>> +					   void **head, void **tail,
>> +					   int *cnt)
>>  {
>>  
>>  	void *object;
>> @@ -1728,6 +1729,12 @@ static inline bool slab_free_freelist_hook(struct kmem_cache *s,
>>  			*head = object;
>>  			if (!*tail)
>>  				*tail = object;
>> +		} else {
>> +			/*
>> +			 * Adjust the reconstructed freelist depth
>> +			 * accordingly if object's reuse is delayed.
>> +			 */
>> +			--(*cnt);
>>  		}
>>  	} while (object != old_tail);
>>  
>> @@ -3480,7 +3487,7 @@ static __always_inline void slab_free(struct kmem_cache *s, struct page *page,
>>  	 * With KASAN enabled slab_free_freelist_hook modifies the freelist
>>  	 * to remove objects, whose reuse must be delayed.
>>  	 */
>> -	if (slab_free_freelist_hook(s, &head, &tail))
>> +	if (slab_free_freelist_hook(s, &head, &tail, &cnt))
>>  		do_slab_free(s, page, head, tail, cnt, addr);
>>  }
>>  
>>
> 