Re: [PATCH 3/5] mm/slub: remove slab_lock() usage for debug operations

On Fri, Aug 12, 2022 at 11:14:24AM +0200, Vlastimil Babka wrote:
> All alloc and free operations on debug caches are now serialized by
> n->list_lock, so we can remove slab_lock() usage in validate_slab()
> and list_slab_objects() as those also happen under n->list_lock.
> 
> Note the usage in list_slab_objects() could happen even on non-debug
> caches, but only during cache shutdown time so there should not be any
> parallel freeing activity anymore. Except for buggy slab users, but in
> that case the slab_lock() would not help against the common cmpxchg
> based fast paths (in non-debug caches) anyway.
> 
> Also adjust documentation comments accordingly.
> 
> Suggested-by: Hyeonggon Yoo <42.hyeyoo@xxxxxxxxx>
> Signed-off-by: Vlastimil Babka <vbabka@xxxxxxx>
> ---
>  mm/slub.c | 19 ++++++++-----------
>  1 file changed, 8 insertions(+), 11 deletions(-)
> 
> diff --git a/mm/slub.c b/mm/slub.c
> index fa7efd2d98be..32b79bc3ae6d 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -50,7 +50,7 @@
>   *   1. slab_mutex (Global Mutex)
>   *   2. node->list_lock (Spinlock)
>   *   3. kmem_cache->cpu_slab->lock (Local lock)
> - *   4. slab_lock(slab) (Only on some arches or for debugging)
> + *   4. slab_lock(slab) (Only on some arches)
>   *   5. object_map_lock (Only for debugging)
>   *
>   *   slab_mutex
> @@ -64,8 +64,9 @@
>   *   The slab_lock is a wrapper around the page lock, thus it is a bit
>   *   spinlock.
>   *
> - *   The slab_lock is only used for debugging and on arches that do not
> - *   have the ability to do a cmpxchg_double. It only protects:
> + *   The slab_lock is only used on arches that do not have the ability
> + *   to do a cmpxchg_double. It only protects:
> + *
>   *	A. slab->freelist	-> List of free objects in a slab
>   *	B. slab->inuse		-> Number of objects in use
>   *	C. slab->objects	-> Number of objects in slab
> @@ -94,6 +95,9 @@
>   *   allocating a long series of objects that fill up slabs does not require
>   *   the list lock.
>   *
> + *   For debug caches, all allocations are forced to go through a list_lock
> + *   protected region to serialize against concurrent validation.
> + *
>   *   cpu_slab->lock local lock
>   *
>   *   This locks protect slowpath manipulation of all kmem_cache_cpu fields
> @@ -4369,7 +4373,6 @@ static void list_slab_objects(struct kmem_cache *s, struct slab *slab,
>  	void *p;
>  
>  	slab_err(s, slab, text, s->name);
> -	slab_lock(slab, &flags);
>  
>  	map = get_map(s, slab);
>  	for_each_object(p, s, addr, slab->objects) {
> @@ -4380,7 +4383,6 @@ static void list_slab_objects(struct kmem_cache *s, struct slab *slab,
>  		}
>  	}
>  	put_map(map);
> -	slab_unlock(slab, &flags);
>  #endif
>  }
>  
> @@ -5107,12 +5109,9 @@ static void validate_slab(struct kmem_cache *s, struct slab *slab,
>  {
>  	void *p;
>  	void *addr = slab_address(slab);
> -	unsigned long flags;
> -
> -	slab_lock(slab, &flags);
>  
>  	if (!check_slab(s, slab) || !on_freelist(s, slab, NULL))
> -		goto unlock;
> +		return;
>  
>  	/* Now we know that a valid freelist exists */
>  	__fill_map(obj_map, s, slab);
> @@ -5123,8 +5122,6 @@ static void validate_slab(struct kmem_cache *s, struct slab *slab,
>  		if (!check_object(s, slab, p, val))
>  			break;
>  	}
> -unlock:
> -	slab_unlock(slab, &flags);
>  }
>  
>  static int validate_slab_node(struct kmem_cache *s,
> -- 
> 2.37.1
> 

Looks good to me.
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@xxxxxxxxx>
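
For anyone reading this in the archives: the serialization argument above
boils down to "the validator and the debug alloc/free slow paths take the
same n->list_lock", so the extra per-slab lock buys nothing on these paths.
A minimal userspace sketch of that pattern, purely illustrative (the struct,
field names and the pthread spinlock are stand-ins, nothing here is SLUB
code):

/*
 * Hypothetical sketch: both the "debug free" path and the "validate"
 * path take the same per-node list_lock, so the validator needs no
 * additional per-slab lock while it inspects the node's state.
 */
#include <pthread.h>
#include <stdio.h>

struct node {
	pthread_spinlock_t list_lock;	/* stands in for n->list_lock */
	int nr_partial_objects;		/* state both paths touch */
};

/* debug-cache free slow path: mutate node state under list_lock */
static void *debug_free_thread(void *arg)
{
	struct node *n = arg;

	for (int i = 0; i < 100000; i++) {
		pthread_spin_lock(&n->list_lock);
		n->nr_partial_objects++;
		pthread_spin_unlock(&n->list_lock);
	}
	return NULL;
}

/* validation: reads a consistent snapshot under the same lock */
static void validate_node(struct node *n)
{
	pthread_spin_lock(&n->list_lock);
	printf("partial objects seen: %d\n", n->nr_partial_objects);
	pthread_spin_unlock(&n->list_lock);
}

int main(void)
{
	struct node n = { .nr_partial_objects = 0 };
	pthread_t t;

	pthread_spin_init(&n.list_lock, PTHREAD_PROCESS_PRIVATE);
	pthread_create(&t, NULL, debug_free_thread, &n);
	validate_node(&n);
	pthread_join(&t, NULL);
	validate_node(&n);
	pthread_spin_destroy(&n.list_lock);
	return 0;
}

(Build with e.g. gcc -pthread. In SLUB itself the lock is of course the
per-node spinlock taken with irqs disabled; the sketch only shows why one
shared lock makes the per-slab lock redundant for validation.)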

-- 
Thanks,
Hyeonggon



