Re: [MM PATCH V4.1 5/6] slub: support for bulk free with SLUB freelists

On Thu, 1 Oct 2015 15:10:15 -0700
Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx> wrote:

> On Wed, 30 Sep 2015 13:44:19 +0200 Jesper Dangaard Brouer <brouer@xxxxxxxxxx> wrote:
> 
> > Make it possible to free a freelist with several objects by adjusting
> > the API of slab_free() and __slab_free() to take a head, a tail and an
> > objects counter (cnt).
> > 
> > A NULL tail indicates a single-object free of the head object.  This
> > allows compiler constant propagation via inlining in slab_free() and
> > slab_free_freelist_hook(), avoiding any added overhead in the
> > single-object free case.
> > 
> > This allows a freelist with several objects (all within the same
> > slab-page) to be freed using a single locked cmpxchg_double in
> > __slab_free() and with an unlocked cmpxchg_double in slab_free().
> > 
> > Object debugging on the free path is also extended to handle these
> > freelists.  When CONFIG_SLUB_DEBUG is enabled it will also detect if
> > objects don't belong to the same slab-page.
> > 
> > These changes are needed for the next patch to bulk free the detached
> > freelists it introduces and constructs.
> > 
> > Micro benchmarking showed no performance reduction due to this change,
> > when debugging is turned off (compiled with CONFIG_SLUB_DEBUG, but
> > with runtime debugging disabled).
> > 
> 
> checkpatch says
> 
> WARNING: Avoid crashing the kernel - try using WARN_ON & recovery code rather than BUG() or BUG_ON()
> #205: FILE: mm/slub.c:2888:
> +       BUG_ON(!size);
> 
> 
> Linus will get mad at you if he finds out, and we wouldn't want that.
> 
> --- a/mm/slub.c~slub-optimize-bulk-slowpath-free-by-detached-freelist-fix
> +++ a/mm/slub.c
> @@ -2885,7 +2885,8 @@ static int build_detached_freelist(struc
>  /* Note that interrupts must be enabled when calling this function. */
>  void kmem_cache_free_bulk(struct kmem_cache *s, size_t size, void **p)
>  {
> -	BUG_ON(!size);
> +	if (WARN_ON(!size))
> +		return;
>  
>  	do {
>  		struct detached_freelist df;
> _

My problem with this change is that WARN_ON generates (slightly) larger
code, which is critical for instruction-cache usage...

 [net-next-mm]$ ./scripts/bloat-o-meter vmlinux-with_BUG_ON vmlinux-with_WARN_ON 
 add/remove: 0/0 grow/shrink: 1/0 up/down: 17/0 (17)
 function                                     old     new   delta
 kmem_cache_free_bulk                         438     455     +17

My IP-forwarding benchmark is actually a very challenging use-case,
because the code path a packet has to travel is larger than the
instruction-cache of the CPU.

Thus, I need to introduce new code like this patch while at the same
time reducing instruction-cache misses/usage.  In this case we solve
the problem by not calling kmem_cache_free_bulk() too often.  Thus,
the +17 bytes will hopefully not matter too much... but on the other
hand we sort-of know that calling kmem_cache_free_bulk() will cause
icache misses.

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  Author of http://www.iptv-analyzer.org
  LinkedIn: http://www.linkedin.com/in/brouer
