This series contains three aspects:

 1. Cleanup and code sharing between the SLUB and SLAB allocators
 2. An accelerated bulk API for the SLAB allocator
 3. A new API, kfree_bulk(), for both allocators

Reviewers: please pay special attention to the changed order of the
debug calls in the SLAB allocator, as they were reordered to match the
SLUB allocator. A short usage sketch of the bulk API follows the
diffstat below.

The patchset is based on top of Linus' tree at commit ee9a7d2cb0cf1.

---

Jesper Dangaard Brouer (10):
      slub: cleanup code for kmem cgroup support to kmem_cache_free_bulk
      mm/slab: move SLUB alloc hooks to common mm/slab.h
      mm: fault-inject take over bootstrap kmem_cache check
      slab: use slab_pre_alloc_hook in SLAB allocator shared with SLUB
      mm: kmemcheck skip object if slab allocation failed
      slab: use slab_post_alloc_hook in SLAB allocator shared with SLUB
      slab: implement bulk alloc in SLAB allocator
      slab: avoid running debug SLAB code with IRQs disabled for alloc_bulk
      slab: implement bulk free in SLAB allocator
      mm: new API kfree_bulk() for SLAB+SLUB allocators

 include/linux/fault-inject.h |   5 +-
 include/linux/slab.h         |   8 +++
 mm/failslab.c                |  11 +++-
 mm/kmemcheck.c               |   3 +
 mm/slab.c                    | 121 +++++++++++++++++++++++++++---------------
 mm/slab.h                    |  62 ++++++++++++++++++++++
 mm/slab_common.c             |   8 ++-
 mm/slub.c                    |  92 +++++++++----------------------
 8 files changed, 194 insertions(+), 116 deletions(-)
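For reviewers unfamiliar with the bulk API, here is a minimal usage
sketch. It is illustrative only: the bulk_demo() function, the object
array sizes, and the kmem_cache argument are made up for this example
and are not part of the patchset.

#include <linux/kernel.h>
#include <linux/slab.h>

/* Hypothetical helper demonstrating the bulk API; not from the series. */
static int bulk_demo(struct kmem_cache *cache)
{
	void *objs[16];
	void *bufs[8];
	size_t i;
	int nr;

	/* Allocate up to 16 objects from 'cache' in one call; returns
	 * the number actually allocated, 0 on failure. */
	nr = kmem_cache_alloc_bulk(cache, GFP_KERNEL, ARRAY_SIZE(objs), objs);
	if (!nr)
		return -ENOMEM;

	/* ... use the objects ... */

	/* Free the whole array back to the cache in one call. */
	kmem_cache_free_bulk(cache, nr, objs);

	/* The new kfree_bulk() frees an array of kmalloc'ed objects
	 * without requiring a kmem_cache pointer. */
	for (i = 0; i < ARRAY_SIZE(bufs); i++) {
		bufs[i] = kmalloc(64, GFP_KERNEL);
		if (!bufs[i])
			break;
	}
	kfree_bulk(i, bufs);

	return 0;
}

The point of the API is to amortize the per-call overhead (and, inside
the allocators, the locking/IRQ work) over many objects, which is what
the "accelerated bulk" patches for SLAB implement.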