On Sun, 21 Nov 2021, Rustam Kovhaev wrote:

> Let's prepend both kmalloc() and kmem_cache_alloc() allocations with the
> size header.
> It simplifies the slab API and guarantees that both kmem_cache_alloc()
> and kmalloc() memory could be freed by kfree().
>
> meminfo right after the system boot, x86-64 on xfs, without the patch:
> Slab:   34700 kB
>
> the same, with the patch:
> Slab:   35752 kB

> +#define SLOB_HDR_SIZE max(ARCH_KMALLOC_MINALIGN, ARCH_SLAB_MINALIGN)

Ok, that is up to 128 bytes on some architectures; mostly 32 or 64 bytes.

> @@ -307,6 +303,7 @@ static void *slob_alloc(size_t size, gfp_t gfp, int align, int node,
>  	unsigned long flags;
>  	bool _unused;
>
> +	size += SLOB_HDR_SIZE;

And every object now carries this overhead? Up to 128 extra bytes per
object in extreme cases?

> -	if (size < PAGE_SIZE - minalign) {
> -		int align = minalign;
> +	if (size < PAGE_SIZE - SLOB_HDR_SIZE) {
> +		int align = SLOB_HDR_SIZE;

And the object is also aligned to 128-byte boundaries on some
architectures. So a 4-byte object occupies 256 bytes in SLOB? SLOB will
no longer be a low-memory-overhead allocator then.
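
The worst-case arithmetic behind that 256-byte figure can be sanity-checked
with a minimal user-space sketch. Assumptions here: SLOB_HDR_SIZE resolves
to 128 (i.e. an architecture where ARCH_KMALLOC_MINALIGN is 128), and the
ALIGN() macro below mirrors the kernel's power-of-two rounding; this is an
illustration of the objection, not the patched allocator itself.

#include <stdio.h>

/* Mirrors the kernel's ALIGN() for power-of-two alignment. */
#define ALIGN(x, a)	(((x) + (a) - 1) & ~((size_t)(a) - 1))

/* Assumed worst case: max(ARCH_KMALLOC_MINALIGN, ARCH_SLAB_MINALIGN) == 128. */
#define SLOB_HDR_SIZE	128

int main(void)
{
	size_t request = 4;				/* kmalloc(4) */
	size_t padded = request + SLOB_HDR_SIZE;	/* size += SLOB_HDR_SIZE */
	size_t total = ALIGN(padded, SLOB_HDR_SIZE);	/* align = SLOB_HDR_SIZE */

	printf("4-byte object occupies %zu bytes\n", total);	/* prints 256 */
	return 0;
}

Header plus alignment padding inflates the 4-byte request 64-fold in this
configuration, which is the crux of the objection above.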