On 10/18/21 05:38, Rustam Kovhaev wrote:
> Let's prepend all allocations of (PAGE_SIZE - align_offset) and less
> with the size header. This way kmem_cache_alloc() memory can be freed
> with kfree() and the other way around, as long as they are less than
> (PAGE_SIZE - align_offset).

This size limitation seems like an unnecessary gotcha. Couldn't we make
these large allocations in slob_alloc_node() (the ones that use
slob_new_pages() directly) similar enough to large kmalloc() ones, so
that kfree() can recognize them and free them properly? AFAICS it might
mean just adding __GFP_COMP to make sure there's a compound order
stored, as these allocations already don't seem to set PageSlab.

> The main reason for this change is to simplify SLOB a little bit, make
> it a bit easier to debug whenever something goes wrong.

I would say the main reason is to simplify the slab API and to
guarantee that both kmem_cache_alloc() and kmalloc() objects can be
freed by kfree(). We should also update the comments at the top of
slob.c to reflect the change, as well as
Documentation/core-api/memory-allocation.rst (the last paragraph).

> meminfo right after the system boot, without the patch:
> Slab: 35500 kB
>
> the same, with the patch:
> Slab: 36396 kB

A 2.5% increase, hopefully acceptable.

Thanks!
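For reference, the size-header scheme the quoted changelog describes can be sketched in plain userspace C. This is only an illustration of the idea (prepend the object's size so the free path can recover it without knowing where the object came from); the names hdr_alloc/hdr_size/hdr_free are made up and are not the actual slob.c code:

```c
#include <assert.h>
#include <stdlib.h>

/* Illustrative sketch, not kernel code: each allocation is prepended
 * with a size header, so the free path can find the size by looking
 * just before the returned pointer. */

static void *hdr_alloc(size_t size)
{
	/* reserve room for the size header in front of the object */
	size_t *m = malloc(sizeof(size_t) + size);

	if (!m)
		return NULL;
	*m = size;		/* store the size header */
	return m + 1;		/* hand out the memory after the header */
}

static size_t hdr_size(const void *p)
{
	/* read the size header stored just before the object */
	return ((const size_t *)p)[-1];
}

static void hdr_free(void *p)
{
	if (p)
		free((size_t *)p - 1);	/* step back over the header */
}
```

In the patch the same trick is what lets kfree() handle kmem_cache_alloc() memory: the size is recoverable from the allocation itself, at the cost of the per-object header overhead visible in the Slab numbers above.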