On Tue, 20 Mar 2018, Christopher Lameter wrote:

> On Tue, 20 Mar 2018, Matthew Wilcox wrote:
>
> > On Tue, Mar 20, 2018 at 01:25:09PM -0400, Mikulas Patocka wrote:
> > > The reason why we need this is that we are going to merge code that
> > > does block device deduplication (it was developed separately and
> > > sold as a commercial product), and the code uses block sizes that
> > > are not a power of two (block sizes 192K, 448K, 640K, 832K are used
> > > in the wild). The slab allocator rounds up the allocation to the
> > > nearest power of two, but that wastes a lot of memory. Performance
> > > of the solution depends on efficient memory usage, so we should
> > > minimize waste as much as possible.
> >
> > The SLUB allocator also falls back to using the page (buddy) allocator
> > for allocations above 8kB, so this patch is going to have no effect on
> > slub. You'd be better off using alloc_pages_exact() for this kind of
> > size, or managing your own pool of pages by using something like five
> > 192k blocks in a 1MB allocation.
>
> The fallback is only effective for kmalloc caches. Manually created
> caches do not follow this rule.

Yes - the dm-bufio layer uses manually created caches.

> Note that you can already control the page orders for allocation and
> the objects per slab using
>
> slub_min_order
> slub_max_order
> slub_min_objects
>
> This is documented in linux/Documentation/vm/slub.txt
>
> Maybe do the same thing for SLAB?

Yes, but I need to change it for a specific cache, not for all caches.

When the order is greater than 3 (PAGE_ALLOC_COSTLY_ORDER), allocations
become unreliable, so it is a bad idea to increase slub_max_order
system-wide.

Another problem with slub_max_order is that it would pad all caches up
to slub_max_order, even those that already have a power-of-two size (in
that case, the padding is counterproductive).

BTW, the function "order_store" in mm/slub.c modifies the kmem_cache
structure without taking any locks - is it a bug?

Mikulas
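
A minimal sketch of the trade-off Matthew describes, using one of the
block sizes above (the dedup_buf_* names are illustrative, not from any
real driver). A large kmalloc() would serve a 640K request from the 1M
power-of-two bucket, wasting 384K (37.5%) per buffer; alloc_pages_exact()
instead frees the unused tail of the buddy allocation back right away:

#include <linux/gfp.h>

/* Illustrative only: one 640K deduplication buffer.  640K is 160
 * pages, so alloc_pages_exact() takes a 256-page (order-8, 1M) buddy
 * block and immediately returns the trailing 96 pages (384K) to the
 * page allocator; only the 160 pages actually needed stay allocated. */
static void *dedup_buf_alloc(void)
{
	return alloc_pages_exact(640 << 10, GFP_KERNEL);
}

static void dedup_buf_free(void *buf)
{
	free_pages_exact(buf, 640 << 10);
}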
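
And a sketch of the kind of manually created cache at issue (the cache
name is illustrative; dm-bufio's actual setup differs). Such caches are
sized explicitly at creation time, which is why the large-kmalloc
fallback to the page allocator never applies to them:

#include <linux/slab.h>

/* Illustrative only: a manually created cache of 192K objects,
 * similar in spirit to what dm-bufio sets up.  Objects come from
 * slab pages rather than direct page-allocator calls, so the slab
 * order chosen for the cache determines how much of each slab is
 * wasted on padding. */
static struct kmem_cache *dedup_cache;

static int dedup_cache_init(void)
{
	dedup_cache = kmem_cache_create("dedup-192k", 192 << 10,
					0, 0, NULL);
	return dedup_cache ? 0 : -ENOMEM;
}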
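
For reference, the slub_min_order / slub_max_order / slub_min_objects
knobs Christopher mentions are kernel command-line parameters (e.g.
booting with slub_max_order=5), and SLUB also exposes a writable
per-cache file, /sys/kernel/slab/<cache>/order, whose sysfs store
handler is the order_store function questioned above - that file is the
runtime path where the locking question arises.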