On 03/21/2018 07:36 PM, Mikulas Patocka wrote:
>
>
> On Wed, 21 Mar 2018, Christopher Lameter wrote:
>
>> On Wed, 21 Mar 2018, Mikulas Patocka wrote:
>>
>>>> You should not be using the slab allocators for these. Allocate higher
>>>> order pages or numbers of consecutive smaller pages from the page
>>>> allocator. The slab allocators are written for objects smaller than
>>>> page size.
>>>
>>> So, do you argue that I need to write my own slab cache functionality
>>> instead of using the existing slab code?
>>
>> Just use the existing page allocator calls to allocate and free the
>> memory you need.
>>
>>> I can do it - but duplicating code is a bad thing.
>>
>> There is no need to duplicate anything. There is lots of infrastructure
>> already in the kernel. You just need to use the right allocation /
>> freeing calls.
>
> So, what would you recommend for allocating 640KB objects while
> minimizing wasted space?
>
> * alloc_pages - rounds up to the next power of two
> * kmalloc - rounds up to the next power of two
> * alloc_pages_exact - O(n*log n) complexity; and causes memory
>   fragmentation if used excessively
> * vmalloc - horrible performance (modifies page tables and that causes
>   synchronization across all CPUs)
>
> anything else?
>
> A slab cache with a large order seems like the best choice for this.

Sorry for being late, I just read this thread and I tend to agree with
Mikulas that this is a good use case for SL*B.

If we extend the use case from "space-efficient allocator of objects
smaller than page size" to "space-efficient allocator of objects that are
not a power-of-two number of pages", then IMHO the implementation turns
out to be almost the same. All the other variants listed above would lead
to wasted memory or fragmentation.

Would this perhaps be a good LSF/MM discussion topic? Mikulas, are you
attending, or is anyone else there who can vouch for your use case?
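
For concreteness, the usage pattern being argued for would look roughly
like the sketch below. The cache name, the 640 KB size macro and the
helper functions are purely illustrative, and whether kmem_cache_create()
actually accepts an object this large depends on the maximum slab order
the allocator supports, which is exactly what is under discussion here:

    #include <linux/slab.h>

    /* Hypothetical 640 KB object, i.e. not a power-of-two number of pages. */
    #define LARGE_BUF_SIZE	(640 * 1024)

    static struct kmem_cache *large_buf_cache;

    static int large_buf_cache_init(void)
    {
    	/*
    	 * A dedicated cache sized for the odd-sized object, so the
    	 * allocator can pack several objects into one high-order slab
    	 * instead of rounding every allocation up to 1 MB.
    	 */
    	large_buf_cache = kmem_cache_create("large_buf_cache",
    					    LARGE_BUF_SIZE, 0, 0, NULL);
    	if (!large_buf_cache)
    		return -ENOMEM;
    	return 0;
    }

    static void *large_buf_alloc(void)
    {
    	return kmem_cache_alloc(large_buf_cache, GFP_KERNEL);
    }

    static void large_buf_free(void *buf)
    {
    	kmem_cache_free(large_buf_cache, buf);
    }

The point of the comparison is that a power-of-two allocator would round
each 640 KB request up to 1 MB, wasting 384 KB per object, while packing
objects into a shared high-order slab can keep the waste much smaller.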