On Thu, May 18, 2023 at 12:15 PM Kent Overstreet
<kent.overstreet@xxxxxxxxx> wrote:
>
> On Thu, May 18, 2023 at 12:03:03PM -0700, Song Liu wrote:
> > On Thu, May 18, 2023 at 11:47 AM Song Liu <song@xxxxxxxxxx> wrote:
> > >
> > > On Thu, May 18, 2023 at 10:24 AM Kent Overstreet
> > > <kent.overstreet@xxxxxxxxx> wrote:
> > > >
> > > > On Thu, May 18, 2023 at 10:00:39AM -0700, Song Liu wrote:
> > > > > On Thu, May 18, 2023 at 9:48 AM Kent Overstreet
> > > > > <kent.overstreet@xxxxxxxxx> wrote:
> > > > > >
> > > > > > On Thu, May 18, 2023 at 09:33:20AM -0700, Song Liu wrote:
> > > > > > > I am working on patches based on the discussion in [1]. I am planning to
> > > > > > > send v1 for review in a week or so.
> > > > > >
> > > > > > Hey Song, I was reviewing that thread too,
> > > > > >
> > > > > > Are you taking a different approach based on Thomas's feedback? I think
> > > > > > he had some fair points in that thread.
> > > > >
> > > > > Yes, the API is based on Thomas's suggestion, like 90% from the discussions.
> > > > > >
> > > > > > My own feeling is that the buddy allocator is our tool for allocating
> > > > > > larger variable sized physically contiguous allocations, so I'd like to
> > > > > > see something based on that - I think we could do a hybrid buddy/slab
> > > > > > allocator approach, like we have for regular memory allocations.
> > > > >
> > > > > I am planning to implement the allocator based on this (reuse
> > > > > vmap_area logic):
> > > >
> > > > Ah, you're still doing vmap_area approach.
> > > >
> > > > Mike's approach looks like it'll be _much_ lighter weight and higher
> > > > performance, to me. vmalloc is known to be slow compared to the buddy
> > > > allocator, and with Mike's approach we're only modifying mappings once
> > > > per 2 MB chunk.
> > > >
> > > > I don't see anything in your code for sub-page sized allocations too, so
> > > > perhaps I should keep going with my slab allocator.
> > >
> > > The vmap_area approach handles sub-page allocations. In 5/5 of set [2],
> > > we showed that multiple BPF programs share the same page with some
> > > kernel text (_etext).
> > >
> > > > Could you share your thoughts on your approach vs. Mike's? I'm newer to
> > > > this area of the code than you two so maybe there's an angle I've missed
> > > > :)
> > >
> > > AFAICT, tree based solution (vmap_area) is more efficient than bitmap
> > > based solution.
>
> Tree based requires quite a bit of overhead for the rbtree pointers, and
> additional vmap_area structs.
>
> With a buddy allocator based approach, there's no additional state that
> needs to be allocated, since it all fits in struct page.
>
> > > First, for 2MiB page with 64B chunk size, we need a bitmap of
> > >   2MiB / 64B = 32k bit = 4k bytes
> > > While the tree based solution can adapt to the number of allocations within
> > > this 2MiB page. Also, searching a free range within 4kB of bitmap may
> > > actually be slower than searching in the tree.
> > >
> > > Second, bitmap based solution cannot handle > 2MiB allocation cleanly,
> > > while tree based solution can. For example, if a big driver uses 3MiB, the
> > > tree based allocator can allocate 4MiB for it, and use the rest 1MiB for
> > > smaller allocations.
>
> We're not talking about a bitmap based solution for >= PAGE_SIZE
> allocations, the alternative is a buddy allocator - so no searching,
> just per power-of-two freelists.
>
> > Missed one:
> >
> > Third, bitmap based solution requires a "size" parameter in free().
> > It is an overhead for the user. Tree based solution doesn't have this issue.
>
> No, we can recover the size of the allocation via compound_order() -
> hasn't historically been done for alloc_pages() allocations to avoid
> setting up the state in each page for compound head/tail, but it perhaps
> should be (and is with folios, which we've generally been switching to).

If we use compound_order(), we will round up to a power of 2 for all
allocations. Does this mean we will use 4MiB for a 2.1MiB allocation?

Thanks,
Song
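
As a footnote to the sizing discussion above, the 2MiB / 64B bitmap figure
works out as below. This is a standalone user-space sketch of the arithmetic
only, not kernel code; the 2 MiB region and 64 B chunk size are simply the
numbers quoted in the thread.

/*
 * Back-of-the-envelope check of the bitmap sizing argument: tracking a
 * 2 MiB ROX page at 64-byte granularity needs one bit per 64-byte chunk.
 */
#include <stdio.h>

int main(void)
{
	unsigned long region = 2UL << 20;	/* 2 MiB page */
	unsigned long chunk = 64;		/* allocation granularity */
	unsigned long bits = region / chunk;	/* one bit per chunk */

	printf("%lu bits -> %lu bytes of bitmap per 2 MiB page\n",
	       bits, bits / 8);
	/* prints: 32768 bits -> 4096 bytes of bitmap per 2 MiB page */
	return 0;
}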
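
The power-of-two rounding behind the compound_order() question can be
sketched the same way. size_to_order() here is a hypothetical stand-in for
the kernel's order rounding (it is not compound_order() or get_order()
itself), and 4 KiB base pages are assumed.

/*
 * Illustrative user-space arithmetic only -- not kernel code.  If free()
 * recovers the size from a page order, the size is always a power-of-two
 * number of pages, so a ~2.1 MiB request ends up accounted as 4 MiB.
 */
#include <stdio.h>

#define PAGE_SIZE 4096UL	/* assumed 4 KiB base pages */

/* hypothetical helper: smallest order with (PAGE_SIZE << order) >= size */
static unsigned int size_to_order(unsigned long size)
{
	unsigned int order = 0;

	while ((PAGE_SIZE << order) < size)
		order++;
	return order;
}

int main(void)
{
	unsigned long size = 2202010;	/* roughly 2.1 MiB */
	unsigned int order = size_to_order(size);

	printf("%lu bytes -> order %u -> %lu bytes\n",
	       size, order, PAGE_SIZE << order);
	/* prints: 2202010 bytes -> order 10 -> 4194304 bytes, i.e. 4 MiB */
	return 0;
}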