Hi Kent,

On Thu, May 18, 2023 at 01:23:56PM -0400, Kent Overstreet wrote:
> On Thu, May 18, 2023 at 10:00:39AM -0700, Song Liu wrote:
> > On Thu, May 18, 2023 at 9:48 AM Kent Overstreet
> > <kent.overstreet@xxxxxxxxx> wrote:
> > >
> > > On Thu, May 18, 2023 at 09:33:20AM -0700, Song Liu wrote:
> > > > I am working on patches based on the discussion in [1]. I am planning to
> > > > send v1 for review in a week or so.
> > >
> > > Hey Song, I was reviewing that thread too,
> > >
> > > Are you taking a different approach based on Thomas's feedback? I think
> > > he had some fair points in that thread.
> >
> > Yes, the API is based on Thomas's suggestion, like 90% from the discussions.
> >
> > >
> > > My own feeling is that the buddy allocator is our tool for allocating
> > > larger variable sized physically contiguous allocations, so I'd like to
> > > see something based on that - I think we could do a hybrid buddy/slab
> > > allocator approach, like we have for regular memory allocations.
> >
> > I am planning to implement the allocator based on this (reuse
> > vmap_area logic):
>
> Ah, you're still doing vmap_area approach.
>
> Mike's approach looks like it'll be _much_ lighter weight and higher
> performance, to me. vmalloc is known to be slow compared to the buddy
> allocator, and with Mike's approach we're only modifying mappings once
> per 2 MB chunk.
>
> I don't see anything in your code for sub-page sized allocations too, so
> perhaps I should keep going with my slab allocator.

Your allocator implicitly relies on vmalloc because of module_alloc ;-)

What I was thinking is that we can replace the module_alloc() calls in
your allocator with something based on my unmapped_alloc(). If we make
the part that refills the cache also take care of creating the mapping
in the module address space, that should cover everything. (There's a
very rough sketch of what I mean at the end of this mail.)

> Could you share your thoughts on your approach vs. Mike's? I'm newer to
> this area of the code than you two so maybe there's an angle I've missed
> :)

--
Sincerely yours,
Mike.
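
Something along these lines. Completely untested: unmapped_pages_alloc()
and unmapped_pages_free() are meant to be the API from my RFC (modulo
exact signatures), while execmem_cache_refill() and module_map_contig()
are made-up names standing in for your cache refill hook and for
"create the mapping in the module address space":

#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/pgtable.h>

/* one cache chunk == one PMD-sized mapping (2M on x86-64) */
#define CHUNK_ORDER	(PMD_SHIFT - PAGE_SHIFT)

/* from the unmapped_alloc RFC (signatures approximate) */
struct page *unmapped_pages_alloc(gfp_t gfp, int order);
void unmapped_pages_free(struct page *page, int order);

/*
 * Made up for this sketch: reserve a range in
 * [MODULES_VADDR, MODULES_END] and map @nr_pages physically
 * contiguous pages there with @prot.
 */
void *module_map_contig(struct page *page, unsigned int nr_pages,
			pgprot_t prot);

/* refill one chunk of the allocator cache */
static void *execmem_cache_refill(void)
{
	struct page *page;
	void *vaddr;

	/*
	 * Grab the chunk straight from the buddy allocator, already
	 * removed from the direct map, so we never split direct map
	 * pages for it.
	 */
	page = unmapped_pages_alloc(GFP_KERNEL, CHUNK_ORDER);
	if (!page)
		return NULL;

	/*
	 * Map it once, in module space; this is the only page table
	 * update per chunk. Permissions would be flipped to ROX when
	 * a range is actually handed out for code.
	 */
	vaddr = module_map_contig(page, 1 << CHUNK_ORDER, PAGE_KERNEL);
	if (!vaddr) {
		unmapped_pages_free(page, CHUNK_ORDER);
		return NULL;
	}

	return vaddr;
}

With a refill path like that, the rest of your allocator wouldn't have
to care that the backing pages are gone from the direct map; it would
keep carving ranges out of the chunk as it does now, and we'd still
only modify mappings once per 2M chunk.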