Re: [RFC PATCH 1/5] mm: introduce __GFP_UNMAPPED and unmapped_alloc()

On Thu, May 18, 2023 at 11:47 AM Song Liu <song@xxxxxxxxxx> wrote:
>
> On Thu, May 18, 2023 at 10:24 AM Kent Overstreet
> <kent.overstreet@xxxxxxxxx> wrote:
> >
> > On Thu, May 18, 2023 at 10:00:39AM -0700, Song Liu wrote:
> > > On Thu, May 18, 2023 at 9:48 AM Kent Overstreet
> > > <kent.overstreet@xxxxxxxxx> wrote:
> > > >
> > > > On Thu, May 18, 2023 at 09:33:20AM -0700, Song Liu wrote:
> > > > > I am working on patches based on the discussion in [1]. I am planning to
> > > > > send v1 for review in a week or so.
> > > >
> > > > Hey Song, I was reviewing that thread too,
> > > >
> > > > Are you taking a different approach based on Thomas's feedback? I think
> > > > he had some fair points in that thread.
> > >
> > > Yes, the API is based on Thomas's suggestion; maybe 90% of it comes
> > > straight from those discussions.
> > >
> > > >
> > > > My own feeling is that the buddy allocator is our tool for allocating
> > > > larger variable sized physically contiguous allocations, so I'd like to
> > > > see something based on that - I think we could do a hybrid buddy/slab
> > > > allocator approach, like we have for regular memory allocations.
> > >
> > > I am planning to implement the allocator based on this (reusing the
> > > vmap_area logic):
> >
> > Ah, you're still taking the vmap_area approach.
> >
> > Mike's approach looks _much_ lighter weight and higher performance to
> > me. vmalloc is known to be slow compared to the buddy allocator, and
> > with Mike's approach we're only modifying mappings once per 2 MB
> > chunk.
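
Roughly, that idea might look like this (a minimal sketch under my reading
of the series, not Mike's actual code; exec_chunk_alloc() is a made-up
name, and error handling is elided):

	/* Take a 2 MiB chunk from the buddy allocator and change its
	 * permissions once; sub-allocations are then carved out of the
	 * chunk without any further mapping changes. */
	static void *exec_chunk_alloc(void)
	{
		struct page *page = alloc_pages(GFP_KERNEL, get_order(SZ_2M));

		if (!page)
			return NULL;

		/* one permission update covers all 512 pages in the chunk */
		set_memory_rox((unsigned long)page_address(page),
			       SZ_2M >> PAGE_SHIFT);

		return page_address(page);
	}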
> >
> > I don't see anything in your code for sub-page sized allocations either,
> > so perhaps I should keep going with my slab allocator.
>
> The vmap_area approach does handle sub-page allocations. In patch 5/5 of
> the set [2], we showed that multiple BPF programs can share the same page
> with some kernel text (_etext).
>
> > Could you share your thoughts on your approach vs. Mike's? I'm newer to
> > this area of the code than you two so maybe there's an angle I've missed
> > :)
>
> AFAICT, a tree-based solution (vmap_area) is more efficient than a
> bitmap-based one.
>
> First, for a 2 MiB page with a 64 B chunk size, we need a bitmap of
>      2 MiB / 64 B = 32768 bits = 4096 bytes,
> while the tree-based solution adapts to the number of allocations within
> this 2 MiB page. Also, searching for a free range in a 4 KiB bitmap may
> actually be slower than searching the tree.
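
A back-of-the-envelope sketch of that bitmap (hypothetical fragment;
chunk_map and nr_chunks are made-up names):

	#define CHUNK_SHIFT	6			/* 64 B chunks */
	#define NR_CHUNKS	(SZ_2M >> CHUNK_SHIFT)	/* 32768 chunks */

	/* One bit per chunk: 32768 bits = 4096 bytes of bitmap per
	 * 2 MiB page, paid even if the page holds one big allocation. */
	static DECLARE_BITMAP(chunk_map, NR_CHUNKS);

	/* Finding a free range is a linear scan over those 4 KiB,
	 * while an rbtree over free ranges (as in vmap_area) finds a
	 * fit in O(log n) in the number of live ranges. */
	start = bitmap_find_next_zero_area(chunk_map, NR_CHUNKS, 0,
					   nr_chunks, 0);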
>
> Second, a bitmap-based solution cannot handle > 2 MiB allocations cleanly,
> while a tree-based solution can. For example, if a big driver needs 3 MiB,
> the tree-based allocator can allocate 4 MiB for it and use the remaining
> 1 MiB for smaller allocations.

 Missed one:

Third, a bitmap-based solution requires a "size" parameter in free(), which
is extra overhead for the user. A tree-based solution doesn't have this
issue.
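
Concretely, the difference shows up in the free() signature (hypothetical
declarations, not from either series):

	/* bitmap-based: no per-allocation metadata is kept, so the
	 * caller has to remember the size and pass it back */
	void bitmap_exec_free(void *ptr, size_t size);

	/* tree-based: the rbtree node keyed by address already records
	 * the size, so the pointer alone is enough, like vfree() */
	void tree_exec_free(const void *ptr);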

Thanks,
Song

>
> [2] https://lore.kernel.org/linux-mm/20221107223921.3451913-6-song@xxxxxxxxxx/




