Re: Project: Improving the PCP allocator

On Mon, 22 Jan 2024, Matthew Wilcox wrote:

> When we have memdescs, allocating a folio from the buddy is a two step
> process.  First we allocate the struct folio from slab, then we ask the
> buddy allocator for 2^n pages, each of which gets its memdesc set to
> point to this folio.  It'll be similar for other memory descriptors,
> but let's keep it simple and just talk about folios for now.

I need to catch up on memdescs. One of the key issues may be the fragmentation that occurs during the alloc/free of folios of different sizes.
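If I read the description above correctly, the two steps would look roughly like the following userspace sketch. The names (folio_alloc_memdesc() and friends) are invented here, since the memdesc work is not upstream; this is only meant to fix the idea:

#include <stdlib.h>

/* All names invented for illustration; this is not the memdesc API. */
struct folio {
        unsigned int order;
};

struct page {
        struct folio *memdesc;          /* one word back at the folio */
};

static struct page pages[1 << 9];       /* stand-in for the buddy's pages */

static struct folio *folio_alloc_memdesc(unsigned int order)
{
        /* Step 1: the descriptor comes from slab (malloc here). */
        struct folio *folio = malloc(sizeof(*folio));

        if (!folio)
                return NULL;
        folio->order = order;

        /* Step 2: take 2^order pages from the buddy and set each
         * page's memdesc to point at the one folio, that is, one
         * word per page rather than initialising whole tail pages. */
        for (unsigned int i = 0; i < (1u << order); i++)
                pages[i].memdesc = folio;

        return folio;
}

int main(void)
{
        return !folio_alloc_memdesc(9); /* order-9: a PMD-sized THP */
}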

Maybe we could use an approach similar to the one the slab allocator uses to defragment: allocate a larger folio/page, then carve out sub-folios of the requested sizes until the large page is full, and recycle any frees of those components back into that page before going to the next large page.

With that we end up with a per-cpu list of huge pages that the page allocator serves from, similar to the cpu partial lists in SLUB.

Once the huge page is used up, the page allocator needs to move on to a huge page that has already seen a lot of recent frees of smaller fragments. So something like a partial list could also exist in the page allocator, basically sorted by the available space within each huge page.
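To make that concrete, here is a rough userspace sketch of the structures I have in mind. All of the names (hframe, pcp_frame_cache and so on) are made up, the carving is a plain bump pointer, and reuse of freed sub-ranges inside a frame is left out (a sketch of that follows further down):

#include <stdlib.h>

#define HFRAME_SIZE (2u << 20)          /* 2MB huge frame */
#define BASE_PAGE   4096u

struct hframe {
        char *base;                     /* start of the 2MB region */
        size_t offset;                  /* bump pointer for carving */
        size_t free_bytes;              /* credited back by frees */
        struct hframe *next;
};

struct pcp_frame_cache {                /* one per CPU */
        struct hframe *current;         /* frame being carved */
        struct hframe *partial;         /* sorted, most free space first */
};

/*
 * Keep the partial list sorted by reclaimable space, so the allocator
 * always resumes on the frame with the most recent frees.
 */
static void partial_add(struct pcp_frame_cache *pc, struct hframe *f)
{
        struct hframe **p = &pc->partial;

        while (*p && (*p)->free_bytes >= f->free_bytes)
                p = &(*p)->next;
        f->next = *p;
        *p = f;
}

static struct hframe *hframe_new(void)
{
        struct hframe *f = calloc(1, sizeof(*f));

        if (f && !(f->base = aligned_alloc(HFRAME_SIZE, HFRAME_SIZE))) {
                free(f);
                f = NULL;
        }
        return f;
}

/* Carve 2^order base pages out of the current frame. */
static void *pcp_alloc(struct pcp_frame_cache *pc, unsigned int order)
{
        size_t bytes = (size_t)BASE_PAGE << order;
        struct hframe *f = pc->current;

        if (bytes > HFRAME_SIZE)
                return NULL;            /* would go straight to the buddy */

        if (!f || f->offset + bytes > HFRAME_SIZE) {
                if (f)
                        partial_add(pc, f);     /* retire, await frees */
                f = pc->current = hframe_new();
                if (!f)
                        return NULL;
        }
        f->offset += bytes;
        return f->base + f->offset - bytes;
}

/*
 * A free only credits space back in this sketch; a frame whose
 * free_bytes reaches HFRAME_SIZE can go back to the buddy allocator
 * as an intact 2MB block, which is the defragmentation win.
 */
static void pcp_free(struct hframe *f, unsigned int order)
{
        f->free_bytes += (size_t)BASE_PAGE << order;
}

int main(void)
{
        struct pcp_frame_cache pc = { 0 };
        void *a = pcp_alloc(&pc, 2);    /* 16kB, four base pages */
        void *b = pcp_alloc(&pc, 0);    /* 4kB, carved right behind it */

        pcp_free(pc.current, 2);        /* credit the 16kB back */
        return !(a && b);
}

The point of keeping the partial list sorted is that the allocator always resumes on the frame most likely to become reusable, or returnable to the buddy as a whole.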

There is the additional issue of the different sizes being broken out, so it may not be as easy as in the SLUB allocator: different sizes coexist within one huge page.
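For the mixed sizes, one way out might be per-frame free lists bucketed by order, so that a freed 16kB chunk can satisfy a later 16kB request from the same frame before we touch another one. Again only a sketch with invented names; splitting larger free chunks for smaller requests (buddy-style, within the frame) is omitted:

#define MAX_FRAME_ORDER 9               /* up to the 2MB frame itself */

struct free_chunk {
        struct free_chunk *next;
};

struct hframe_lists {
        struct free_chunk *lists[MAX_FRAME_ORDER + 1];
};

/*
 * Freed chunks are at least one base page, so the freed memory can
 * carry the list linkage itself, like SLUB's freelist pointers.
 */
static void frame_recycle(struct hframe_lists *fl, void *p,
                          unsigned int order)
{
        struct free_chunk *c = p;

        c->next = fl->lists[order];
        fl->lists[order] = c;
}

/*
 * Reuse a freed chunk of the exact size before carving new space or
 * moving on to another frame.
 */
static void *frame_reuse(struct hframe_lists *fl, unsigned int order)
{
        struct free_chunk *c = fl->lists[order];

        if (c)
                fl->lists[order] = c->next;
        return c;
}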

Basically this is a move from SLAB-style object management (caching large lists of small objects without regard to locality, which increases fragmentation) to a combination of spatial considerations and lists of large frames. I think this is necessary in order to keep memory as defragmented as possible.

> I think this could be a huge saving.  Consider allocating an order-9 PMD
> sized THP.  Today we initialise compound_head in each of the 511 tail
> pages.  Since a page is 64 bytes, we touch 32kB of memory!  That's 2/3 of
> my CPU's L1 D$, so it's just pushed out a good chunk of my working set.
> And it's all dirty, so it has to get written back.

Right.

> We still need to distinguish between specifically folios (which
> need the folio_prep_large_rmappable() call on allocation and
> folio_undo_large_rmappable() on free) and other compound allocations which
> do not need or want this, but that's touching one/two extra cachelines,
> not 511.
>
> Do we have a volunteer?

Maybe. I will have to think about this, but since I got my hands dirty on the PCP logic years ago, I may qualify.

Need to get my head around the details and see where this could go.
