On Fri, Mar 12, 2021 at 12:46:09PM +0100, Jesper Dangaard Brouer wrote:
> In my page_pool patch I'm bulk allocating 64 pages. I wanted to ask if
> this is too much? (PP_ALLOC_CACHE_REFILL=64).
>
> The mlx5 driver have a while loop for allocation 64 pages, which it
> used in this case, that is why 64 is chosen. If we choose a lower
> bulk number, then the bulk-alloc will just be called more times.

The thing about batching is that smaller batches are often better.
Let's suppose you need to allocate 100 pages for something, and the
page allocator takes up 90% of your latency budget.  Batching just
ten pages at a time is going to reduce the overhead to 9%.  Going to
64 pages reduces the overhead from 9% to 2% -- maybe that's important,
but possibly not.

> The result of the API is to deliver pages as a double-linked list via
> LRU (page->lru member). If you are planning to use llist, then how to
> handle this API change later?
>
> Have you notice that the two users store the struct-page pointers in an
> array? We could have the caller provide the array to store struct-page
> pointers, like we do with kmem_cache_alloc_bulk API.

My preference would be for a pagevec.  That does limit you to 15 pages
per call [1], but I do think that might be enough.  And the overhead of
manipulating a linked list isn't free.

[1] Patches exist to increase this, because it turns out that 15 may
not be enough for all systems!  But even those would limit it to 255 as
an absolute hard cap.