On 3/7/2025 10:15 PM, Toke Høiland-Jørgensen wrote:

...
> You are making this incredibly complicated. You've basically implemented
> a whole new slab allocator for those page_pool_item objects, and you're
> tracking every page handed out by the page pool instead of just the
> ones that are DMA-mapped. None of this is needed.
> I took a stab at implementing the xarray-based tracking first suggested
> by Mina[0]:
I did discuss Mina's suggestion with Ilias below, in case you didn't
notice:
https://lore.kernel.org/all/0ef315df-e8e9-41e8-9ba8-dcb69492c616@xxxxxxxxxx/

Anyway, it is great that you took the effort to actually implement the
idea, so that we have a more concrete comparison here.
> https://git.kernel.org/toke/c/e87e0edf9520
>
> And, well, it's 50 lines of extra code, none of which are in the fast
> path.
I wonder what the overhead of the xarray idea is for the
time_bench_page_pool03_slow() testcase before we discuss whether the
xarray idea is indeed feasible.
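
For reference, my rough understanding of the xarray idea looks
something like the below sketch. The 'dma_mapped' xarray member of
struct page_pool and the helper names are my guesses from a quick
glance at the git code above, not the actual patch:

/* A minimal sketch of the xarray-based tracking; 'dma_mapped' would be
 * a new 'struct xarray' member of struct page_pool, initialized with
 * xa_init_flags(&pool->dma_mapped, XA_FLAGS_ALLOC). Helper names,
 * gfp flags and error handling are simplified guesses.
 */
#include <linux/xarray.h>

static int page_pool_register_dma(struct page_pool *pool,
				  struct page *page)
{
	u32 id;
	int err;

	/* Only DMA-mapped pages get an entry, and the insertion happens
	 * at map time rather than on the recycling fast path.
	 */
	err = xa_alloc(&pool->dma_mapped, &id, page, xa_limit_32b,
		       GFP_KERNEL);
	if (err)
		return err;

	page->pp_dma_index = id;	/* stashed in _pp_mapping_pad */
	return 0;
}

static void page_pool_unregister_dma(struct page_pool *pool,
				     struct page *page)
{
	/* Dropped again on unmap; destroying the pool can then walk the
	 * xarray and unmap whatever pages are still outstanding.
	 */
	xa_erase(&pool->dma_mapped, page->pp_dma_index);
}

If that is roughly the shape of it, the benchmark should mostly be
showing the xa_alloc()/xa_erase() cost on the map/unmap path.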
> Jesper has kindly helped with testing that it works for normal packet
> processing, but I haven't yet verified that it resolves the original
> crash. Will post the patch to the list once I have verified this (help
> welcome!).
An RFC seems like a good way to show and discuss the basic idea.

I only took a glance at the git code above, but reusing _pp_mapping_pad
for pp_dma_index seems like the wrong direction, as mentioned in the
discussion with Ilias above: that field might be used when a page is
mmap'ed to user space, and reusing it in 'struct page' seems to disable
the tcp rx zerocopy feature, see the below commit from Eric:
https://github.com/torvalds/linux/commit/577e4432f3ac810049cb7e6b71f4d96ec7c6e894

Also, I am not sure if a page_pool-owned page can be spliced into the fs
subsystem yet, but if it can, I am not sure how reusing page->mapping
would work when that page is passed to __filemap_add_folio():
https://elixir.bootlin.com/linux/v6.14-rc5/source/mm/filemap.c#L882
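
To make the aliasing concern concrete: as far as I can see in
include/linux/mm_types.h, _pp_mapping_pad overlays 'mapping' in
'struct page', and Eric's commit above skips zerocopy for any page
with a nonzero ->mapping. Roughly (layout and check quoted from
memory and trimmed, so double-check against the tree):

/* The relevant overlay in 'struct page': _pp_mapping_pad sits at the
 * same offset as 'mapping', so a nonzero pp_dma_index stored there
 * makes page->mapping nonzero too.
 */
struct page {
	unsigned long flags;
	union {
		struct {	/* Page cache and anonymous pages */
			struct list_head lru;
			struct address_space *mapping;
			pgoff_t index;
			unsigned long private;
		};
		struct {	/* page_pool used by netstack */
			unsigned long pp_magic;
			struct page_pool *pp;
			unsigned long _pp_mapping_pad;
			unsigned long dma_addr;
			atomic_long_t pp_ref_count;
		};
		/* ... */
	};
	/* ... */
};

/* And the sanity check Eric's commit adds in net/ipv4/tcp.c: a frag
 * whose page has a nonzero ->mapping is not mapped to user space, so
 * rx zerocopy silently stops working for such pages.
 */
static bool can_map_frag(const skb_frag_t *frag)
{
	struct page *page;

	if (skb_frag_size(frag) != PAGE_SIZE || skb_frag_off(frag))
		return false;

	page = skb_frag_page(frag);

	if (PageCompound(page) || page->mapping)
		return false;

	return true;
}

So a nonzero pp_dma_index stored in _pp_mapping_pad would make
can_map_frag() return false for every page_pool page.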
> -Toke
>
> [0] https://lore.kernel.org/all/CAHS8izPg7B5DwKfSuzz-iOop_YRbk3Sd6Y4rX7KBG9DcVJcyWg@xxxxxxxxxxxxxx/