Hi Jesper,

On Mon, Dec 05, 2022 at 04:34:10PM +0100, Jesper Dangaard Brouer wrote:
>
> On 30/11/2022 23.07, Matthew Wilcox (Oracle) wrote:
> > The MM subsystem is trying to reduce struct page to a single pointer.
> > The first step towards that is splitting struct page by its individual
> > users, as has already been done with folio and slab. This attempt
> > chooses 'netmem' as a name, but I am not even slightly committed to
> > that name, and will happily use another.
>
> I've not been able to come up with a better name, so I'm okay with
> 'netmem'. Others are of course free to bikeshed this ;-)

Same here. But if anyone has a better name, please shout.

> > There are some relatively significant reductions in kernel text
> > size from these changes. I'm not qualified to judge how they
> > might affect performance, but every call to put_page() includes
> > a call to compound_head(), which is now rather more complex
> > than it once was (at least in a distro config which enables
> > CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP).
>
> I have a micro-benchmark [1][2] that I want to run on this patchset.
> Reducing the asm code 'text' size is less likely to improve a
> microbenchmark. The 100Gbit mlx5 driver uses page_pool, so perhaps I
> can run a packet benchmark that can show the (expected) performance
> improvement.
>
> [1] https://github.com/netoptimizer/prototype-kernel/blob/master/kernel/lib/bench_page_pool_simple.c
> [2] https://github.com/netoptimizer/prototype-kernel/blob/master/kernel/lib/bench_page_pool_cross_cpu.c

If you could give it a spin, that would be great. I did apply the
patchset and it was running fine on my Arm box. I was about to run
these benchmarks, but then I remembered that they only work on x86.
I don't have any NICs supported by page_pool around.

> > I've only converted one user of the page_pool APIs to use the new
> > netmem APIs, all the others continue to use the page based ones.
>
> I guess we/netdev-devels need to update the NIC drivers that use
> page_pool.

[...]

Regards
/Ilias
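
P.S. For anyone following along who wants a feel for what such a driver
conversion might look like: the sketch below is my rough guess, not
code from the patchset. The page-based calls on the "before" side
(page_pool_dev_alloc_pages()/page_pool_put_full_page()) exist in
mainline today; the *_netmem() helper names and struct netmem usage on
the "after" side are assumptions based on the cover letter's
description, and may well differ from what the series actually adds.

    /* Hypothetical before/after sketch of an rx refill/free path.
     * The *_netmem() helper names are assumed, not taken from the
     * actual patches.
     */
    #include <net/page_pool.h>

    /* Today: the driver traffics in struct page. */
    static struct page *rx_refill(struct page_pool *pool)
    {
            return page_pool_dev_alloc_pages(pool);
    }

    static void rx_free(struct page_pool *pool, struct page *page)
    {
            page_pool_put_full_page(pool, page, false);
    }

    /* After a conversion: the call sites stay the same shape, but a
     * dedicated netmem type lets the compiler catch places that mix
     * struct page and network memory (assumed names).
     */
    static struct netmem *rx_refill_netmem(struct page_pool *pool)
    {
            return page_pool_dev_alloc_netmem(pool);
    }

    static void rx_free_netmem(struct page_pool *pool, struct netmem *nmem)
    {
            page_pool_put_full_netmem(pool, nmem, false);
    }

If that is roughly right, most of the per-driver churn should be a
mechanical type change rather than a logic change, which bodes well
for converting the remaining page_pool users.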