On Thu, Oct 10, 2024 at 2:22 PM Pavel Begunkov <asml.silence@xxxxxxxxx> wrote:
>
> > page_pool. To make matters worse, the bypass is only there if the
> > netmems are returned from io_uring, and not bypassed when the netmems
> > are returned from driver/tcp stack. I'm guessing if you reused the
> > page_pool recycling in the io_uring return path then it would remove
> > the need for your provider to implement its own recycling for the
> > io_uring return case.
> >
> > Is letting providers bypass and override the page_pool's recycling in
> > some code paths OK? IMO, no. A maintainer will make the judgement call
>
> Mina, frankly, that's nonsense. If we extend the same logic,
> devmem overrides page allocation rules with callbacks, devmem
> overrides and violates page pool buffer lifetimes by extending
> it to user space, devmem violates and overrides the page pool
> object lifetime by binding buffers to sockets. And all of it
> I'd rather name extends and enhances to fit in the devmem use
> case.
>
> > and speak authoritatively here and I will follow, but I do think it's
> > a (much) worse design.
>
> Sure, I have a completely opposite opinion, that's a much
> better approach than returning through a syscall, but I will
> agree with you that ultimately the maintainers will say if
> that's acceptable for the networking or not.

Right, I'm not suggesting that you return the pages through a syscall.
That would add syscall overhead that's better avoided, especially in
the io_uring context. Devmem TCP needed a syscall because I couldn't
figure out a non-syscall way, with sockets, for userspace to tell the
kernel that it's done with some netmems. You do not need to follow
that at all; sorry if I made it seem like you did.

What I am suggesting is that when io_uring figures out that userspace
is done with a netmem, you feed that netmem back to the pp and use the
pp's recycling, rather than adding your own recycling in the provider.
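
Roughly the shape I have in mind, as a sketch only: the function name
io_zcrx_return_niov and the niov->pp field are my guesses at your
series' internals, and page_pool_put_full_netmem() /
net_iov_to_netmem() are the netmem helpers from the devmem series in
net-next (include/net/netmem.h, include/net/page_pool/helpers.h), so
adjust to whatever actually lands:

/*
 * Sketch: io_uring has determined userspace is finished with this
 * net_iov.  Instead of pushing it onto a provider-private freelist,
 * hand it straight back to the page_pool that owns it and let the
 * pool's normal recycling (lockless cache / ptr_ring) take over.
 */
static void io_zcrx_return_niov(struct net_iov *niov)
{
	netmem_ref netmem = net_iov_to_netmem(niov);

	/* allow_direct=false: we may not be in the pool's NAPI context */
	page_pool_put_full_netmem(niov->pp, netmem, false);
}

That way the io_uring return path and the driver/tcp-stack return path
both end up in the same pp recycling code, and the provider doesn't
need its own.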