On 11/4/24 21:14, Pavel Begunkov wrote:
On 11/4/24 19:54, Mina Almasry wrote:
On Fri, Nov 1, 2024 at 2:09 PM Pavel Begunkov <asml.silence@xxxxxxxxx> wrote:
...
If you've tested the generic code paths to be performance deficient
and your recycling is indeed better, you could improve the page_pool
to pull netmems when it needs to like you're doing here, but in a
generic way that applies to the page allocator and other providers.
Not a one-off implementation that only applies to your provider.
If I read it right, you're saying you need to improve devmem TCP
instead of doing an io_uring API, just like you indirectly declared
in the very beginning a couple of weeks ago. Again, if you're
against having an io_uring user API in general or against some
particular aspects of the API, then please state it clearly. If not,
I can leave the idea to you to entertain once it's merged.
On top of it, that wouldn't make sense for the normal page pool path,
it already pushes pages via a ring (ptr_ring + caches) from one
context to another. The difference is that buffers with these zero
copy interfaces make an extra stop in the user space, from where
we directly push into the page pool, just like you can directly push
via ptr_ring when you're already in the kernel, even though it
requires more logic to handle untrusted user space.
It only makes for zero copy providers, and I do remember you was
suggested to take the same approach, I think it was Stan, but since
it didn't materialise I assume it's not of interest to devmem TCP.
--
Pavel Begunkov