On 10/28/24 12:11, Christoph Hellwig wrote:
> On Thu, Oct 24, 2024 at 05:40:02PM +0100, Pavel Begunkov wrote:
>> On 10/24/24 17:06, Christoph Hellwig wrote:
>>> On Thu, Oct 24, 2024 at 03:23:06PM +0100, Pavel Begunkov wrote:
>>>>> That's not what this series does. It adds the new memory_provider_ops
>>>>> set of hooks, with one implementation for dmabufs, and one for
>>>>> io_uring zero copy.
>>>> First, it's not a _new_ abstraction over a buffer as you called it
>>>> before, the abstraction (net_iov) is already merged.
>>> Umm, it is a new ops vector.
>> I don't understand what you mean. Callback?
> struct memory_provider_ops. It's a method table or ops vector, no
> callbacks involved.

I see, so the reply is about your earlier phrase about additional
memory abstractions:

"... don't really need to build memory buffer abstraction over
memory buffer abstraction."

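For anyone following along, the shape of such an ops vector in kernel C
is roughly the following; the member and implementation names here are
illustrative only and are not claimed to match the series exactly:

/* Illustrative sketch only, names approximate the idea rather than
 * the patchset's exact definitions. */
struct memory_provider_ops {
	int		(*init)(struct page_pool *pool);
	void		(*destroy)(struct page_pool *pool);
	netmem_ref	(*alloc_netmems)(struct page_pool *pool, gfp_t gfp);
	bool		(*release_netmem)(struct page_pool *pool,
					  netmem_ref netmem);
};

/* Hypothetical provider implementation, prototypes only: */
int  io_zc_pp_init(struct page_pool *pool);
void io_zc_pp_destroy(struct page_pool *pool);
netmem_ref io_zc_pp_alloc_netmems(struct page_pool *pool, gfp_t gfp);
bool io_zc_pp_release_netmem(struct page_pool *pool, netmem_ref netmem);

/* Each provider supplies one static instance: */
static const struct memory_provider_ops io_zc_pp_ops = {
	.init		= io_zc_pp_init,
	.destroy	= io_zc_pp_destroy,
	.alloc_netmems	= io_zc_pp_alloc_netmems,
	.release_netmem	= io_zc_pp_release_netmem,
};

That is, it's dispatched like any other ops table in the kernel: the
page pool calls through it at fixed points, rather than the provider
registering ad-hoc callbacks.
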
>>>> Then please go ahead and take a look at the patchset in question
>>>> and see how much dmabuf handling is there compared to pure
>>>> networking changes. The point that it's a new set of APIs and lots
>>>> of changes not directly related to dmabufs stands. dmabuf is useful
>>>> there as an abstraction, but it's a very long stretch to say that
>>>> the series is all about it.
>>> I did take a look, that's why I replied.
>>>> on an existing network specific abstraction, which is not restricted to
>>>> pages or anything specific in the long run, but the flow of which from
>>>> the net stack to the user and back is controlled by io_uring. If you
>>>> worry about abuse, io_uring can't even sanely initialise those buffers
>>>> itself and therefore asks the page pool code to do that.
>>> No, I worry about trying to use io_uring for no good reason. This
>> It sounds like the argument is that you just don't want any
>> io_uring APIs; I don't think I'd be able to help you with
>> that.
> No, that's completely misinterpreting what I'm saying. Of course an
> io_uring API is fine. But tying low-level implementation details to
> it is not.

It works with low-level concepts, i.e. private NIC queues, but it does
that through well established abstractions (the page pool) already
extended for such cases. There is no going directly into a driver /
hardware and hard-coding queue allocation, memory injection or anything
similar. The user API has to embrace the hardware limitations, right;
there is no way around that without completely changing the approach
and sacrificing performance and/or applicability. And treating queues
as first-class citizens is not a new concept in general.

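To make the "established abstractions" point concrete, the flow is
roughly the following simplified sketch; the struct and helper names
approximate the existing devmem TCP plumbing rather than quoting the
series verbatim. The provider is recorded in the queue's page pool
parameters and the core is asked to restart that one queue, so the
driver only ever sees an ordinary page pool:

/* Simplified sketch, not literal patchset code. */
static int attach_provider(struct net_device *dev, unsigned int rxq_idx,
			   const struct memory_provider_ops *ops, void *priv)
{
	struct netdev_rx_queue *rxq = __netif_get_rx_queue(dev, rxq_idx);

	/* Consumed by page_pool_init() when the driver recreates the queue. */
	rxq->mp_params.mp_ops = ops;
	rxq->mp_params.mp_priv = priv;

	/* Generic core helper; no driver-specific queue or memory poking. */
	return netdev_rx_queue_restart(dev, rxq_idx);
}
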
>>> precludes in-kernel uses which would be extremely useful for
>> Uses of what? devmem TCP is merged, I'm not removing it,
>> and the net_iov abstraction is in there, which can potentially
>> be reused by other in-kernel users if that'd even make sense.
> How, when you are hardcoding io_uring memory registrations instead
> of making them a generic dmabuf? Which btw would also really help

If you mean internals, making up a dmabuf that was never in the
picture in the first place is not cleaner or easier in any way. If that
changes, e.g. there is more code to reuse in the future, we can unify
it then.

If that's about the user API, you've mentioned before that it can be
pages / user pointers. As to why it goes through io_uring, I explained
it before, but in short: it gives a better API for io_uring users; we
avoid creating yet another file (a netlink socket) and keeping it
around, so we don't need to synchronise with the nl socket and/or try
to steal memory from it; and the devmem API is also too monolithic for
such purposes, so even that would need to change, i.e. splitting queue
and memory registration.

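To illustrate the API shape being argued for: registration hangs off
the ring fd through the existing io_uring_register(2) syscall, so no
extra long-lived file has to be created. The struct layout and the
opcode name below are rough approximations of the series, shown for
illustration only, not a stable definition:

/* Approximate shape, for illustration only. */
struct zcrx_ifq_reg {
	__u32	if_idx;		/* netdev to attach to */
	__u32	if_rxq;		/* HW RX queue index */
	__u64	area_ptr;	/* user memory backing the buffers */
	__u64	area_len;
	/* ... refill ring description, flags, etc. ... */
};

/* Userspace side, raw syscall with a series-defined opcode: */
syscall(__NR_io_uring_register, ring_fd, IORING_REGISTER_ZCRX_IFQ,
	&reg, 1);
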
> with pre-registering the memory with the IOMMU to get good performance
> in IOMMU-enabled setups.

The page pool already does that, just like it handles the normal
path without providers.

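For reference, the knob in question is the page pool's own DMA mapping
support: a pool created with PP_FLAG_DMA_MAP maps each page once when
it enters the pool and reuses that mapping for the page's whole
lifetime there, which is what amortises the IOMMU cost on the normal
path and with providers alike. A typical driver-side setup looks like:

#include <net/page_pool/helpers.h>

static struct page_pool *rxq_create_pool(struct device *dev, u32 pool_size)
{
	struct page_pool_params pp = {
		.flags		= PP_FLAG_DMA_MAP,
		.order		= 0,
		.pool_size	= pool_size,
		.dev		= dev,
		.dma_dir	= DMA_FROM_DEVICE,
	};

	/* Pages handed out by this pool come back pre-mapped; the driver
	 * does not touch the IOMMU on the fast path. */
	return page_pool_create(&pp);
}
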
--
Pavel Begunkov