Re: [RFC PATCH v3 07/20] io_uring: add interface queue

On 12/21/23 17:57, Willem de Bruijn wrote:
David Wei wrote:
From: David Wei <davidhwei@xxxxxxxx>

This patch introduces a new object in io_uring called an interface queue
(ifq) which contains:

* A pool region allocated by userspace and registered with io_uring,
   where Rx data is written to.
* A net device and one specific Rx queue in it that will be configured
   for ZC Rx.
* A pair of ringbuffers shared with userspace, dubbed registered buf
   (rbuf) rings. Each entry contains a pool region id and an offset + len
   within that region. The kernel writes entries into the completion ring
   to tell userspace where Rx data is relative to the start of a region.
   Userspace writes entries into the refill ring to tell the kernel when
   it is done with the data.

For now, each io_uring instance has a single ifq, and each ifq has a
single pool region associated with one Rx queue.
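
(For illustration only, not part of the patch: one way the rbuf ring
entries described above could look. The struct and field names below are
assumptions made up for the example, not the actual uapi of this series.)

/* hypothetical entry layout, for illustration only */
struct io_rbuf_cqe {            /* kernel -> user: completion ring entry */
        __u32   region_id;      /* which registered pool region */
        __u32   off;            /* offset of the Rx data within the region */
        __u32   len;            /* length of the Rx data */
        __u32   flags;
};

struct io_rbuf_rqe {            /* user -> kernel: refill ring entry */
        __u32   region_id;
        __u32   off;            /* buffer being handed back for reuse */
        __u32   len;
};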

Add a new opcode to io_uring_register that sets up an ifq. Size and
offsets of shared ringbuffers are returned to userspace for it to mmap.
The implementation will be added in a later patch.
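
(Again purely a sketch: a rough userspace view of that setup flow. The
opcode IORING_REGISTER_ZC_RX and struct io_uring_zc_rx_reg below are
assumed names for illustration, not the real uapi.)

#include <err.h>
#include <linux/types.h>
#include <net/if.h>
#include <stdint.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

/* hypothetical registration argument, for illustration only */
struct io_uring_zc_rx_reg {
        __u64   region_addr;    /* userspace pool region */
        __u64   region_len;
        __u32   ifindex;        /* net device */
        __u32   rx_queue;       /* Rx queue to configure for ZC Rx */
        __u32   rq_entries;     /* requested refill ring size */
        __u32   __pad;
        /* filled in by the kernel: size and offsets to mmap the rings */
        __u64   mmap_sz;
        __u64   rq_off;
        __u64   cq_off;
};

static void *setup_ifq(int ring_fd, void *pool, __u64 pool_len)
{
        struct io_uring_zc_rx_reg reg = {
                .region_addr = (uintptr_t)pool,
                .region_len  = pool_len,
                .ifindex     = if_nametoindex("eth0"),
                .rx_queue    = 0,
                .rq_entries  = 1024,
        };

        if (syscall(__NR_io_uring_register, ring_fd,
                    IORING_REGISTER_ZC_RX, &reg, 1))
                err(1, "register ifq");

        /* mmap the shared rbuf rings at the offsets the kernel returned */
        return mmap(NULL, reg.mmap_sz, PROT_READ | PROT_WRITE,
                    MAP_SHARED | MAP_POPULATE, ring_fd, reg.rq_off);
}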

Signed-off-by: David Wei <dw@xxxxxxxxxxx>

This is quite similar to AF_XDP, of course. Is it at all possible to
reuse all or some of that? If not, why not?

Let me rather ask what you have in mind for reuse. I'm not too
intimately familiar with XDP, but I don't see what we can take.

Queue formats will be different, and in the next revisions there won't
be a separate CQ for ZC; completions will all land in the main io_uring
CQ. io_uring also supports multiple sockets per ZC ifq and other quirks
reflected in the uapi.

Receive has to work with generic sockets and skbs if we want
to be able to reuse the protocol stack. Queue allocation and
mapping are similar, but that is one thing that should be bound to
the API (i.e. io_uring vs AF_XDP) together with locking and
synchronisation. Wakeups are different as well.

And IIUC AF_XDP still operates on raw packets quite early
in the stack, while io_uring completes from a syscall, which
would definitely make the synchronisation diverge a lot.

I don't see many opportunities here.

As a side effect, unification would also show a path of moving AF_XDP
from its custom allocator to the page_pool infra.

I assume it's about xsk_buff_alloc() and the likes of it. My knowledge
is lacking here; it's much better to ask the XDP guys what they think
about moving to the page pool, whether it's needed, etc. And if so, it'd
likely be easier to base it on the raw page pool provider API than on
the io_uring provider implementation, probably having some common
helpers if things come to that.

Related: what is the story wrt the process crashing while user memory
is posted to the NIC or present in the kernel stack.

Buffers are pinned by io_uring. If the process crashes, closing the
ring, io_uring will release the pp provider and wait for all buffers
to come back before unpinning pages and freeing the rest. I.e.
it's not going to unpin before the pp's ->destroy is called.
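
As a rough sketch of that ordering (the io_zc_ifq fields and helpers
below are made up for illustration, and the ->destroy timing follows the
description above rather than being lifted from the code):

/* hypothetical teardown ordering, for illustration only */
static void io_zc_ifq_pp_destroy(struct page_pool *pp)
{
        struct io_zc_ifq *ifq = pp->mp_priv;    /* assumed provider priv */

        /* assumed to run once all outstanding buffers are back in the pool */
        complete(&ifq->pp_done);
}

static void io_zc_ifq_free(struct io_zc_ifq *ifq)
{
        io_zc_ifq_detach(ifq);                  /* ring closed: drop the provider */
        wait_for_completion(&ifq->pp_done);     /* wait for pp ->destroy */

        /* only now is it safe to unpin and free the user-backed region */
        unpin_user_pages(ifq->pages, ifq->nr_pages);
        kvfree(ifq->pages);
}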

SO_DEVMEM already demonstrates zerocopy into user buffers using usdma.
To a certain extent, that and asynchronous I/O with io_uring are two
independent goals. SO_DEVMEM imposes limitations on the stack because
it might hold opaque device mem. That is too strong for this case.

Basing it on ppiov simplifies refcounting a lot; with that we
don't need any dirty hacks nor any extra changes in the stack,
and I think it's aligned with the net stack goals. What I think
we can do on top is allow ppiovs to optionally have pages
(via a ->get_page callback), and use it in those rare cases
when someone has to peek at the payload.
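
Something along these lines, sketching the idea rather than a concrete
proposal (the ops struct and helper are made up, and page_pool_iov is
assumed as the ppiov type name):

/* hypothetical optional page lookup for a ppiov, for illustration only */
struct ppiov_ops {
        /*
         * NULL for providers whose buffers never have a struct page
         * (e.g. opaque device memory); set for host-memory providers.
         */
        struct page *(*get_page)(struct page_pool_iov *ppiov);
};

static inline struct page *ppiov_get_page(struct page_pool_iov *ppiov)
{
        if (!ppiov->ops || !ppiov->ops->get_page)
                return NULL;    /* caller has to cope without a page */
        return ppiov->ops->get_page(ppiov);
}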

But for this io_uring provider, is there anything io_uring specific about
it beyond being user memory? If not, maybe just call it a umem
provider, and anticipate it being usable for AF_XDP in the future too?

Queue formats with a set of features, synchronisation, etc. are mostly
answered above, but I also think it should be as easy to just have
a separate provider and reuse some code later if there is anything
to reuse.

Besides delivery up to the intended socket, packets may also end up
in other code paths, such as packet sockets or forwarding. All of
this is simpler with userspace backed buffers than with device mem.
But good to call out explicitly how this is handled. MSG_ZEROCOPY
makes a deep packet copy in unexpected code paths, for instance, to
avoid indefinite latency to buffer reclaim.
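
(For reference, a simplified sketch of how that existing fallback looks
on such paths; deliver_to_tap()/do_deliver() are placeholders, but
skb_orphan_frags_rx() is the real helper that ends up calling
skb_copy_ubufs() for zerocopy skbs:)

#include <linux/skbuff.h>

static int deliver_to_tap(struct sk_buff *skb)
{
        /*
         * If the skb still references user pages, copy them into kernel
         * memory so the tap cannot pin user buffers indefinitely.
         */
        if (skb_orphan_frags_rx(skb, GFP_ATOMIC))
                return -ENOMEM;         /* copy failed, drop */

        return do_deliver(skb);         /* placeholder for actual delivery */
}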

Yeah, that's concerning. I intend to add something for the sockets
we use, but there is nothing for truly unexpected paths. How does
devmem handle it?

It's probably not a huge worry for now; I expect killing the
task/sockets should resolve the dependencies, but it would be great to
find such scenarios. I'd appreciate any pointers if you have some in mind.

--
Pavel Begunkov



