Re: io_uring-only sendmsg + recvmsg zerocopy

On 11/11/2020 16:49, Victor Stewart wrote:
> On Wed, Nov 11, 2020 at 1:00 AM Pavel Begunkov <asml.silence@xxxxxxxxx> wrote:
>> On 11/11/2020 00:07, Victor Stewart wrote:
>>> On Tue, Nov 10, 2020 at 11:26 PM Pavel Begunkov <asml.silence@xxxxxxxxx> wrote:
>>>>> NIC ACKs, instead of finding the socket's error queue and putting the
>>>>> completion there like MSG_ZEROCOPY, the kernel would find the io_uring
>>>>> instance the socket is registered to and call into an io_uring
>>>>> sendmsg_zerocopy_completion function. Then the cqe would get pushed
>>>>> onto the completion queue.
>>>>>
>>>>> the "recvmsg zerocopy" is straightforward enough. mimicking
>>>>> TCP_ZEROCOPY_RECEIVE, i'll go into specifics next time.
>>>>
>>>> Receive side is inherently messed up. IIRC, TCP_ZEROCOPY_RECEIVE just
>>>> maps skbuffs into userspace, and in general unless there is a better
>>>> suited protocol (e.g. infiniband with richer src/dst tagging) or a very
>>>> very smart NIC, "true zerocopy" is not possible without breaking
>>>> multiplexing.
>>>>
>>>> For registered buffers you still need to copy skbuff, at least because
>>>> of security implications.
>>>
>>> we can actually just force those buffers to be mmap-ed, and then when
>>> packets arrive use vm_insert_pin or remap_pfn_range to change the
>>> physical pages backing the virtual memory pages submitted for reading
>>> via msg_iov. so it's transparent to userspace but still zerocopy.
>>> (might require the user to notify io_uring when reading is
>>> completed... but no matter).
>>
>> Yes, with io_uring zerocopy-recv may be done better than
>> TCP_ZEROCOPY_RECEIVE but
>> 1) it's still a remap. Yes, zerocopy, but not ideal
>> 2) won't work with registered buffers, which are basically a set
>> of pinned pages that have a userspace mapping. After such a remap
>> that mapping wouldn't be in sync, and that gets messy.
> 
> well unless we can eliminate all copies, there isn’t any point,
> because then it isn’t zerocopy.
> 
> so in my server, i have a ceiling on the number of clients,
> preallocate them, and mmap anonymous noreserve read + write buffers
> for each.
> 
> so say, 150,000 clients x (2MB * 2), which is ~585GB. way more than the
> physical memory of my machine. (and i have 10 instances of it per
> machine, so ~6TB lol). but at any one time probably 0.01% of that
> memory is in use. and i just MADV_COLD the pages after consumption.
> 
> this provides a persistent “vmem contiguous” stream buffer per client,
> which has a litany of benefits. but if we persistently pin pages, this
> ceases to work, because pinned pages require persistent physical
> memory backing them.
> 
> But on the send side, if you don’t pin persistently, you’d have to pin
> on demand, which costs more than it’s worth for sends less than ~10KB.

having it non-contiguous and doing round-robin IMHO would be a better shot
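
As a concrete illustration of the per-client stream buffer scheme quoted
above (anonymous NORESERVE mappings, MADV_COLD after consumption), a
minimal sketch; the 2MB sizes follow the numbers above, the function
names are made up, error handling is omitted, and MADV_COLD needs
Linux 5.4+ with matching headers:

#include <sys/mman.h>
#include <stddef.h>

#define STREAM_BUF_SIZE (2UL * 1024 * 1024)	/* 2MB read + 2MB write per client */

/* Reserve virtual address space only: with MAP_ANONYMOUS | MAP_NORESERVE
 * physical pages are faulted in lazily on first touch, so 150k clients
 * can each own 4MB of address space without 600GB of RAM. */
static void *alloc_client_buf(void)
{
	return mmap(NULL, 2 * STREAM_BUF_SIZE, PROT_READ | PROT_WRITE,
		    MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
}

/* After a (page-aligned) chunk of the stream has been consumed, tell the
 * kernel its backing pages are cold so they can be reclaimed first. */
static void stream_buf_consumed(void *addr, size_t len)
{
	madvise(addr, len, MADV_COLD);
}

As noted above, this breaks down as soon as the same pages have to stay
pinned for DMA, since pinning keeps the physical backing resident.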

> And I guess there’s no way to avoid pinning and maintain kernel
> integrity. Maybe we could erase those userspace -> physical page
> mappings, then recreate them once the operation completes, but 1) that
> would require page-aligned sends so that you could keep writing and
> sending while you waited for completions, and 2) beyond being
> nonstandard and possibly unsafe, who says that would even cost less
> than pinning? It definitely costs something, and might cost more
> because you’d have to take page table locks.
> 
> So essentially on the send side the only way to zerocopy for free is
> to persistently pin (and give up my per client stream buffers).
> 
> On the receive side actually the only way to realistically do zerocopy
> is to somehow pin a NIC RX queue to a process, and then persistently
> map the queue into the process’s memory as read only. That’s a
> security absurdity in the general case, but it could be root-only
> usage. Then you’d recvmsg with a NULL msg_iov[0].iov_base, and have
> the packet buffer location and length written in. Might require driver
> buy-in, so might be impractical, but unsure.

https://blogs.oracle.com/linux/zero-copy-networking-in-uek6
scroll to AF_XDP
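
For reference, that AF_XDP model (a NIC RX queue bound to one process,
with packet buffers living in a user-mapped UMEM) looks roughly like the
following with libbpf's xsk helpers (<bpf/xsk.h> at the time, nowadays in
libxdp). This is only a hedged sketch: "eth0" and queue 0 are
placeholders, error handling and the fill/RX ring processing are elided:

#include <stdlib.h>
#include <unistd.h>
#include <bpf/xsk.h>

#define NUM_FRAMES 4096

int main(void)
{
	struct xsk_umem *umem;
	struct xsk_socket *xsk;
	struct xsk_ring_prod fill, tx;
	struct xsk_ring_cons comp, rx;
	void *bufs;
	size_t size = (size_t)NUM_FRAMES * XSK_UMEM__DEFAULT_FRAME_SIZE;

	/* UMEM: one user-owned, page-aligned region the NIC DMAs packets into. */
	if (posix_memalign(&bufs, getpagesize(), size))
		return 1;
	if (xsk_umem__create(&umem, bufs, size, &fill, &comp, NULL))
		return 1;

	/* Bind to one RX queue of the interface -- this is the
	 * "pin an RX queue to a process" part discussed above. */
	if (xsk_socket__create(&xsk, "eth0", 0, umem, &rx, &tx, NULL))
		return 1;

	/* ... post frame addresses to the fill ring, poll the RX ring,
	 * and read packet data straight out of `bufs` with no copy ... */

	xsk_socket__delete(xsk);
	xsk_umem__delete(umem);
	return 0;
}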

> 
> Otherwise the only option is the even worse nightmare of how
> TCP_ZEROCOPY_RECEIVE works, which is ridiculously impractical for
> general purpose use…

Well, that's not so bad, an API with io_uring might be much better, but
it would still require an unmap. However, depending on the use case, the
overhead for small packets and/or an mm shared between many threads can
potentially be a deal breaker.
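
For context, the current TCP_ZEROCOPY_RECEIVE flow looks roughly like
this (a minimal sketch: the caller has already mmap'ed a PROT_READ,
MAP_SHARED window over the TCP socket itself and passes it in as `addr`;
error handling and the recv() fallback for unaligned bytes are omitted):

#include <sys/socket.h>
#include <sys/types.h>
#include <netinet/in.h>
#include <linux/tcp.h>		/* TCP_ZEROCOPY_RECEIVE, struct tcp_zerocopy_receive */
#include <stdint.h>

/* Map up to `chunk` bytes of received data at `addr` without copying.
 * Returns the number of bytes now readable at `addr`, or -1 on error. */
static ssize_t zc_recv(int fd, void *addr, uint32_t chunk)
{
	struct tcp_zerocopy_receive zc = {
		.address = (uint64_t)(unsigned long)addr,
		.length  = chunk,
	};
	socklen_t len = sizeof(zc);

	if (getsockopt(fd, IPPROTO_TCP, TCP_ZEROCOPY_RECEIVE, &zc, &len))
		return -1;

	/* zc.recv_skip_hint bytes are not page-aligned and must still be
	 * copied out with a plain recv(); skipped in this sketch. */
	return zc.length;
}

Once the data is consumed the pages have to be given back, e.g. with
madvise(addr, len, MADV_DONTNEED) or munmap(), which is the unmap cost
mentioned above.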

> “Mapping of memory into a process's address space is done on a
> per-page granularity; there is no way to map a fraction of a page. So
> inbound network data must be both page-aligned and page-sized when it
> ends up in the receive buffer, or it will not be possible to map it
> into user space. Alignment can be a bit tricky because the packets
> coming out of the interface start with the protocol headers, not the
> data the receiving process is interested in. It is the data that must
> be aligned, not the headers. Achieving this alignment is possible, but
> it requires cooperation from the network interface

should support scatter-gather in other words

> 
> It is also necessary to ensure that the data arrives in chunks that
> are a multiple of the system's page size, or partial pages of data
> will result. That can be done by setting the maximum transfer unit
> (MTU) size properly on the interface. That, in turn, can require
> knowledge of exactly what the incoming packets will look like; in a
> test program posted with the patch set, Dumazet sets the MTU to
> 61,512. That turns out to be space for fifteen 4096-byte pages of
> data, plus 40 bytes for the IPv6 header and 32 bytes for the TCP
> header.”
> 
> https://lwn.net/Articles/752188/
> 
> Either receive case also makes my persistent per client stream buffer
> zerocopy impossible lol.

it depends

> 
> in short, zerocopy sendmsg with persistently pinned buffers is
> definitely possible and we should do that. (I'll just make it work on
> my end).
> 
> recvmsg i'll have to do more research into the practicality of what I
> proposed above.

1. NIC is smart enough and can locate the end (userspace) buffer and
DMA there directly. That requires parsing TCP/UDP headers, etc., or
having a more versatile API like infiniband, plus extra NIC features.

2. map skbuffs into userspace as TCP_ZEROCOPY_RECEIVE does.
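
And for completeness on the send side, the existing MSG_ZEROCOPY path
whose error-queue notifications the proposal at the top of the thread
would turn into CQEs; a rough IPv4/TCP sketch with error handling
omitted and the fallback-copy case ignored:

#include <stddef.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <linux/errqueue.h>	/* struct sock_extended_err, SO_EE_ORIGIN_ZEROCOPY */

#ifndef SO_ZEROCOPY
#define SO_ZEROCOPY	60
#endif
#ifndef MSG_ZEROCOPY
#define MSG_ZEROCOPY	0x4000000
#endif

/* Opt in once per socket, then send without copying: the user pages are
 * pinned until the NIC is done with them instead of being copied. */
static void zc_send(int fd, const void *buf, size_t len)
{
	int one = 1;

	setsockopt(fd, SOL_SOCKET, SO_ZEROCOPY, &one, sizeof(one));
	send(fd, buf, len, MSG_ZEROCOPY);
}

/* Completion ("the pages are free to reuse") arrives on the error queue;
 * this is the part an io_uring-native API would post as a CQE instead. */
static void zc_reap(int fd)
{
	char control[128];
	struct msghdr msg = {
		.msg_control	= control,
		.msg_controllen	= sizeof(control),
	};
	struct cmsghdr *cm;

	if (recvmsg(fd, &msg, MSG_ERRQUEUE) < 0)
		return;

	for (cm = CMSG_FIRSTHDR(&msg); cm; cm = CMSG_NXTHDR(&msg, cm)) {
		if (cm->cmsg_level != SOL_IP || cm->cmsg_type != IP_RECVERR)
			continue;

		struct sock_extended_err *serr = (void *)CMSG_DATA(cm);

		if (serr->ee_origin == SO_EE_ORIGIN_ZEROCOPY) {
			/* serr->ee_info..serr->ee_data is the range of
			 * completed zerocopy sends for this socket. */
		}
	}
}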

-- 
Pavel Begunkov


