Re: [RFC PATCH 00/10] Device Memory TCP

On Sun, Jul 16, 2023 at 7:41 PM Andy Lutomirski <luto@xxxxxxxxxx> wrote:
>
> On 7/10/23 15:32, Mina Almasry wrote:
> > * TL;DR:
> >
> > Device memory TCP (devmem TCP) is a proposal for transferring data to and/or
> > from device memory efficiently, without bouncing the data to a host memory
> > buffer.
>
> (I'm writing this as someone who might plausibly use this mechanism, but
> I don't think I'm very likely to end up working on the kernel side,
> unless I somehow feel extremely inspired to implement it for i40e.)
>
> I looked at these patches and the GVE tree, and I'm trying to wrap my
> head around the data path.  As I understand it, for RX:
>
> 1. The GVE driver notices that the queue is programmed to use devmem,
> and it programs the NIC to copy packet payloads to the devmem that has
> been programmed.
> 2. The NIC receives the packet and copies the header to kernel memory
> and the payload to dma-buf memory.
> 3. The kernel tells userspace where in the dma-buf the data is.
> 4. Userspace does something with the data.
> 5. Userspace does DONTNEED to recycle the memory and make it available
> for new received packets.
>
> Did I get this right?
>

Sorry for the late reply. I'm a bit buried working on the follow-up to
this proposal: exploring the use of dma-bufs without pages.

Yes, this is completely correct.
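
To make steps 3-5 above concrete, the userspace side ends up looking
very roughly like the sketch below. MSG_SOCK_DEVMEM is real in this
series; the cmsg payload struct and the token/recycle names are
placeholders rather than the exact uapi in the patches (which may well
change in the next revision):

/* Rough sketch of the devmem TCP receive loop; uapi names other than
 * MSG_SOCK_DEVMEM are placeholders.
 */
#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

#ifndef MSG_SOCK_DEVMEM
#define MSG_SOCK_DEVMEM 0x2000000	/* placeholder if not in your headers */
#endif

struct devmem_frag {			/* placeholder cmsg payload */
	uint32_t frag_offset;		/* payload location within the dma-buf */
	uint32_t frag_size;
	uint32_t frag_token;		/* handed back later to recycle the buffer */
};

static int rx_once(int fd)
{
	char linear[4096];		/* any non-devmem (linear) bytes land here */
	char ctrl[CMSG_SPACE(sizeof(struct devmem_frag)) * 16];
	struct iovec iov = { .iov_base = linear, .iov_len = sizeof(linear) };
	struct msghdr msg = {
		.msg_iov = &iov,
		.msg_iovlen = 1,
		.msg_control = ctrl,
		.msg_controllen = sizeof(ctrl),
	};
	struct cmsghdr *cm;
	ssize_t n;

	/* Payloads stay in the dma-buf; cmsgs describe where they landed. */
	n = recvmsg(fd, &msg, MSG_SOCK_DEVMEM);
	if (n <= 0)
		return n < 0 ? -errno : 0;

	for (cm = CMSG_FIRSTHDR(&msg); cm; cm = CMSG_NXTHDR(&msg, cm)) {
		struct devmem_frag frag;

		/* A real consumer would check cmsg_level/cmsg_type against
		 * the uapi in the series here. */
		memcpy(&frag, CMSG_DATA(cm), sizeof(frag));
		printf("payload at dma-buf offset %u, len %u\n",
		       frag.frag_offset, frag.frag_size);

		/* Step 5: return frag.frag_token via a DONTNEED-style
		 * setsockopt so the kernel can reuse the buffer. */
	}
	return 0;
}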

> This seems a bit awkward if there's any chance that packets not intended
> for the target device end up in the rxq.
>

It is a bit, yes. In practice we use RSS to steer general traffic away
from the devmem queues, and flow steering to direct specific flows to
the devmem queues.
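
For reference, that kind of steering can be done with the standard
ntuple interface (what ethtool -N configures), assuming the NIC/driver
supports it. Via the ioctl it looks roughly like this sketch:

/* Sketch: steer a TCP flow (by destination port) to a given RX queue
 * using the standard ethtool ntuple interface.
 */
#include <arpa/inet.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>
#include <net/if.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>

static int steer_port_to_queue(const char *ifname, uint16_t dst_port,
			       uint32_t queue)
{
	struct ethtool_rxnfc nfc = {
		.cmd = ETHTOOL_SRXCLSRLINS,
		.fs = {
			.flow_type = TCP_V4_FLOW,
			.ring_cookie = queue,		/* target RX queue */
			.location = RX_CLS_LOC_ANY,	/* let the driver pick a slot */
		},
	};
	struct ifreq ifr;
	int fd, ret;

	/* Exact match on the destination port only (all-ones mask). */
	nfc.fs.h_u.tcp_ip4_spec.pdst = htons(dst_port);
	nfc.fs.m_u.tcp_ip4_spec.pdst = 0xffff;

	fd = socket(AF_INET, SOCK_DGRAM, 0);
	if (fd < 0)
		return -1;

	memset(&ifr, 0, sizeof(ifr));
	strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
	ifr.ifr_data = (char *)&nfc;

	ret = ioctl(fd, SIOCETHTOOL, &ifr);
	if (ret < 0)
		perror("ETHTOOL_SRXCLSRLINS");
	close(fd);
	return ret;
}

e.g. steer_port_to_queue("eth1", 5001, 15) to pin a flow with
destination port 5001 onto queue 15 (interface, port and queue number
are just examples), with RSS configured to keep everything else off
that queue.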

If the RSS/flow steering configuration is incorrect, the user ends up
calling recvmsg() on a devmem skb, and if they haven't specified the
MSG_SOCK_DEVMEM flag they get an error back.

> I'm wondering if a more capable if somewhat higher latency model could
> work where the NIC stores received packets in its own device memory.
> Then userspace (or the kernel or a driver or whatever) could initiate a
> separate DMA from the NIC to the final target *after* reading the
> headers.  Can the hardware support this?
>

Not that I know of. I believe Jakub responded with the same answer.

> Another way of putting this is: steering received data to a specific
> device based on the *receive queue* forces the logic selecting a
> destination device to be the same as the logic selecting the queue.  RX
> steering logic is pretty limited on most hardware (as far as I know --
> certainly I've never had much luck doing anything especially intelligent
> with RX flow steering, and I've tried on a couple of different brands of
> supposedly fancy NICs).  But Linux has very nice capabilities to direct
> packets, in software, to where they are supposed to go, and it would be
> nice if all that logic could just work, scalably, with device memory.
> If Linux could examine headers *before* the payload gets DMAed to
> wherever it goes, I think this could plausibly work quite nicely.  One
> could even have an easy-to-use interface in which one directs a *socket*
> to a PCIe device.  I expect, although I've never looked at the
> datasheets, that the kernel could even efficiently make rx decisions
> based on data in device memory on upcoming CXL NICs where device memory
> could participate in the host cache hierarchy.
>
> My real ulterior motive is that I think it would be great to use an
> ability like this for DPDK-like uses.  Wouldn't it be nifty if I could
> open a normal TCP socket, then, after it's open, ask the kernel to
> kindly DMA the results directly to my application memory (via udmabuf,
> perhaps)?  Or have a whole VLAN or macvlan get directed to a userspace
> queue, etc?
>
>
> It also seems a bit odd to me that the binding from rxq to dma-buf is
> established by programming the dma-buf.

That is specific to this proposal, and will likely be very different
in future ones. I thought the dma-buf pages approach was extensible
and that the uapi belonged somewhere in dma-buf. Clearly not. The next
proposal, I think, will program the rxq via some networking uapi and
will take the dma-buf as an input, probably through a netlink API (not
sure yet whether in the ethtool family or elsewhere). I'm working out
the details of this non-paged networking approach first.
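
Just to give the shape of it (purely illustrative, not a proposed
uapi), the idea is that the binding would be expressed as netdev-side
attributes that take the dma-buf fd as one input, something like:

/* Hypothetical netlink attributes for an rxq <-> dma-buf binding;
 * names and layout are made up for illustration only.
 */
enum {
	HYP_DEVMEM_A_UNSPEC,
	HYP_DEVMEM_A_IFINDEX,		/* u32: netdev to configure */
	HYP_DEVMEM_A_RXQ_IDX,		/* u32: rx queue to bind */
	HYP_DEVMEM_A_DMABUF_FD,		/* u32: dma-buf fd supplied by userspace */
};

i.e. the binding becomes a property of the netdev/queue, with the
dma-buf only as an input to that configuration.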

> This makes the security model
> (and the mental model) awkward -- this binding is a setting on the
> *queue*, not the dma-buf, and in a containerized or privilege-separated
> system, a process could have enough privilege to make a dma-buf
> somewhere but not have any privileges on the NIC.  (And may not even
> have the NIC present in its network namespace!)
>
> --Andy



--
Thanks,
Mina



