Re: [RFC PATCH 00/10] Device Memory TCP

On Tue, Jul 18, 2023 at 3:45 PM Jakub Kicinski <kuba@xxxxxxxxxx> wrote:
>
> On Tue, 18 Jul 2023 16:35:17 -0600 David Ahern wrote:
> > I do not see how 1 RSS context (or more specifically a h/w Rx queue) can
> > be used properly with memory from different processes (or dma-buf
> > references).

Right, my experience with dma-bufs from GPUs is that they're
allocated in userspace and owned by the process that allocated the
backing GPU memory and generated the dma-buf from it. I.e., we're
limited to 1 dma-buf per RX queue. If we enable binding multiple
dma-bufs to the same RX queue, we have a problem, because AFAIU the
NIC can't decide which dma-buf to put the packet into (it hasn't
parsed the packet's destination yet).

> > When the process dies, that memory needs to be flushed from
> > the H/W queues. Queues with interlaced submissions make that more
> > complicated.
>

When the process dies, do we really want to flush the memory from the
hardware queues? The drivers I looked at don't seem to have a function
to flush the rx queues alone; they usually do an entire driver reset
to achieve that. I'm not sure if that's just convenience or a
technical limitation. Do we really want to trigger a driver reset in
the event that a userspace process crashes?

> Agreed, one process, one control path socket.
>
> FWIW the rtnetlink use of netlink is very basic. genetlink already has
> some infra which allows associate state with a user socket and cleaning
> it up when the socket gets closed. This needs some improvements. A bit
> of a chicken and egg problem, I can't make the improvements until there
> are families making use of it, and nobody will make use of it until
> it's in tree... But the basics are already in place and I can help with
> building it out.
>

I had this approach in mind (which doesn't need netlink improvements)
for the next POC. It's mostly inspired by the comments from the cover
letter of Jakub's memory-provider RFC, if I understood it correctly.
I'm sure there's going to be some iteration, but roughly:

1. A netlink CAP_NET_ADMIN API which binds the dma-buf to any number
of rx queues on 1 NIC. It will do the dma_buf_attach() and
dma_buf_map_attachment() and leave some indicator in the struct
net_device to tell the NIC that it's bound to a dma-buf. The actual
binding doesn't take effect until the next driver reset. The API
could, I guess, also trigger a driver reset (or just a refill of the
rx queues, if you think that's feasible) to streamline things a bit.
The API returns a file handle to the user representing that binding.
(A rough sketch of this flow follows after step 4.)

2. On the driver reset, the driver notices that its struct net_device
is bound to a dma-buf, and sets up the dma-buf memory-provider instead
of the basic one which provides host memory.

3. The user can close the file handle received in #1 to unbind the
dma-buf from the rx queues. Or, if the user crashes, the kernel
closes the handle for us. The unbind doesn't take effect until the
next flush of the rx queues, or the next driver reset (not sure the
former is feasible).

4. The dma-buf memory provider keeps the dma-buf mapping alive until
the next driver reset, at which point all the dma-buf slices are
freed and the dma-buf attachment mapping can be unmapped.
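
To make steps 1-2 a bit more concrete, here's a minimal kernel-side
sketch of what I mean by the bind. All of the names here
(netdev_dmabuf_binding, netdev_bind_dmabuf, the dev->dmabuf_binding
field) are made up for illustration; only dma_buf_get(),
dma_buf_attach() and dma_buf_map_attachment() are the existing
dma-buf API:

#include <linux/dma-buf.h>
#include <linux/netdevice.h>
#include <linux/slab.h>

/* Illustrative sketch only -- all names below are hypothetical. */
struct netdev_dmabuf_binding {
    struct dma_buf *dmabuf;
    struct dma_buf_attachment *attachment;
    struct sg_table *sgt;
    unsigned long rxq_bitmap;   /* rx queues covered by this binding */
};

/* Step 1: the netlink CAP_NET_ADMIN op ends up here. */
static int netdev_bind_dmabuf(struct net_device *dev, int dmabuf_fd,
                              unsigned long rxq_bitmap,
                              struct netdev_dmabuf_binding **out)
{
    struct netdev_dmabuf_binding *binding;
    int err;

    binding = kzalloc(sizeof(*binding), GFP_KERNEL);
    if (!binding)
        return -ENOMEM;

    binding->dmabuf = dma_buf_get(dmabuf_fd);
    if (IS_ERR(binding->dmabuf)) {
        err = PTR_ERR(binding->dmabuf);
        goto err_free;
    }

    binding->attachment = dma_buf_attach(binding->dmabuf, dev->dev.parent);
    if (IS_ERR(binding->attachment)) {
        err = PTR_ERR(binding->attachment);
        goto err_put;
    }

    binding->sgt = dma_buf_map_attachment(binding->attachment,
                                          DMA_FROM_DEVICE);
    if (IS_ERR(binding->sgt)) {
        err = PTR_ERR(binding->sgt);
        goto err_detach;
    }

    binding->rxq_bitmap = rxq_bitmap;

    /* The "indicator" mentioned in step 1: the driver checks this on
     * its next reset (step 2) and sets up the dma-buf memory provider
     * instead of the host-memory one. Hypothetical field.
     */
    dev->dmabuf_binding = binding;

    *out = binding;
    return 0;

err_detach:
    dma_buf_detach(binding->dmabuf, binding->attachment);
err_put:
    dma_buf_put(binding->dmabuf);
err_free:
    kfree(binding);
    return err;
}

The driver reset path (step 2) would then just check
dev->dmabuf_binding and instantiate the dma-buf memory provider
instead of the host-memory one.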

I'm thinking the user sets up RSS and flow steering outside this API
using existing ethtool APIs, but things could be streamlined a bit by
doing some of these RSS/flow steering steps together with the dma-buf
binding/unbinding. The complication with setting up flow steering
together with the dma-buf bind/unbind is that the application may
start more connections after the bind, and it will need to install
flow steering rules for those too, using the ethtool API for that.
May as well use the ethtool APIs for all of it...? (A minimal example
of such a rule is below.)
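
For reference, installing one of those flow steering rules with
today's ethtool n-tuple interface looks roughly like this from
userspace (the equivalent of "ethtool -N eth0 flow-type tcp4
dst-port 5201 action 8"; the port and queue number are made-up
values):

#include <arpa/inet.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>
#include <net/if.h>
#include <string.h>
#include <sys/ioctl.h>

/* Steer TCP flows with dst port 5201 into rx queue 8 (a queue we've
 * bound the dma-buf to). "sock" is any AF_INET socket.
 */
static int install_steering_rule(int sock, const char *ifname)
{
    struct ethtool_rxnfc nfc = {
        .cmd = ETHTOOL_SRXCLSRLINS,
        .fs = {
            .flow_type = TCP_V4_FLOW,
            .ring_cookie = 8,            /* target rx queue */
            .location = RX_CLS_LOC_ANY,  /* let the driver pick a slot */
        },
    };
    struct ifreq ifr = {};

    nfc.fs.h_u.tcp_ip4_spec.pdst = htons(5201);
    nfc.fs.m_u.tcp_ip4_spec.pdst = htons(0xffff);

    strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
    ifr.ifr_data = (char *)&nfc;

    return ioctl(sock, SIOCETHTOOL, &ifr);
}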

From Jakub's and David's comments it sounds like (if I understood
correctly) you'd like to tie the dma-buf bind/unbind functions to the
lifetime of a netlink socket, rather than to a struct file as I was
thinking. That does sound cleaner, but I'm not sure how. Can you link
me to any existing code examples, or give rough pointers to existing
code? For reference, the struct-file approach I had in mind is
sketched below.
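
Roughly (netdev_unbind_dmabuf() being the hypothetical inverse of the
bind in the sketch after step 4; anon_inode_getfd() is the existing
helper):

#include <linux/anon_inodes.h>
#include <linux/fs.h>
#include <linux/module.h>

static int netdev_dmabuf_binding_release(struct inode *inode,
                                         struct file *file)
{
    struct netdev_dmabuf_binding *binding = file->private_data;

    /* Hypothetical inverse of netdev_bind_dmabuf(): mark the binding
     * dead; the actual unmap happens on the next rx queue flush or
     * driver reset (steps 3/4 above).
     */
    netdev_unbind_dmabuf(binding);
    return 0;
}

static const struct file_operations netdev_dmabuf_binding_fops = {
    .owner   = THIS_MODULE,
    .release = netdev_dmabuf_binding_release,
};

/* Called at the end of the netlink bind op (step 1) to hand the
 * binding back to userspace as an fd, so close() or process exit
 * triggers the release callback above.
 */
static int netdev_dmabuf_binding_getfd(struct netdev_dmabuf_binding *binding)
{
    return anon_inode_getfd("[netdev_dmabuf]", &netdev_dmabuf_binding_fops,
                            binding, O_RDONLY | O_CLOEXEC);
}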

> > I guess the devil is in the details; I look forward to the evolution of
> > the patches.
>
> +1



-- 
Thanks,
Mina