Re: [PATCH net-next 00/13] Add mlx5 subfunction support

On Fri, 2020-11-20 at 11:04 -0800, Samudrala, Sridhar wrote:
> 
> On 11/20/2020 9:58 AM, Alexander Duyck wrote:
> > On Thu, Nov 19, 2020 at 5:29 PM Jakub Kicinski <kuba@xxxxxxxxxx>
> > wrote:
> > > On Wed, 18 Nov 2020 21:35:29 -0700 David Ahern wrote:
> > > > On 11/18/20 7:14 PM, Jakub Kicinski wrote:
> > > > > On Tue, 17 Nov 2020 14:49:54 -0400 Jason Gunthorpe wrote:
> > > > > > On Tue, Nov 17, 2020 at 09:11:20AM -0800, Jakub Kicinski
> > > > > > wrote:
> > > > > > 
> > > > > > > > Just to refresh all our memory, we discussed and
> > > > > > > > settled on the flow
> > > > > > > > in [2]; RFC [1] followed this discussion.
> > > > > > > > 
> > > > > > > > vdpa tool of [3] can add one or more vdpa device(s) on
> > > > > > > > top of already
> > > > > > > > spawned PF, VF, SF device.
> > > > > > > 
> > > > > > > Nack for the networking part of that. It'd basically be
> > > > > > > VMDq.
> > > > > > 
> > > > > > What are you NAK'ing?
> > > > > 
> > > > > Spawning multiple netdevs from one device by slicing up its
> > > > > queues.
> > > > 
> > > > Why do you object to that? Slicing up h/w resources for virtual
> > > > what
> > > > ever has been common practice for a long time.
> > > 
> > > My memory of the VMDq debate is hazy, let me rope in Alex into
> > > this.
> > > I believe the argument was that we should offload software
> > > constructs,
> > > not create HW-specific APIs which depend on HW availability and
> > > implementation. So the path we took was offloading macvlan.
> > 
> > I think it somewhat depends on the type of interface we are talking
> > about. What we were wanting to avoid was drivers spawning their own
> > unique VMDq netdevs and each having a different way of doing it. 

Agreed, but SF netdevs are not VMDq netdevs; they are available in the
switchdev model, where each one corresponds to a full-blown port (HW
domain).
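
To make this concrete, SF ports hang off the existing switchdev
eswitch; roughly (the PCI address here is just an example):

    $ devlink dev eswitch set pci/0000:06:00.0 mode switchdev
    $ devlink port show   # SF ports and their representors are listed next to the PF/VF ports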

> > The
> > approach Intel went with was to use a MACVLAN offload to approach
> > it.
> > Although I would imagine many would argue the approach is somewhat
> > dated and limiting since you cannot do many offloads on a MACVLAN
> > interface.
> 
> Yes. We talked about this at netdev 0x14 and the limitations of
> macvlan 
> based offloads.
> https://netdevconf.info/0x14/session.html?talk-hardware-acceleration-of-container-networking-interfaces
> 
> Subfunction seems to be a good model to expose VMDq VSI or SIOV ADI
> as a 

Exactly. Subfunctions are the most generic model for overcoming the
limitations of SW models such as macvtap offload. All HW vendors
already create netdevs on a given PF/VF; all we need is to model the
SF, and the rest stays the same. Most likely everything else comes for
free, as in the mlx5 model where the netdev/rdma interfaces are
abstracted from the underlying HW: the same netdev driver loads on a
PF, VF, SF, or even an embedded function!
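
Just to illustrate the "comes for free" point: once activated, the SF
shows up as an auxiliary device and the existing mlx5 eth/rdma drivers
bind to it exactly as they do for a PF/VF, e.g. (device name is just
an example):

    $ ls /sys/bus/auxiliary/devices/
    mlx5_core.sf.4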


> netdev for kernel containers. AF_XDP ZC in a container is one of the 
> usecase this would address. Today we have to pass the entire PF/VF to
> a 
> container to do AF_XDP.
> 

This will be supported out of the box with SFs.
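
E.g. something along these lines should be enough, no PF/VF
passthrough needed (netdev/netns names are just examples, xdpsock is
the sample from samples/bpf):

    $ ip netns add c1
    $ ip link set dev ens2f0s88 netns c1        # move the SF netdev into the container netns
    $ ip netns exec c1 xdpsock -i ens2f0s88 -z  # AF_XDP zero-copy directly on the SF queues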

> Looks like the current model is to create a subfunction of a
> specific 
> type on auxiliary bus, do some configuration to assign resources and 
> then activate the subfunction.
> 
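Right, and roughly the flow from the cover letter looks like this (the
PCI address, sfnum and hw_addr are examples; the port index, 32768
here, is whatever 'devlink port add' returns):

    $ devlink port add pci/0000:06:00.0 flavour pcisf pfnum 0 sfnum 88
    $ devlink port function set pci/0000:06:00.0/32768 hw_addr 00:00:00:00:88:88
    $ devlink port function set pci/0000:06:00.0/32768 state active
    # ... use the SF ...
    $ devlink port function set pci/0000:06:00.0/32768 state inactive
    $ devlink port del pci/0000:06:00.0/32768

The representor netdev is created at 'port add' time; the SF auxiliary
device (and its netdev/rdma interfaces) shows up on activation.
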
> > With the VDPA case I believe there is a set of predefined virtio
> > devices that are being emulated and presented so it isn't as if
> > they
> > are creating a totally new interface for this.
> > 
> > What I would be interested in seeing is if there are any other
> > vendors
> > that have reviewed this and sign off on this approach. What we
> > don't
> > want to see is Nvidia/Mellanox do this one way, then Broadcom or
> > Intel
> > come along later and have yet another way of doing this. We need an
> > interface and feature set that will work for everyone in terms of
> > how
> > this will look going forward.

Well, the vDPA interface was created by the virtio community, and
especially by Red Hat; I am not sure Mellanox was even involved in the
initial development stages :-)

Anyway, historically speaking, vDPA was originally created for DPDK,
but the same API applies to device drivers that can deliver the same
set of queues and semantics while bypassing the whole DPDK stack.
Enter kernel vDPA, which was created to overcome some of the userspace
limitations and complexity, and to leverage great kernel features such
as eBPF.

https://www.redhat.com/en/blog/introduction-vdpa-kernel-framework
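
For completeness, with the vdpa tool of [3] the flow on top of an
already spawned PF/VF/SF looks roughly like this (the management
device name is just an example):

    $ vdpa mgmtdev show
    $ vdpa dev add name vdpa0 mgmtdev pci/0000:06:00.2
    $ vdpa dev show vdpa0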
