RE: [PATCH net-next 00/19] Mellanox, mlx5 sub function support

Hi Jason,

+ Greg

> -----Original Message-----
> From: Jason Gunthorpe <jgg@xxxxxxxx>
> Sent: Friday, November 8, 2019 8:41 AM
> To: Jiri Pirko <jiri@xxxxxxxxxxx>; Ertman, David M
> <david.m.ertman@xxxxxxxxx>; gregkh@xxxxxxxxxxxxxxxxxxx
> Cc: Jakub Kicinski <jakub.kicinski@xxxxxxxxxxxxx>; Parav Pandit
> <parav@xxxxxxxxxxxx>; alex.williamson@xxxxxxxxxx;
> davem@xxxxxxxxxxxxx; kvm@xxxxxxxxxxxxxxx; netdev@xxxxxxxxxxxxxxx;
> Saeed Mahameed <saeedm@xxxxxxxxxxxx>; kwankhede@xxxxxxxxxx;
> leon@xxxxxxxxxx; cohuck@xxxxxxxxxx; Jiri Pirko <jiri@xxxxxxxxxxxx>; linux-
> rdma@xxxxxxxxxxxxxxx; Or Gerlitz <gerlitz.or@xxxxxxxxx>
> Subject: Re: [PATCH net-next 00/19] Mellanox, mlx5 sub function support
> 
> On Fri, Nov 08, 2019 at 01:12:33PM +0100, Jiri Pirko wrote:
> > Thu, Nov 07, 2019 at 09:32:34PM CET, jakub.kicinski@xxxxxxxxxxxxx wrote:
> > >On Thu,  7 Nov 2019 10:04:48 -0600, Parav Pandit wrote:
> > >> Mellanox sub function capability allows users to create several
> > >> hundreds of networking and/or rdma devices without depending on PCI
> > >> SR-IOV support.
> > >
> > >You call the new port type "sub function" but the devlink port
> > >flavour is mdev.
> > >
> > >As I'm sure you remember you nacked my patches exposing NFP's PCI sub
> > >functions which are just regions of the BAR without any mdev
> > >capability. Am I in the clear to repost those now? Jiri?
> >
> > Well question is, if it makes sense to have SFs without having them as
> > mdev? I mean, we discussed the modelling thoroughly and eventually we
> > realized that in order to model this correctly, we need SFs on "a bus".
> > Originally we were thinking about custom bus, but mdev is already
> > there to handle this.
> 
> Did anyone consult Greg on this?
> 
Back when I started with the subdev bus in March, we consulted Greg and the mdev maintainers.
After that we settled on extending mdev for wider use cases; more below.
It has since been extended to serve multiple users, for example virtio, in addition to vfio and mlx5_core.

> The new intel driver has been having a very similar discussion about how to
> model their 'multi function device' ie to bind RDMA and other drivers to a
> shared PCI function, and I think that discussion settled on adding a new bus?
> 
> Really these things are all very similar, it would be nice to have a clear
> methodology on how to use the device core if a single PCI device is split by
> software into multiple different functional units and attached to different
> driver instances.
> 
> Currently there is a lot of hacking in this area, and a consistent scheme
> might resolve the ugliness with the dma_ops wrappers.
> 
> We already have the 'mfd' stuff to support splitting platform devices, maybe
> we need to create a 'pci-mfd' to support splitting PCI devices?
> 
> I'm not really clear how mfd and mdev relate, I always thought mdev was
> strongly linked to vfio.
> 
Mdev was strongly linked to vfio at the beginning, but as I mentioned above it now addresses more use cases.

I followed that discussion, but was not sure about extending mdev further for that case.

One way for the Intel drivers to do this is to build on series [9]:
the PCI driver sets a class id such as MDEV_CLASS_ID_I40_FOO on the mdev it creates,
and the RDMA driver calls mdev_register_driver(), matches on that class id and gets its probe() invoked.
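
As a rough illustration only, the RDMA-side driver could look something like the sketch
below, assuming the class-id matching proposed in series [9]. The id_table field,
struct mdev_class_id, MDEV_CLASS_ID_I40_FOO and all i40_* names here are made up for
the example, not real kernel symbols; the PCI (parent) driver would tag the mdev it
creates with the same class id.

/*
 * Hypothetical sketch: assumes the mdev class-id matching from series [9].
 * MDEV_CLASS_ID_I40_FOO, struct mdev_class_id, the id_table field and the
 * i40_* names are illustrative only.
 */
#include <linux/module.h>
#include <linux/device.h>
#include <linux/mdev.h>

static const struct mdev_class_id i40_rdma_id_table[] = {
	{ MDEV_CLASS_ID_I40_FOO },	/* class id set by the PCI (parent) driver */
	{ 0 },				/* table terminator */
};

static int i40_rdma_probe(struct device *dev)
{
	/* Set up the RDMA instance on top of the mdev the PCI driver created. */
	return 0;
}

static void i40_rdma_remove(struct device *dev)
{
	/* Tear down the RDMA instance. */
}

static struct mdev_driver i40_rdma_mdev_driver = {
	.name		= "i40_rdma",
	.id_table	= i40_rdma_id_table,
	.probe		= i40_rdma_probe,
	.remove		= i40_rdma_remove,
};

static int __init i40_rdma_init(void)
{
	return mdev_register_driver(&i40_rdma_mdev_driver, THIS_MODULE);
}

static void __exit i40_rdma_exit(void)
{
	mdev_unregister_driver(&i40_rdma_mdev_driver);
}

module_init(i40_rdma_init);
module_exit(i40_rdma_exit);
MODULE_LICENSE("GPL");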

> At the very least if it is agreed mdev should be the vehicle here, then it
> should also be able to solve the netdev/rdma hookup problem too.
> 
> Jason

[9] https://patchwork.ozlabs.org/patch/1190425




