On Fri, 8 Nov 2019 16:12:53 -0400, Jason Gunthorpe wrote:
> On Fri, Nov 08, 2019 at 11:12:38AM -0800, Jakub Kicinski wrote:
> > On Fri, 8 Nov 2019 15:40:22 +0000, Parav Pandit wrote:
> > > Mdev at the beginning was strongly linked to vfio, but as I
> > > mentioned above it is addressing more use cases.
> > >
> > > I observed that discussion, but was not sure about extending mdev
> > > further.
> > >
> > > One way for Intel drivers to do this is after series [9], where
> > > the PCI driver says MDEV_CLASS_ID_I40_FOO, and the RDMA driver
> > > calls mdev_register_driver(), matches on it and does the probe().
> >
> > Yup, FWIW to me the benefit of reusing mdevs for the Intel case vs
> > muddying the purpose of mdevs is not a clear trade off.
>
> IMHO, mdev has a mdev_parent_ops structure clearly intended to link it
> to vfio, so using a mdev for something not related to vfio seems like
> a poor choice.

Yes, my suggestion to use mdev was entirely based on the premise that
the purpose of this work is to get vfio working.. otherwise I'm unclear
as to why we'd need a bus in the first place. If this is just for
containers - we have had macvlan offload for years now, with no need
for a separate device.

> I suppose this series is the start and we will eventually see the
> mlx5's mdev_parent_ops filled in to support vfio - but *right now*
> this looks identical to the problem most of the RDMA capable net
> drivers have splitting into a 'core' and a 'function'.

On the RDMA/Intel front, would you mind explaining what the main
motivation for the special buses is? I'm a little confused.

My understanding is that MFD was created to help with cases where a
single device has multiple pieces of common IP in it. Do modern RDMA
cards really share IP across generations? Is there a need to reload
the drivers for the separate pieces (I wonder if devlink reload doesn't
belong in the device model :()? Or is it purely an abstraction and
people like abstractions?
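
To make sure we're comparing the same things - below is a rough sketch
of the split I have in mind when I say MFD. All names ("foo", the PCI
IDs) are made up for illustration, this is not any existing driver:

// SPDX-License-Identifier: GPL-2.0
/* foo_main.c - parent PCI driver carving one function into MFD cells */
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/platform_device.h>
#include <linux/mfd/core.h>

static const struct mfd_cell foo_cells[] = {
        { .name = "foo-core" },         /* ethernet/core piece */
        { .name = "foo-rdma" },         /* RDMA piece */
};

static int foo_pci_probe(struct pci_dev *pdev,
                         const struct pci_device_id *id)
{
        int err = pcim_enable_device(pdev);

        if (err)
                return err;

        /* Each cell becomes a platform device under &pdev->dev; a
         * platform driver matching the cell name binds to it and can
         * be loaded/unloaded independently of the parent. */
        return devm_mfd_add_devices(&pdev->dev, PLATFORM_DEVID_AUTO,
                                    foo_cells, ARRAY_SIZE(foo_cells),
                                    NULL, 0, NULL);
}

static const struct pci_device_id foo_pci_ids[] = {
        { PCI_DEVICE(0x1234, 0x5678) }, /* placeholder IDs */
        { }
};
MODULE_DEVICE_TABLE(pci, foo_pci_ids);

static struct pci_driver foo_pci_driver = {
        .name     = "foo",
        .id_table = foo_pci_ids,
        .probe    = foo_pci_probe,
};
module_pci_driver(foo_pci_driver);
MODULE_LICENSE("GPL");

/* foo_rdma.c - built as its own module, binds to the "foo-rdma" cell */
static int foo_rdma_probe(struct platform_device *pdev)
{
        /* ib_device registration etc. would go here */
        return 0;
}

static struct platform_driver foo_rdma_driver = {
        .driver = { .name = "foo-rdma" },
        .probe  = foo_rdma_probe,
};
module_platform_driver(foo_rdma_driver);

The children end up as plain platform devices - no new bus type - and
each piece can still be its own module, which AFAIU covers the reload
case as well.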
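
And for reference, this is roughly what mdev_parent_ops looks like
today (trimmed, from memory - see include/linux/mdev.h for the real
thing), which is why it reads as vfio-specific to me as well:

struct mdev_parent_ops {
        struct module *owner;
        /* sysfs attribute groups, supported_type_groups, ... */

        int     (*create)(struct kobject *kobj, struct mdev_device *mdev);
        int     (*remove)(struct mdev_device *mdev);
        int     (*open)(struct mdev_device *mdev);
        void    (*release)(struct mdev_device *mdev);
        ssize_t (*read)(struct mdev_device *mdev, char __user *buf,
                        size_t count, loff_t *ppos);
        ssize_t (*write)(struct mdev_device *mdev, const char __user *buf,
                         size_t count, loff_t *ppos);
        long    (*ioctl)(struct mdev_device *mdev, unsigned int cmd,
                         unsigned long arg);
        int     (*mmap)(struct mdev_device *mdev, struct vm_area_struct *vma);
};

Everything past create()/remove() is essentially the vfio device file
interface, which is Jason's point above.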