> -----Original Message-----
> From: Parav Pandit <parav@xxxxxxxxxxxx>
> Sent: Monday, November 18, 2019 8:40 PM
> To: Parav Pandit <parav@xxxxxxxxxxxx>; Ertman, David M
> <david.m.ertman@xxxxxxxxx>; Kirsher, Jeffrey T
> <jeffrey.t.kirsher@xxxxxxxxx>; davem@xxxxxxxxxxxxx;
> gregkh@xxxxxxxxxxxxxxxxxxx
> Cc: netdev@xxxxxxxxxxxxxxx; linux-rdma@xxxxxxxxxxxxxxx;
> nhorman@xxxxxxxxxx; sassmann@xxxxxxxxxx; jgg@xxxxxxxx; Patil, Kiran
> <kiran.patil@xxxxxxxxx>
> Subject: RE: [net-next v2 1/1] virtual-bus: Implementation of Virtual Bus
>
> Hi David,
>
> > Sent: Monday, November 18, 2019 10:32 PM
> > To: Ertman, David M <david.m.ertman@xxxxxxxxx>; Kirsher, Jeffrey T
> > <jeffrey.t.kirsher@xxxxxxxxx>; davem@xxxxxxxxxxxxx;
> > gregkh@xxxxxxxxxxxxxxxxxxx
> > Cc: netdev@xxxxxxxxxxxxxxx; linux-rdma@xxxxxxxxxxxxxxx;
> > nhorman@xxxxxxxxxx; sassmann@xxxxxxxxxx; jgg@xxxxxxxx; Patil, Kiran
> > <kiran.patil@xxxxxxxxx>
> > Subject: RE: [net-next v2 1/1] virtual-bus: Implementation of Virtual
> > Bus
> >
> > Hi David,
> >
> > > From: Ertman, David M <david.m.ertman@xxxxxxxxx>
> > > Sent: Monday, November 18, 2019 9:59 PM
> > > Subject: RE: [net-next v2 1/1] virtual-bus: Implementation of
> > > Virtual Bus
> > >
> > > > -----Original Message-----
> > > > From: Parav Pandit <parav@xxxxxxxxxxxx>
> > > > Sent: Friday, November 15, 2019 3:26 PM
> > > > To: Kirsher, Jeffrey T <jeffrey.t.kirsher@xxxxxxxxx>;
> > > > davem@xxxxxxxxxxxxx; gregkh@xxxxxxxxxxxxxxxxxxx
> > > > Cc: Ertman, David M <david.m.ertman@xxxxxxxxx>;
> > > > netdev@xxxxxxxxxxxxxxx; linux-rdma@xxxxxxxxxxxxxxx;
> > > > nhorman@xxxxxxxxxx; sassmann@xxxxxxxxxx; jgg@xxxxxxxx; Patil,
> > > > Kiran <kiran.patil@xxxxxxxxx>
> > > > Subject: RE: [net-next v2 1/1] virtual-bus: Implementation of
> > > > Virtual Bus
> > > >
> > > > Hi Jeff,
> > > >
> > > > > From: Jeff Kirsher <jeffrey.t.kirsher@xxxxxxxxx>
> > > > > Sent: Friday, November 15, 2019 4:34 PM
> > > > >
> > > > > From: Dave Ertman <david.m.ertman@xxxxxxxxx>
> > > > >
> > > > > This is the initial implementation of the Virtual Bus,
> > > > > virtbus_device and virtbus_driver. The virtual bus is a
> > > > > software-based bus intended to support lightweight devices and
> > > > > drivers; it provides matching between them and probing of the
> > > > > registered drivers.
> > > > >
> > > > > The primary purpose of the virtual bus is to provide matching
> > > > > services and to pass the data pointer contained in the
> > > > > virtbus_device to the virtbus_driver during its probe call.
> > > > > This will allow two separate kernel objects to match up and
> > > > > start communication.
> > > > >
> > > > It is fundamental to know which bus the rdma device created by
> > > > the virtbus_driver will be anchored to, so that the bus is not
> > > > abused: the virtbus, or the parent pci bus?
> > > > I asked this question on the v1 version of this patch.
> > >
> > > The model we will be using is a PCI LAN driver that will allocate
> > > and register a virtbus_device. The virtbus_device will be anchored
> > > to the virtual bus, not the PCI bus.
> > o.k.
> > >
> > > The virtbus does not have a requirement that elements registering
> > > with it have any association with another outside bus or device.
> > >
> > This is what I want to capture in the cover letter and documentation.
> >
> > > RDMA is not attached to any bus when its init is called. The
> > > virtbus_driver that it will create will be attached to the virtual
> > > bus.
> > >
> > > The RDMA driver will register a virtbus_driver object. Its probe
> > > will accept the data pointer from the virtbus_device that the PCI
> > > LAN driver created.
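> > >
> > > Roughly like this - an illustrative sketch only, with made-up names
> > > (iidc_peer, lan_create_peer, irdma_probe); the exact structure
> > > fields, match mechanism and registration calls may differ from the
> > > patch:
> > >
> > >         /* PCI LAN driver side: wrap a virtbus_device around the
> > >          * data the two drivers agree to share, then register it.
> > >          */
> > >         struct iidc_peer {
> > >                 struct virtbus_device vdev;
> > >                 void *shared_data;  /* handed to the peer's probe */
> > >         };
> > >
> > >         static int lan_create_peer(struct pci_dev *pdev, void *data)
> > >         {
> > >                 struct iidc_peer *peer;
> > >
> > >                 peer = kzalloc(sizeof(*peer), GFP_KERNEL);
> > >                 if (!peer)
> > >                         return -ENOMEM;
> > >                 peer->vdev.name = "intel,irdma"; /* match string */
> > >                 peer->shared_data = data;
> > >                 return virtbus_dev_register(&peer->vdev);
> > >         }
> > >
> > >         /* RDMA driver side: its probe receives the matched
> > >          * virtbus_device and recovers the shared data from the
> > >          * containing structure.
> > >          */
> > >         static int irdma_probe(struct virtbus_device *vdev)
> > >         {
> > >                 struct iidc_peer *peer =
> > >                         container_of(vdev, struct iidc_peer, vdev);
> > >
> > >                 /* peer->shared_data is what the LAN driver set */
> > >                 dev_info(&vdev->dev, "peer data %p\n",
> > >                          peer->shared_data);
> > >                 return 0;
> > >         }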
> >
> > What I am saying is that the RDMA device created by the irdma driver
> > or the mlx5_ib driver should be anchored to the PCI device and not to
> > the virtbus device.
> >
> > struct ib_device.dev.parent = &pci_dev->dev;
> >
> > If this is not done, and instead we have,
> >
> > struct ib_device.dev.parent = &virtbus_dev->dev;
> >
> > then we are inviting a huge burden, as below.
> > (a) User compatibility with several tools, orchestration etc. is
> > broken, because the rdma device can no longer be traced back to its
> > PCI device as before.
> > This is some internal kernel change for 'better code handling', which
> > surfaces to users as an rdma device name change - systemd/udev is
> > broken until all distros upgrade and implement this virtbus naming
> > scheme.
> > Even with that, orchestration tools shipped outside of the distros
> > are broken.
> >
> > (b) virtbus must extend iommu support on intel, arm, amd and ppc
> > systems; otherwise rdma is straight away broken in those environments
> > by this 'internal code restructure'.
> > These iommus don't support non-PCI buses.
> >
> > (c) Anchoring on the virtbus makes it a challenge to get a unique id
> > for persistent naming when the irdma/mlx5_ib device is not created by
> > the 'user'.
> >
> > This improvement via a bus matching service != the 'ethX to ens2f0'
> > improvement of netdevs that happened a few years back.
> > Hence, my input is,
> >
> > static int irdma_virtbus_probe(struct virtbus_device *vdev)
> > {
> >         /* parent the rdma device to the PCI device, not the virtbus */
> >         ibdev->dev.parent = &pci_dev->dev;
> >         return ib_register_device(...);
> > }
> >
> With this, I forgot to mention that virtbus doesn't need PM callbacks,
> because the PM core layer suspends/resumes devices in the reverse order
> of their creation.
> Given that protocol devices (like rdma and netdev) shouldn't be
> anchored on the virtbus, it doesn't need PM callbacks.
> Please remove them.
>
> suspend() will be called first on the rdma device (because it was
> created last).

This is only true in the rdma/PCI LAN situation.  virtbus can be used by
two kernel objects that have no connection to another bus or device, and
only use the virtbus for connecting up.  In that case, those entities
will need the PM callbacks.

-Dave E
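P.S.  To make that last case concrete, here is a rough sketch with
made-up names (foo_probe/foo_suspend/foo_resume) and approximate hook
signatures; only the existence of the bus-level suspend/resume
callbacks comes from the patch under discussion:

        /* A virtbus-only pairing: no PCI (or other) parent device
         * exists to quiesce this driver, so the bus-level PM
         * callbacks are its only suspend/resume notification.
         */
        static int foo_probe(struct virtbus_device *vdev)
        {
                /* connect to the peer that registered vdev */
                return 0;
        }

        static int foo_suspend(struct virtbus_device *vdev,
                               pm_message_t state)
        {
                /* stop work and save any state this pairing owns */
                return 0;
        }

        static int foo_resume(struct virtbus_device *vdev)
        {
                /* restore state and restart work */
                return 0;
        }

        static struct virtbus_driver foo_vdrv = {
                .probe   = foo_probe,
                .suspend = foo_suspend,
                .resume  = foo_resume,
        };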