On Thu, Mar 25, 2021 at 02:36:46PM -0300, Jason Gunthorpe wrote:
> On Thu, Mar 25, 2021 at 12:21:44PM -0500, Bjorn Helgaas wrote:
>
> > NVMe and mlx5 have basically identical functionality in this
> > respect.  Other devices and vendors will likely implement similar
> > functionality.  It would be ideal if we had an interface generic
> > enough to support them all.
> >
> > Is the mlx5 interface proposed here sufficient to support the NVMe
> > model?  I think it's close, but not quite, because the NVMe
> > "offline" state isn't explicitly visible in the mlx5 model.
>
> I thought Keith basically said "offline" wasn't really useful as a
> distinct idea.  It is an artifact of nvme being a standards body
> divorced from the operating system.
>
> In Linux, "offline" and "no driver attached" are the same thing;
> you'd never want an API to take an nvme device offline while a
> driver is attached, because it would break the driver.

I think the sticky part is that Linux driver attach is not visible to
the hardware device, while the NVMe "offline" state *is*.  An NVMe PF
can only assign resources to a VF while the VF is offline, and the VF
is only usable while it is online.

For NVMe, software must ask the PF to make those online/offline
transitions via the Secondary Controller Offline and Secondary
Controller Online actions of the Virtualization Management command
[1].  How would this be integrated into this sysfs interface?

> So I think it is good as is (well, one of the 8 versions anyhow).
>
> Keith didn't go into detail why the queue allocations in nvme were
> any different than the queue allocations in mlx5.  I expect they can
> probably work the same way, where the # of interrupts is an upper
> bound on the # of CPUs that can get queues, and the device, once
> instantiated, could be configured for the number of queues it
> actually operates, if it wants.

I don't really care about the queue allocations.  I don't think we
need to solve those here; we just need to make sure that what we do
here doesn't preclude NVMe queue allocations.

Bjorn

[1] NVMe 1.4a, sec 5.22
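
P.S. To make the integration question concrete, here is a rough,
untested sketch of what an NVMe PF driver might do behind the
proposed ->sriov_set_msix_vec_count() callback so that sysfs never
has to expose "offline" directly.  nvme_pf_ctrl(), nvme_vf_cntlid(),
and nvme_virt_mgmt() are made-up names for illustration; only the
action/resource-type values and the offline -> assign -> online
sequence come from the spec [1]:

/* Illustrative only -- none of these helpers exist today. */

#define NVME_VIRT_MGMT_SEC_OFFLINE	0x7	/* ACT values from   */
#define NVME_VIRT_MGMT_SEC_ASSIGN	0x8	/* NVMe 1.4a sec 5.22 */
#define NVME_VIRT_MGMT_SEC_ONLINE	0x9
#define NVME_VIRT_MGMT_RT_VI		1	/* VI (interrupt) resources */

static int nvme_sriov_set_msix_vec_count(struct pci_dev *vf, int count)
{
	struct nvme_ctrl *ctrl = nvme_pf_ctrl(pci_physfn(vf));
	u16 cntlid = nvme_vf_cntlid(ctrl, vf);
	int ret;

	/* Resources can only be assigned while the VF is offline. */
	ret = nvme_virt_mgmt(ctrl, NVME_VIRT_MGMT_SEC_OFFLINE, 0, cntlid, 0);
	if (ret)
		return ret;

	/* Assign "count" VI resources to the secondary controller. */
	ret = nvme_virt_mgmt(ctrl, NVME_VIRT_MGMT_SEC_ASSIGN,
			     NVME_VIRT_MGMT_RT_VI, cntlid, count);
	if (ret)
		return ret;

	/* Bring the VF back online so a driver can bind to it again. */
	return nvme_virt_mgmt(ctrl, NVME_VIRT_MGMT_SEC_ONLINE, 0, cntlid, 0);
}

If something along these lines is workable, a write to the VF's
sysfs attribute would carry the offline/assign/online sequence
implicitly, and the "offline" state would stay invisible to
userspace, which is really what I'm asking about above.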