On Fri, Mar 26, 2021 at 10:08 AM Bjorn Helgaas <helgaas@xxxxxxxxxx> wrote:
>
> On Fri, Mar 26, 2021 at 09:00:50AM -0700, Alexander Duyck wrote:
> > On Thu, Mar 25, 2021 at 11:44 PM Leon Romanovsky <leon@xxxxxxxxxx> wrote:
> > > On Thu, Mar 25, 2021 at 03:28:36PM -0300, Jason Gunthorpe wrote:
> > > > On Thu, Mar 25, 2021 at 01:20:21PM -0500, Bjorn Helgaas wrote:
> > > > > On Thu, Mar 25, 2021 at 02:36:46PM -0300, Jason Gunthorpe wrote:
> > > > > > On Thu, Mar 25, 2021 at 12:21:44PM -0500, Bjorn Helgaas wrote:
> > > > > > >
> > > > > > > NVMe and mlx5 have basically identical functionality in this
> > > > > > > respect.  Other devices and vendors will likely implement
> > > > > > > similar functionality.  It would be ideal if we had an
> > > > > > > interface generic enough to support them all.
> > > > > > >
> > > > > > > Is the mlx5 interface proposed here sufficient to support the
> > > > > > > NVMe model?  I think it's close, but not quite, because the
> > > > > > > NVMe "offline" state isn't explicitly visible in the mlx5
> > > > > > > model.
> > > > > >
> > > > > > I thought Keith basically said "offline" wasn't really useful
> > > > > > as a distinct idea.  It is an artifact of nvme being a
> > > > > > standards body divorced from the operating system.
> > > > > >
> > > > > > In linux offline and no driver attached are the same thing,
> > > > > > you'd never want an API to make a nvme device with a driver
> > > > > > attached offline because it would break the driver.
> > > > >
> > > > > I think the sticky part is that Linux driver attach is not
> > > > > visible to the hardware device, while the NVMe "offline" state
> > > > > *is*.  An NVMe PF can only assign resources to a VF when the VF
> > > > > is offline, and the VF is only usable when it is online.
> > > > >
> > > > > For NVMe, software must ask the PF to make those online/offline
> > > > > transitions via Secondary Controller Offline and Secondary
> > > > > Controller Online commands [1].  How would this be integrated
> > > > > into this sysfs interface?
> > > >
> > > > Either the NVMe PF driver tracks the driver attach state using a
> > > > bus notifier and mirrors it to the offline state, or it simply
> > > > offlines/onlines as part of the sequence to program the MSI
> > > > change.
> > > >
> > > > I don't see why we need any additional modeling of this behavior.
> > > >
> > > > What would be the point of onlining a device without a driver?
> > >
> > > Agree, we should remember that we are talking about Linux kernel
> > > model and implementation, where _no_driver_ means _offline_.
> >
> > The only means you have of guaranteeing the driver is "offline" is by
> > holding the device lock and checking it.  So it is only really useful
> > for one operation, and then you have to release the lock.  The idea
> > behind having an "offline" state would be to allow you to aggregate
> > multiple potential operations into a single change.
> >
> > For example, you would place the device offline, then change
> > interrupts, and then queues, and then you could online it again.  The
> > kernel code could have something in place to prevent driver load on
> > "offline" devices.  What it gives you is more of a transactional
> > model versus what you have right now, which is more of a concurrent
> > model.
>
> Thanks, Alex.  Leon currently does enforce the "offline" situation by
> holding the VF device lock while checking that it has no driver and
> asking the PF to do the assignment.  I agree this is only useful for a
> single operation.
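Right, and to make that concrete for anyone skimming the thread, the
enforcement being described is roughly the pattern below.  This is just
a sketch from my reading of the series; the function, callback, and
field names are approximate, not necessarily what the final patches
will use:

#include <linux/device.h>
#include <linux/kernel.h>
#include <linux/pci.h>

static ssize_t sriov_vf_msix_count_store(struct device *dev,
					 struct device_attribute *attr,
					 const char *buf, size_t count)
{
	struct pci_dev *vf_dev = to_pci_dev(dev);
	struct pci_dev *pf_dev = pci_physfn(vf_dev);
	int val, ret;

	ret = kstrtoint(buf, 0, &val);
	if (ret)
		return ret;

	/* Hold the PF device lock so the PF driver cannot unbind
	 * underneath us, and check that it supports the operation.
	 */
	device_lock(&pf_dev->dev);
	if (!pf_dev->driver || !pf_dev->driver->sriov_set_msix_vec_count) {
		ret = -EOPNOTSUPP;
		goto unlock_pf;
	}

	/* Hold the VF device lock and verify that no driver is bound.
	 * This is the only window in which "no driver" == "offline" is
	 * actually guaranteed, and it ends at device_unlock() below.
	 */
	device_lock(&vf_dev->dev);
	if (vf_dev->driver) {
		ret = -EBUSY;
		goto unlock_vf;
	}

	/* Ask the PF driver to do the actual assignment. */
	ret = pf_dev->driver->sriov_set_msix_vec_count(vf_dev, val);

unlock_vf:
	device_unlock(&vf_dev->dev);
unlock_pf:
	device_unlock(&pf_dev->dev);
	return ret ? ret : count;
}

The guarantee only lasts from device_lock() to device_unlock(), so each
attribute write is its own little transaction; there is no way to hold
the VF "offline" across several such writes, which is exactly the gap
an explicit offline state would fill.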
> Would the current series *prevent* a transactional model from being
> added later if it turns out to be useful?  I think I can imagine
> keeping the same sysfs files but changing the implementation to check
> for the VF being offline, while adding something new to control
> online/offline.

My concern would be that we are defining the user space interface.
Once we have this working as a single operation, I could see us having
to support it that way going forward, as somebody will script something
that doesn't expect an "offline" sysfs file, and the complaint would be
that we are breaking userspace if we later require the use of an
"offline" file.  So my preference would be to just do it that way now
rather than wait, as the current behavior will be grandfathered in once
we allow the operation without it.

> I also want to resurrect your idea of associating
> "sriov_vf_msix_count" with the PF instead of the VF.  I really like
> that idea, and it better reflects the way both mlx5 and NVMe work.  I
> don't think there was a major objection to it, but the discussion
> seems to have petered out after your suggestion of putting the PCI
> bus/device/function in the filename, which I also like [1].
>
> Leon has implemented a ton of variations, but I don't think having all
> the files in the PF directory was one of them.
>
> Bjorn
>
> [1] https://lore.kernel.org/r/CAKgT0Ue363fZEwqGUa1UAAYotUYH8QpEADW1U5yfNS7XkOLx0Q@xxxxxxxxxxxxxx

I almost wonder if it wouldn't make sense to just partition this up to
handle flexible resources in the future.  Maybe something like having
the directory set up so that you have "sriov_resources/msix/", and then
you could have individual files, one for the total and the rest using
the VF BDF naming scheme.  Then, if we have to, we could add other
subdirectories later to handle things like queues.
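To be clear about what I am imagining, the PF directory could end up
looking something like this (names and BDFs are made up purely for
illustration):

  sriov_resources/msix/
      sriov_total_msix    <- total MSI-X vectors in the PF's pool
      0000:01:00.1        <- per-VF MSI-X count, one file per VF BDF
      0000:01:00.2
      0000:01:00.3

so assigning vectors to the first VF would just be something like:

  # echo 32 > sriov_resources/msix/0000:01:00.1

and a "sriov_resources/queues/" subdirectory could be added later
without inventing a new top-level interface.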