On 2023/7/24 17:18, Jonathan Cameron wrote:
> Really a question for Bjorn I think, but here is my 2 cents...
>
> The problem here is that we need to do that fundamental redesign of the
> way the PCI ports drivers work. I'm not sure there is a path to merging
> this until that is done. The bigger problem is that I'm not sure anyone
> is actively looking at that yet. I'd like to look at this (as I have
> the same problem for some other drivers), but it is behind various
> other things on my todo list.
>
> Bjorn might be persuaded on a temporary solution, but that would come
> with some maintenance problems, particularly when we try to do it
> 'right' in the future. Maybe adding another service driver would be
> a stop gap as long as we know we won't keep doing so for ever. Not sure.

Thank you for your reply, I got your point. :)

+ Bjorn

>>>> The approach used here is to separately walk the PCI topology and
>>>> register the devices. It can 'maybe' get away with that because no
>>>> interrupts and I assume resets have no nasty impacts on it because
>>>> the device is fairly simple. In general that's not going to work.
>>>> CXL does a similar trick (which I don't much like, but too late
>>>> now), but we've also run into the problem of how to get interrupts
>>>> if not the main driver.
>>>
>>> Yes, this is a real problem. I think the "walk all PCI devices
>>> looking for one we like" approach is terrible because it breaks a lot
>>> of driver model assumptions (no device ID to autoload module via udev,
>>> hotplug doesn't work, etc), but we don't have a good alternative right
>>> now.
>>>
>>> I think portdrv is slightly better because at least it claims the
>>> device in the usual way and gives a way for service drivers to
>>> register with it. But I don't really like that either because it
>>> created a new weird /sys/bus/pci_express hierarchy full of these
>>> sub-devices that aren't really devices, and it doesn't solve the
>>> module load and hotplug issues.
>>>
>>> I would like to have portdrv be completely built into the PCI core and
>>> not claim Root Ports or Switch Ports. Then those devices would be
>>> available via the usual driver model for driver loading and binding
>>> and for hotplug.
>>
>> Let me see if I understand this correctly as I can think of a few options
>> that perhaps are in line with what you are thinking.
>>
>> 1) All the portdrv stuff converted to normal PCI core helper functions
>>    that a driver bound to the struct pci_dev can use.
>> 2) Driver core itself provides a bunch of extra devices alongside the
>>    struct pci_dev one to which additional drivers can bind? - so kind
>>    of portdrv handling, but squashed into the PCI device topology?
>> 3) Have portdrv operate under the hood, so all the services etc that
>>    it provides don't require a driver to be bound at all. Then
>>    allow usual VID/DID based driver binding.
>>
>> If 1 - we are going to run into class device restrictions and that will
>> just move where we have to handle the potential vendor specific parts.
>> We probably don't want that to be a hydra with all the functionality
>> and lookups etc driven from there, so do we end up with sub devices
>> of that new PCI port driver with a discover method based on either
>> vsec + VID or DVSEC with devices created under the main pci_dev.
>> That would have to include nastiness around interrupt discovery for
>> those sub devices. So ends up roughly like port_drv.
>>
>> I don't think 2 solves anything.
>>
>> For 3 - interrupts and ownership of facilities is going to be tricky
>> as initially those need to be owned by the PCI core (no device driver bound)
>> and then I guess handed off to the driver once it shows up? Maybe that
>> driver should call a pci_claim_port() that gives it control of everything
>> and pci_release_port() that hands it all back to the core. That seems
>> racey.
>
> Yes, 3 is the option I want to explore. That's what we already do for
> things like ASPM. Agreed, interrupts is a potential issue. I think
> the architected parts of config space should be implicitly owned by
> the PCI core, with interfaces à la pci_disable_link_state() if drivers
> need them.
>
> Bjorn
>
> https://lore.kernel.org/lkml/ZGUAWxoEngmqFcLJ@bhelgaas/

@Bjorn

Is there a path to merging this patch set before that exploration is done?
And are you actively looking into it yet?

I am not very familiar with the PCI core, but I would like to help work on
it; I have put a rough sketch of my understanding of option 3 below.

Thank you.

Best Regards,
Shuai
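
P.S. To check my understanding of option 3, here is a rough, untested sketch
of what a port driver could look like if the port services stayed in the PCI
core and the device were bound by VID/DID as usual. pci_claim_port() and
pci_release_port() are only the hypothetical names from Jonathan's mail (they
do not exist in the PCI core today), and the vendor/device IDs are made up:

#include <linux/module.h>
#include <linux/pci.h>

/*
 * Purely illustrative: a normal pci_driver bound by VID/DID, with the
 * architected port facilities owned by the PCI core until the driver
 * explicitly asks for them.
 */
static int foo_port_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
        int ret;

        ret = pcim_enable_device(pdev);
        if (ret)
                return ret;

        /* Hypothetical: take over the port facilities from the PCI core. */
        ret = pci_claim_port(pdev);
        if (ret)
                return ret;

        /* ... vendor specific setup, request IRQs, etc. ... */
        return 0;
}

static void foo_port_remove(struct pci_dev *pdev)
{
        /* Hypothetical: hand the architected facilities back to the core. */
        pci_release_port(pdev);
}

static const struct pci_device_id foo_port_ids[] = {
        { PCI_DEVICE(0x1234, 0x5678) },         /* made-up VID/DID */
        { }
};
MODULE_DEVICE_TABLE(pci, foo_port_ids);

static struct pci_driver foo_port_driver = {
        .name           = "foo_port",
        .id_table       = foo_port_ids,
        .probe          = foo_port_probe,
        .remove         = foo_port_remove,
};
module_pci_driver(foo_port_driver);

The part I am least sure about is what exactly pci_claim_port() would hand
over (interrupts, the architected config space bits, etc.) and how that
avoids the race Jonathan mentioned, so please correct me if I got it wrong.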