On Wed, Nov 17, 2021 at 04:15:36PM -0600, Bjorn Helgaas wrote:
> > Agreed, though how it all gets tied together isn't totally clear
> > to me yet.  The messy bit is interrupts, given I don't think we
> > have a model for enabling those anywhere other than in individual
> > PCI drivers.
>
> Ah.  Yeah, that is a little messy.  The only real precedent where
> the PCI core and a driver might need to coordinate on interrupts is
> the portdrv.  So far we've pretended that bridges do not have
> device-specific functionality that might require interrupts.  I
> don't think that's actually true, but we haven't integrated drivers
> for the tuning, performance monitoring, and similar features that
> bridges may have.  Yet.

And portdrv really is conceptually part of the PCI core, and should
eventually be fully integrated.

> In any case, I think the argument that DOE capabilities are not
> CXL-specific still holds.

Agreed.

> Oh, right, of course.  A hint here that MSI/MSI-X depends on bus
> mastering would save me the trouble.
>
> I wonder if the infrastructure, e.g., something inside
> pci_alloc_irq_vectors_affinity() should do this for us.  The
> connection is "obvious" but not mentioned in
> Documentation/PCI/msi-howto.rst, and I'm not sure how callers that
> supply PCI_IRQ_ALL_TYPES would know whether they got a single MSI
> vector (which requires bus mastering) or an INTx vector (which does
> not).

As a minimum step we should document this.  That being said, I don't
think we can just make the interrupt API call pci_set_master(), as
there might be strange ordering requirements in the drivers.

> > > So we get an auxiliary device for every instance of a DOE
> > > capability?  I think the commit log should mention something
> > > about how many are created (e.g., "one per DOE capability"), how
> > > they are named, whether they appear in sysfs, how drivers bind
> > > to them, etc.
> > >
> > > I assume there needs to be some coordination between possible
> > > multiple users of a DOE capability?  How does that work?
> >
> > The DOE handling implementation makes everything synchronous, so
> > multiple users may each have to wait on queueing their query /
> > response exchanges.
> >
> > The fun of non-OS software accessing these is still an open
> > question.
>
> Sounds like something that potentially could be wrapped up in a safe
> but slow interface that could be usable by others, including lspci?

I guess we have to.  I think this interface is a nightmare.  Why oh
why does the PCI SIG keep doing these stupid things (see also VPD).