> From: Jason Gunthorpe <jgg@xxxxxxxxxx>
> Sent: Monday, November 2, 2020 9:22 PM
>
> On Fri, Oct 30, 2020 at 03:49:22PM -0700, Dave Jiang wrote:
> >
> >
> > On 10/30/2020 3:45 PM, Jason Gunthorpe wrote:
> > > On Fri, Oct 30, 2020 at 02:20:03PM -0700, Dave Jiang wrote:
> > > > So the intel-iommu driver checks for the SIOV cap, and the idxd driver
> > > > checks for the SIOV and IMS caps. There will be other upcoming drivers
> > > > that will check for such caps too. It is Intel vendor-specific right now,
> > > > but SIOV is public and other vendors may implement to the spec. Is there
> > > > a good place to put the common capability check for that?
> > >
> > > I'm still really unhappy with these SIOV caps. It was explained this
> > > is just a hack to make up for pci_ims_array_create_msi_irq_domain()
> > > succeeding in VM cases when it doesn't actually work.
> > >
> > > Someday this is likely to get fixed, so tying platform behavior to PCI
> > > caps is completely wrong.
> > >
> > > This needs to be solved in the platform code;
> > > pci_ims_array_create_msi_irq_domain() should not succeed in these
> > > cases.
> >
> > That sounds reasonable. Are you asking that the IMS cap check should gate
> > the success/failure of pci_ims_array_create_msi_irq_domain() rather than
> > the driver?
>
> There shouldn't be an IMS cap at all.
>
> As I understand it, the problem here is that the only way to establish new
> VT-d IRQ routing is by trapping and emulating MSI/MSI-X related
> activities and triggering routing of the vectors into the guest.
>
> There is a missing hypercall to allow the guest to do this on its own;
> presumably it will someday be added so IMS can work in guests.

A hypercall is VMM-specific, while the IMS cap provides a VMM-agnostic
interface, so any guest driver (if it follows the spec) can work seamlessly
on all hypervisors.

Thanks,
Kevin