On Wed, Nov 20, 2024 at 02:17:46PM +0100, Eric Auger wrote:
> > Yeah, I wasn't really suggesting to literally hook into this exact
> > case; it was more just a general observation that if VFIO already has
> > one justification for tinkering with pci_write_msi_msg() directly
> > without going through the msi_domain layer, then adding another
> > (wherever it fits best) can't be *entirely* unreasonable.

I'm not sure that we can assume VFIO is the only thing touching the
interrupt programming. I think there is a KVM path, and also the /proc/
path that will change the MSI affinity on the fly for a VFIO-created
IRQ.

If the platform requires an MSI update to do this (ie encoding the
affinity in the addr/data, not using IRQ remapping HW) then we still
need to ensure the correct MSI address is hooked in.

> >> Is it possible to do this with the existing write_msi_msg callback on
> >> the msi descriptor? For instance we could simply translate the msg
> >> address and call pci_write_msi_msg() (while avoiding an infinite
> >> recursion). Or maybe there should be an xlate_msi_msg callback we can
> >> register. Or I suppose there might be a way to insert an irqchip that
> >> does the translation on write. Thanks,
> >
> > I'm far from keen on the idea, but if there really is an appetite for
> > more indirection, then I guess the least-worst option would be yet
> > another type of iommu_dma_cookie to work via the existing
> > iommu_dma_compose_msi_msg() flow,

For this direction I think I would turn iommu_dma_compose_msi_msg()
into a function pointer stored in the iommu_domain and have
vfio/iommufd provide its own implementation. The thing that is in
control of the domain's translation should be providing the msi_msg.

> > update per-device addresses directly.
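To make the function-pointer idea concrete, here is a rough standalone sketch (not against any real tree; the hook name `sw_msi`, the struct layouts, and `compose_msi_msg()` are all invented for illustration). The point is only that the domain owner, not fixed DMA-API code, composes the message:

```c
#include <stddef.h>
#include <stdint.h>

struct msi_msg {
	uint32_t address_lo;
	uint32_t address_hi;
	uint32_t data;
};

struct msi_desc;	/* opaque here */

struct iommu_domain {
	/*
	 * Owner-provided hook: compose the MSI message for @desc using
	 * whatever translation this domain maintains. iommufd/VFIO would
	 * install an implementation that consults the userspace-provided
	 * per-device/per-index mapping; the DMA API path would install
	 * the existing cookie-based composition.
	 */
	int (*sw_msi)(struct iommu_domain *domain, struct msi_desc *desc,
		      struct msi_msg *msg);
};

/* Call site replacing a direct iommu_dma_compose_msi_msg() call. */
static int compose_msi_msg(struct iommu_domain *domain,
			   struct msi_desc *desc, struct msi_msg *msg)
{
	if (domain && domain->sw_msi)
		return domain->sw_msi(domain, desc, msg);
	return 0;	/* no translation in play, leave msg as-is */
}
```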
> > But then it's still going to
> > need some kind of "layering violation" for VFIO to poke the IRQ layer
> > into re-composing and re-writing a message whenever userspace feels
> > like changing an address

I think we'd need to get into the affinity update path and force an MSI
write as well, even if the platform isn't changing the MSI for
affinity. Processing a vMSI entry update would be two steps where we
update the MSI addr in VFIO and then set the affinity.

> for the record, the first integration was based on such a distinct
> iommu_dma_cookie
>
> [PATCH v15 00/12] SMMUv3 Nested Stage Setup (IOMMU part)
> <https://lore.kernel.org/all/20210411111228.14386-1-eric.auger@xxxxxxxxxx/#r>,
> patches 8 - 11

There are some significant differences from that series with this idea:

- We want to maintain a per-MSI-index/per-device lookup table. It is
  not just a simple cookie; the msi_desc->dev && msi_desc->msi_index
  have to be matched against what userspace provides in the per-vMSI
  IOCTL

- There would be no implicit programming of the stage 2, this will be
  done directly in userspace by creating an IOAS area for the ITS page

- It shouldn't have any sort of dynamic allocation behavior. It is an
  error for the kernel to ask for an msi_desc that userspace hasn't
  provided a mapping for

- It should work well with nested and non-nested domains

Jason
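As an addendum, the per-device/per-MSI-index table described in the first bullet could look roughly like the toy sketch below. Everything here is invented for illustration (a real kernel implementation would use xarray or similar); the behavior it demonstrates is the one stated above: lookup fails hard when userspace never provided a mapping, with no on-the-fly allocation.

```c
#include <stdint.h>

#define MAX_VMSI 8

struct vmsi_map {
	const void *dev;	/* stands in for msi_desc->dev */
	unsigned int msi_index;	/* stands in for msi_desc->msi_index */
	uint64_t msi_iova;	/* IOVA of the ITS page userspace mapped */
	int used;
};

static struct vmsi_map vmsi_table[MAX_VMSI];

/* Userspace's per-vMSI IOCTL would land here. */
static int vmsi_set(const void *dev, unsigned int index, uint64_t iova)
{
	for (int i = 0; i < MAX_VMSI; i++) {
		if (!vmsi_table[i].used) {
			vmsi_table[i] =
				(struct vmsi_map){ dev, index, iova, 1 };
			return 0;
		}
	}
	return -1;		/* table full */
}

/* Message composition looks the IOVA up; a missing entry is an error. */
static int vmsi_get(const void *dev, unsigned int index, uint64_t *iova)
{
	for (int i = 0; i < MAX_VMSI; i++) {
		if (vmsi_table[i].used && vmsi_table[i].dev == dev &&
		    vmsi_table[i].msi_index == index) {
			*iova = vmsi_table[i].msi_iova;
			return 0;
		}
	}
	return -1;		/* userspace never mapped this vMSI */
}
```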