On Wed, Jan 29, 2025 at 06:46:20PM +0100, Eric Auger wrote:
> >>> This missing piece is cleaning up the ITS mapping to allow for
> >>> multiple ITS pages. I've imagined that kvm would somehow give iommufd
> >>> a FD that holds the specific ITS pages instead of the
> >>> IOMMU_OPTION_SW_MSI_START/SIZE flow.
> >> That's what I don't get: at the moment you only pass the gIOVA. With
> >> technique 2, how can you build the nested mapping, ie.
> >>
> >>           S1           S2
> >> gIOVA    ->    gDB    ->    hDB
> >>
> >> without passing the full gIOVA/gDB S1 mapping to the host?
> > The nested S2 mapping is already set up before the VM boots:
> >
> > - The VMM puts the ITS page (hDB) into the S2 at a fixed address (gDB)
> Ah OK. Your gDB has nothing to do with the actual S1 guest gDB,
> right?

I'm not totally sure what you mean by gDB? The above diagram suggests it
is the ITS page address in the S2, ie. the guest physical address of the
ITS.

Within the VM, when it goes to call iommu_dma_prepare_msi(), it will
provide the gDB address as the phys_addr_t msi_addr. This happens
because the GIC driver will have been informed of the ITS page at the
gDB address, and it will use iommu_dma_prepare_msi(). Exactly the same
as bare metal.

> It is computed in iommufd_sw_msi_get_map() from the sw_msi_start pool.
> Is that correct?

Yes, for a single ITS page it will reliably be put at sw_msi_start.
Since the VMM can provide sw_msi_start through the OPTION, the VMM can
place the ITS page where it wants and then program the ACPI to tell the
VM to call iommu_dma_prepare_msi(). (don't use this flow, it doesn't
work for multi ITS, for testing only)

> https://lore.kernel.org/all/20210411111228.14386-9-eric.auger@xxxxxxxxxx/
> I was passing both the gIOVA and the "true" gDB
>
> Eric

If I understand this right, it still had the hypervisor dynamically
setting up the S2, here it is pre-set and static?

Jason