On Tue, Jun 08, 2021 at 04:31:50PM +1000, David Gibson wrote:
> For the qemu case, I would imagine a two stage fallback:
>
>     1) Ask for the exact IOMMU capabilities (including pagetable
>        format) that the vIOMMU has. If the host can supply, you're
>        good
>
>     2) If not, ask for a kernel managed IOAS. Verify that it can map
>        all the IOVA ranges the guest vIOMMU needs, and has an equal or
>        smaller pagesize than the guest vIOMMU presents. If so,
>        software emulate the vIOMMU by shadowing guest io pagetable
>        updates into the kernel managed IOAS.
>
>     3) You're out of luck, don't start.
>
> For both (1) and (2) I'd expect it to be asking this question *after*
> saying what devices are attached to the IOAS, based on the virtual
> hardware configuration. That doesn't cover hotplug, of course, for
> that you have to just fail the hotplug if the new device isn't
> supportable with the IOAS you already have.

Yes. So there is a point in time when the IOAS is frozen, and cannot
take in new incompatible devices. I think that can support the usage I
had in mind. If the VMM (non-QEMU, let's say) wanted to create one
IOASID FD per feature set, it could bind the first device, freeze the
features, then bind the second device. If the second bind fails, it
creates a new FD, allowing it to fall back to (2) for the second device
while keeping (1) for the first device (rough sketch below). A
paravirtual IOMMU like virtio-iommu could easily support this, as it
describes pIOMMU properties for each device to the guest. An emulated
vIOMMU could also support some hybrid cases as you describe below.

> One can imagine optimizations where for certain intermediate cases you
> could do a lighter SW emu if the host supports a model that's close to
> the vIOMMU one, and you're able to trap and emulate the differences.
> In practice I doubt anyone's going to have time to look for such cases
> and implement the logic for it.
>
> > For example depending whether the hardware IOMMU is SMMUv2 or
> > SMMUv3, that completely changes the capabilities offered to the
> > guest (some v2 implementations support nesting page tables, but
> > never PASID nor PRI unlike v3.) The same vIOMMU could support
> > either, presenting different capabilities to the guest, even
> > multiple page table formats if we wanted to be exhaustive (SMMUv2
> > supports the older 32-bit descriptor), but it needs to know early
> > on what the hardware is precisely. Then some new page table format
> > shows up and, although the vIOMMU can support that in addition to
> > older ones, QEMU will have to pick a single one, that it assumes
> > the guest knows how to drive?
> >
> > I think once it binds a device to an IOASID fd, QEMU will want to
> > probe what hardware features are available before going further
> > with the vIOMMU setup (is there PASID, PRI, which page table
> > formats are supported, address size, page granule, etc). Obtaining
> > precise information about the hardware would be less awkward than
> > trying different configurations until one succeeds. Binding an
> > additional device would then fail if its pIOMMU doesn't support
> > exactly the features supported for the first device, because we
> > don't know which ones the guest will choose. QEMU will have to
> > open a new IOASID fd for that device.
>
> No, this fundamentally misunderstands the qemu model. The user
> *chooses* the guest visible platform, and qemu supplies it or fails.
> There is no negotiation with the guest, because this makes managing
> migration impossibly difficult.
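For concreteness, here is the rough sketch mentioned above. Every
IOASID_* name below is made up, since the /dev/ioasid interface is
still under discussion; this only illustrates the bind/freeze/fallback
flow, not a real API.

/*
 * Hypothetical placeholders: none of these ioctls or structures exist
 * yet.
 */
#include <fcntl.h>
#include <stdbool.h>
#include <sys/ioctl.h>
#include <unistd.h>

#define IOASID_BIND_DEVICE	1
#define IOASID_GET_INFO		2
#define IOASID_FREEZE		3

/* Per-fd feature description, filled in by the kernel */
struct ioasid_info {
	unsigned long	pgtable_formats;	/* bitmask of pgtable formats */
	unsigned long	pgsize_bitmap;		/* supported page granules */
	unsigned int	addr_width;
	bool		pasid, pri;
};

/*
 * Attach @device_fd to @ioasid_fd, whose feature set was frozen when
 * the first device was bound. If this device's pIOMMU is incompatible,
 * open a fresh fd so it can fall back to a kernel-managed IOAS (2)
 * while the first fd keeps the features picked in (1).
 */
static int vmm_attach_device(int ioasid_fd, int device_fd)
{
	struct ioasid_info info;
	int new_fd;

	if (ioctl(ioasid_fd, IOASID_BIND_DEVICE, &device_fd) == 0)
		return ioasid_fd;

	new_fd = open("/dev/ioasid", O_RDWR);
	if (new_fd < 0)
		return -1;

	/* Probe PASID, PRI, formats, etc. before freezing the new fd */
	if (ioctl(new_fd, IOASID_BIND_DEVICE, &device_fd) < 0 ||
	    ioctl(new_fd, IOASID_GET_INFO, &info) < 0 ||
	    ioctl(new_fd, IOASID_FREEZE) < 0) {
		close(new_fd);
		return -1;	/* (3) out of luck, don't start */
	}
	return new_fd;
}

The freeze point is what lets the second bind fail cleanly when the
pIOMMU is incompatible, instead of quietly changing the features
already committed to the first device.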
I'd like to understand better where the difficulty lies with
migration. Is the problem, once we have a guest running on physical
machine A, making sure that physical machine B supports the same IOMMU
properties before migrating the VM over to B? Why can't QEMU (instead
of the user) select a feature set on machine A, then, when the time
comes to migrate, query all information from the host kernel on machine
B and check that it matches what was picked for machine A? (A strawman
for that check is appended at the end of this mail.) Or is it only
accommodating different sets of features between A and B that would be
too difficult?

Thanks,
Jean

> -cpu host is an exception, which is used because it is so useful, but
> it's kind of a pain on the qemu side. Virt management systems like
> oVirt/RHV almost universally *do not use* -cpu host, precisely because
> it cannot support predictable migration.
>
> --
> David Gibson			| I'll have my music baroque, and my code
> david AT gibson.dropbear.id.au	| minimalist, thank you. NOT _the_ _other_
> 				| _way_ _around_!
> http://www.ozlabs.org/~dgibson
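---
The strawman check on machine B, reusing the hypothetical struct
ioasid_info from the sketch above, and assuming "compatible" simply
means that B offers at least the feature set that was frozen on A and
shipped along with the migration stream:

#include <stdbool.h>

/* @src describes what was picked on machine A, @dst what machine B's
 * host kernel reports. */
static bool migration_dest_compatible(const struct ioasid_info *src,
				      const struct ioasid_info *dst)
{
	/* B must support every page table format A may have exposed... */
	if ((src->pgtable_formats & dst->pgtable_formats) !=
	    src->pgtable_formats)
		return false;

	/* ...and every page granule the guest may already be using */
	if ((src->pgsize_bitmap & dst->pgsize_bitmap) !=
	    src->pgsize_bitmap)
		return false;

	/* The address width must not shrink, and PASID/PRI must not
	 * disappear */
	if (dst->addr_width < src->addr_width)
		return false;
	if ((src->pasid && !dst->pasid) || (src->pri && !dst->pri))
		return false;

	return true;
}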